diff --git a/ChangeLog b/ChangeLog index 5a145584ce..b58cb7ac2b 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,5 +1,13 @@ +279. [func] jerry + libdns++: Implement the AFSDB rrtype according to RFC1183. + (Trac #1114, git ce052cd92cd128ea3db5a8f154bd151956c2920c) + +278. [doc] jelte + Add logging configuration documentation to the guide. + (Trac #1011, git 2cc500af0929c1f268aeb6f8480bc428af70f4c4) + 277. [func] jerry - Implement the SRV rrtype according to RFC2782. + libdns++: Implement the SRV rrtype according to RFC2782. (Trac #1128, git 5fd94aa027828c50e63ae1073d9d6708e0a9c223) 276. [func] stephen diff --git a/README b/README index a6509da2d2..4b84a88939 100644 --- a/README +++ b/README @@ -8,10 +8,10 @@ for serving, maintaining, and developing DNS. BIND10-devel is new development leading up to the production BIND 10 release. It contains prototype code and experimental interfaces. Nevertheless it is ready to use now for testing the -new BIND 10 infrastructure ideas. The Year 2 milestones of the -five year plan are described here: +new BIND 10 infrastructure ideas. The Year 3 goals of the five +year plan are described here: - https://bind10.isc.org/wiki/Year2Milestones + http://bind10.isc.org/wiki/Year3Goals This release includes the bind10 master process, b10-msgq message bus, b10-auth authoritative DNS server (with SQLite3 and in-memory @@ -67,8 +67,8 @@ e.g., Operating-System specific tips: - FreeBSD - You may need to install a python binding for sqlite3 by hand. A - sample procedure is as follows: + You may need to install a python binding for sqlite3 by hand. + A sample procedure is as follows: - add the following to /etc/make.conf PYTHON_VERSION=3.1 - build and install the python binding from ports, assuming the top diff --git a/doc/guide/bind10-guide.html b/doc/guide/bind10-guide.html index 5754cf001e..94adf4aa92 100644 --- a/doc/guide/bind10-guide.html +++ b/doc/guide/bind10-guide.html @@ -1,24 +1,24 @@ -BIND 10 Guide

BIND 10 Guide

Administrator Reference for BIND 10

This is the reference guide for BIND 10 version - 20110519.

Abstract

BIND 10 is a Domain Name System (DNS) suite managed by +BIND 10 Guide

BIND 10 Guide

Administrator Reference for BIND 10

This is the reference guide for BIND 10 version + 20110705.

Abstract

BIND 10 is a Domain Name System (DNS) suite managed by Internet Systems Consortium (ISC). It includes DNS libraries and modular components for controlling authoritative and recursive DNS servers.

- This is the reference guide for BIND 10 version 20110519. + This is the reference guide for BIND 10 version 20110705. The most up-to-date version of this document, along with - other documents for BIND 10, can be found at http://bind10.isc.org/docs.


Chapter 1. Introduction

+ other documents for BIND 10, can be found at http://bind10.isc.org/docs.


Chapter 1. Introduction

BIND is the popular implementation of a DNS server, developer interfaces, and DNS tools. BIND 10 is a rewrite of BIND 9. BIND 10 is written in C++ and Python and provides a modular environment for serving and maintaining DNS.

Note

This guide covers the experimental prototype of - BIND 10 version 20110519. + BIND 10 version 20110705.

Note

BIND 10 provides an EDNS0- and DNSSEC-capable authoritative DNS server and a caching recursive name server which also provides forwarding. -

Supported Platforms

+

Supported Platforms

BIND 10 builds have been tested on Debian GNU/Linux 5, Ubuntu 9.10, NetBSD 5, Solaris 10, FreeBSD 7 and 8, and CentOS Linux 5.3. @@ -28,13 +28,15 @@ It is planned for BIND 10 to build, install and run on Windows and standard Unix-type platforms. -

Required Software

+

Required Software

BIND 10 requires Python 3.1. Later versions may work, but Python 3.1 is the minimum version which will work.

BIND 10 uses the Botan crypto library for C++. It requires - at least Botan version 1.8. To build BIND 10, install the - Botan libraries and development include headers. + at least Botan version 1.8. +

+ BIND 10 uses the log4cplus C++ logging library. It requires + at least log4cplus version 1.0.3.

The authoritative server requires SQLite 3.3.9 or newer. The b10-xfrin, b10-xfrout, @@ -136,7 +138,10 @@ and, of course, DNS. These include detailed developer documentation and code examples. -

Chapter 2. Installation

Building Requirements

Note

+

Chapter 2. Installation

Building Requirements

+ In addition to the run-time requirements, building BIND 10 + from source code requires various development include headers. +

Note

Some operating systems have split their distribution packages into a run-time and a development package. You will need to install the development package versions, which include header files and @@ -147,6 +152,11 @@

+ To build BIND 10, also install the Botan (at least version + 1.8) and the log4cplus (at least version 1.0.3) + development include headers. +

+ The Python Library and Python _sqlite3 module are required to enable the Xfrout and Xfrin support.

Note

@@ -156,7 +166,7 @@ Building BIND 10 also requires a C++ compiler and standard development headers, make, and pkg-config. BIND 10 builds have been tested with GCC g++ 3.4.3, 4.1.2, - 4.1.3, 4.2.1, 4.3.2, and 4.4.1. + 4.1.3, 4.2.1, 4.3.2, and 4.4.1; Clang++ 2.8; and Sun C++ 5.10.

Quick start

Note

This quickly covers the standard steps for installing and deploying BIND 10 as an authoritative name server using @@ -192,14 +202,14 @@ the Git code revision control system or as a downloadable tar file. It may also be available in pre-compiled ready-to-use packages from operating system vendors. -

Download Tar File

+

Download Tar File

Downloading a release tar file is the recommended method to obtain the source code.

The BIND 10 releases are available as tar file downloads from ftp://ftp.isc.org/isc/bind10/. Periodic development snapshots may also be available. -

Retrieve from Git

+

Retrieve from Git

Downloading this "bleeding edge" code is recommended only for developers or advanced users. Using development code in a production environment is not recommended. @@ -233,7 +243,7 @@ autoheader, automake, and related commands. -

Configure before the build

+

Configure before the build

BIND 10 uses the GNU Build System to discover build environment details. To generate the makefiles using the defaults, simply run: @@ -264,16 +274,16 @@

If the configure fails, it may be due to missing or old dependencies. -

Build

+

Build

After the configure step is complete, to build the executables from the C++ code and prepare the Python scripts, run:

$ make

-

Install

+

Install

To install the BIND 10 executables, support files, and documentation, run:

$ make install

-

Note

The install step may require superuser privileges.

Install Hierarchy

+

Note

The install step may require superuser privileges.

Install Hierarchy

The following is the layout of the complete BIND 10 installation:

  • bin/ — @@ -490,12 +500,12 @@ shutdown the details and relays (over a b10-msgq command channel) the configuration on to the specified module.

    -

Chapter 8. Authoritative Server

+

Chapter 8. Authoritative Server

The b10-auth is the authoritative DNS server. It supports EDNS0 and DNSSEC. It supports IPv6. Normally it is started by the bind10 master process. -

Server Configurations

+

Server Configurations

b10-auth is configured via the b10-cfgmgr configuration manager. The module name is Auth. @@ -515,7 +525,7 @@ This may be a temporary setting until then.

shutdown
Stop the authoritative DNS server.

-

Data Source Backends

Note

+

Data Source Backends

Note

For the development prototype release, b10-auth supports a SQLite3 data source backend and in-memory data source backend. @@ -529,7 +539,7 @@ This may be a temporary setting until then. The default is /usr/local/var/.) This data file location may be changed by defining the database_file configuration. -

Loading Master Zones Files

+

Loading Master Zones Files

RFC 1035 style DNS master zone files may be imported into a BIND 10 data source by using the b10-loadzone utility. @@ -607,7 +617,7 @@ This may be a temporary setting until then.

Note

Access control (such as allowing notifies) is not yet provided. The primary/secondary service is not yet complete. -

Chapter 12. Recursive Name Server

Table of Contents

Forwarding

+

Chapter 12. Recursive Name Server

Table of Contents

Forwarding

The b10-resolver process is started by bind10. @@ -636,7 +646,7 @@ This may be a temporary setting until then. > config set Resolver/listen_on [{ "address": "127.0.0.1", "port": 53 }] > config commit

-

Forwarding

+

Forwarding

To enable forwarding, the upstream address and port must be configured to forward queries to, such as: diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index 6a4218207a..021c593332 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -146,7 +146,7 @@ The processes started by the bind10 command have names starting with "b10-", including: - + @@ -241,7 +241,7 @@

Managing BIND 10 - + Once BIND 10 is running, a few commands are used to interact directly with the system: @@ -280,7 +280,7 @@ In addition, manual pages are also provided in the default installation. - + - + Starting BIND10 with <command>bind10</command> - BIND 10 provides the bind10 command which + BIND 10 provides the bind10 command which starts up the required processes. bind10 will also restart processes that exit unexpectedly. @@ -711,7 +711,7 @@ Debian and Ubuntu: After starting the b10-msgq communications channel, - bind10 connects to it, + bind10 connects to it, runs the configuration manager, and reads its own configuration. Then it starts the other modules. @@ -742,6 +742,16 @@ Debian and Ubuntu: get additional debugging or diagnostic output. + + + + If the setproctitle Python module is detected at start up, + the process names for the Python-based daemons will be renamed + to better identify them instead of just python. + This is not needed on some operating systems. + + +
@@ -769,7 +779,7 @@ Debian and Ubuntu: b10-msgq service. It listens on 127.0.0.1. - + The configuration data item is: - + database_file - + This is an optional string to define the path to find the SQLite3 database file. @@ -1120,7 +1130,7 @@ This may be a temporary setting until then. shutdown - + Stop the authoritative DNS server. @@ -1176,7 +1186,7 @@ This may be a temporary setting until then. $INCLUDE - + Loads an additional zone file. This may be recursive. @@ -1184,7 +1194,7 @@ This may be a temporary setting until then. $ORIGIN - + Defines the relative domain name. @@ -1192,7 +1202,7 @@ This may be a temporary setting until then. $TTL - + Defines the time-to-live value used for following records that don't include a TTL. @@ -1257,7 +1267,7 @@ TODO The current development release of BIND 10 only supports - AXFR. (IXFR is not supported.) + AXFR. (IXFR is not supported.) @@ -1304,7 +1314,7 @@ what if a NOTIFY is sent? The current development release of BIND 10 only supports - AXFR. (IXFR is not supported.) + AXFR. (IXFR is not supported.) Access control is not yet provided. @@ -1391,6 +1401,67 @@ what is XfroutClient xfr_client?? +
+ Access Control + + + The b10-resolver daemon only accepts + DNS queries from the localhost (127.0.0.1 and ::1). + The configuration may + be used to reject, drop, or allow specific IPs or networks. + This configuration list is first-match: the first + matching entry determines the action taken. + + + + The configuration's item may be + set to ACCEPT to allow the incoming query, + REJECT to respond with a DNS REFUSED return + code, or DROP to ignore the query without + any response (such as a blackhole). For more information, + see the respective debugging messages: RESOLVER_QUERY_ACCEPTED, + RESOLVER_QUERY_REJECTED, + and RESOLVER_QUERY_DROPPED. + + + + The required configuration's item is set + to an IPv4 or IPv6 address, an address with a network mask, or to + the special lowercase keywords any6 (for + any IPv6 address) or any4 (for any IPv4 + address). + + + + + + For example, to allow the 192.168.1.0/24 + network to use your recursive name server, at the + bindctl prompt run: + + + +> config add Resolver/query_acl +> config set Resolver/query_acl[2]/action "ACCEPT" +> config set Resolver/query_acl[2]/from "192.168.1.0/24" +> config commit + + + (Replace the 2 + as needed; run config show + Resolver/query_acl if needed.) + + + This prototype access control configuration + syntax may be changed. + +
+
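The first-match ACL semantics described above can be sketched in a few lines of Python. This is a hypothetical illustration of the behavior (the `action` and `from` item names follow the bindctl example; the default action when nothing matches is an assumption), not the actual b10-resolver implementation.

```python
import ipaddress

def match_acl(acl, client_ip):
    # Walk the ACL in order; the first matching entry wins (first-match).
    ip = ipaddress.ip_address(client_ip)
    for entry in acl:
        frm = entry["from"]
        # Special lowercase keywords for "any address" of one family.
        if frm == "any4" and ip.version == 4:
            return entry["action"]
        if frm == "any6" and ip.version == 6:
            return entry["action"]
        try:
            net = ipaddress.ip_network(frm, strict=False)
        except ValueError:
            continue
        # Only compare addresses of the same family.
        if ip.version == net.version and ip in net:
            return entry["action"]
    return "DROP"  # assumed default when no entry matches

# ACL mirroring the documented default (localhost) plus the example network.
acl = [
    {"action": "ACCEPT", "from": "127.0.0.1"},
    {"action": "ACCEPT", "from": "::1"},
    {"action": "ACCEPT", "from": "192.168.1.0/24"},
]
print(match_acl(acl, "192.168.1.10"))  # ACCEPT
print(match_acl(acl, "10.0.0.1"))      # DROP
```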
Forwarding @@ -1470,61 +1541,679 @@ then change those defaults with config set Resolver/forward_addresses[0]/address Logging - +
+ Logging configuration - - Each message written by BIND 10 to the configured logging destinations - comprises a number of components that identify the origin of the - message and, if the message indicates a problem, information about the - problem that may be useful in fixing it. - + - - Consider the message below logged to a file: - 2011-06-15 13:48:22.034 ERROR [b10-resolver.asiolink] - ASIODNS_OPENSOCK error 111 opening TCP socket to 127.0.0.1(53) - + The logging system in BIND 10 is configured through the + Logging module. All BIND 10 modules will look at the + configuration in Logging to see what should be logged and + to where. - - Note: the layout of messages written to the system logging - file (syslog) may be slightly different. This message has - been split across two lines here for display reasons; in the - logging file, it will appear on one line.) - + - - The log message comprises a number of components: + + +
+ Loggers + + + + Within BIND 10, a message is logged through a component + called a "logger". Different parts of BIND 10 log messages + through different loggers, and each logger can be configured + independently of one another. + + + + + + In the Logging module, you can specify the configuration + for zero or more loggers; any that are not specified will + take appropriate default values. + + + + + + The three most important elements of a logger configuration + are the (the component that is + generating the messages), the + (what to log), and the + (where to log). + + + +
+ name (string) + + + Each logger in the system has a name, the name being that + of the component using it to log messages. For instance, + if you want to configure logging for the resolver module, + you add an entry for a logger named Resolver. This + configuration will then be used by the loggers in the + Resolver module, and all the libraries used by it. + + + + + + + If you want to specify logging for one specific library + within the module, you set the name to + module.library. For example, the + logger used by the nameserver address store component + has the full name of Resolver.nsas. If + there is no entry in Logging for a particular library, + it will use the configuration given for the module. + + + + + + + + + + To illustrate this, suppose you want the cache library + to log messages of severity DEBUG, and the rest of the + resolver code to log messages of severity INFO. To achieve + this you specify two loggers, one with the name + Resolver and severity INFO, and one with + the name Resolver.cache with severity + DEBUG. As there are no entries for other libraries (e.g. + the nsas), they will use the configuration for the module + (Resolver), thus giving the desired behavior. + + + + + + One special case is that of a module name of * + (asterisk), which is interpreted as any + module. You can set global logging options by using this, + including setting the logging configuration for a library + that is used by multiple modules (e.g. *.config + specifies the configuration library code in whatever + module is using it). + + + + + + If there are multiple logger specifications in the + configuration that might match a particular logger, the + specification with the more specific logger name takes + precedence. For example, if there are entries for + both * and Resolver, the + resolver module — and all libraries it uses — + will log messages according to the configuration in the + second entry (Resolver).
All other modules + will use the configuration of the first entry + (*). If there was also a configuration + entry for Resolver.cache, the cache library + within the resolver would use that in preference to the + entry for Resolver. + + + + + + One final note about the naming. When specifying the + module name within a logger, use the name of the module + as specified in bindctl, e.g. + Resolver for the resolver module, + Xfrout for the xfrout module, etc. When + the message is logged, the message will include the name + of the logger generating the message, but with the module + name replaced by the name of the process implementing + the module (so for example, a message generated by the + Auth.cache logger will appear in the output + with a logger name of b10-auth.cache). + + + +
+ +
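The precedence rule above (most specific configured name wins, with `*` as the global fallback) can be sketched as follows. This is an illustrative model of the documented lookup order, not the actual BIND 10 logging code; the helper name is hypothetical.

```python
def effective_config(configs, logger_name):
    # configs maps configured logger names (e.g. "*", "Resolver",
    # "Resolver.cache") to their settings. Try the full name first,
    # then progressively less specific names, then the "*" fallback.
    candidates = [logger_name]
    while "." in candidates[-1]:
        candidates.append(candidates[-1].rsplit(".", 1)[0])
    candidates.append("*")
    for name in candidates:
        if name in configs:
            return configs[name]
    return None  # assumed: no configuration found at all

# The example from the text: cache at DEBUG, rest of the resolver at INFO.
configs = {"*": "INFO", "Resolver": "INFO", "Resolver.cache": "DEBUG"}
print(effective_config(configs, "Resolver.cache"))  # DEBUG
print(effective_config(configs, "Resolver.nsas"))   # INFO (module entry)
print(effective_config(configs, "Xfrout"))          # INFO (global "*")
```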
+ severity (string) + + + + This specifies the category of messages logged. + Each message is logged with an associated severity which + may be one of the following (in descending order of + severity): + + + + + FATAL + + + + ERROR + + + + WARN + + + + INFO + + + + DEBUG + + + + + + When the severity of a logger is set to one of these + values, it will only log messages of that severity, and + the severities above it. The severity may also be set to + NONE, in which case all messages from that logger are + inhibited. + + + + + +
+ +
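The severity filtering described above (a logger emits messages at its configured severity and above; NONE inhibits everything) can be sketched as:

```python
# Severities in ascending order of importance, per the documentation.
LEVELS = ["DEBUG", "INFO", "WARN", "ERROR", "FATAL"]

def should_log(logger_severity, message_severity):
    # A sketch of the documented rule, not the actual BIND 10 code:
    # NONE suppresses all messages; otherwise log the configured
    # severity and anything more severe.
    if logger_severity == "NONE":
        return False
    return LEVELS.index(message_severity) >= LEVELS.index(logger_severity)

print(should_log("WARN", "ERROR"))  # True
print(should_log("WARN", "INFO"))   # False
print(should_log("NONE", "FATAL"))  # False
```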
+ output_options (list) + + + + Each logger can have zero or more + . These specify where log + messages are sent to. These are explained in detail below. + + + + + + The other options for a logger are: + + + +
+ +
+ debuglevel (integer) + + + + When a logger's severity is set to DEBUG, this value + specifies what debug messages should be printed. It ranges + from 0 (least verbose) to 99 (most verbose). + + + + + + + + If severity for the logger is not DEBUG, this value is ignored. + + + +
+ +
+ additive (true or false) + + + + If this is true, the from + the parent will be used. For example, if there are two + loggers configured; Resolver and + Resolver.cache, and + is true in the second, it will write the log messages + not only to the destinations specified for + Resolver.cache, but also to the destinations + as specified in the in + the logger named Resolver. + + + + + +
+ +
+ +
+ Output Options + + + + The main settings for an output option are the + and a value called + , the meaning of which depends on + the destination that is set. + + + +
+ destination (string) + + + + The destination is the type of output. It can be one of: + + + + + + + console + + + + file + + + + syslog + + + + +
+ +
+ output (string) + + + + Depending on what is set as the output destination, this + value is interpreted as follows: + + - - 2011-06-15 13:48:22.034 - - The date and time at which the message was generated. - - - - ERROR - - The severity of the message. - - + + is console + + + The value of output must be one of stdout + (messages printed to standard output) or + stderr (messages printed to standard + error). + + + - - [b10-resolver.asiolink] - - The source of the message. This comprises two components: - the BIND 10 process generating the message (in this - case, b10-resolver) and the module - within the program from which the message originated - (which in the example is the asynchronous I/O link - module, asiolink). - - + + is file + + + The value of output is interpreted as a file name; + log messages will be appended to this file. + + + - - ASIODNS_OPENSOCK - + + is syslog + + + The value of output is interpreted as the + syslog facility (e.g. + local0) that should be used + for log messages. + + + + + + + + + The other options for are: + + + +
+ flush (true or false) + + + Flush buffers after each log message. Doing this will + reduce performance but will ensure that if the program + terminates abnormally, all messages up to the point of + termination are output. + +
+ +
+ maxsize (integer) + + + Only relevant when destination is file, this is the maximum + file size of output files in bytes. When the maximum + size is reached, the file is renamed and a new file opened. + (For example, a ".1" is appended to the name — + if a ".1" file exists, it is renamed ".2", + etc.) + + + + If this is 0, no maximum file size is used. + +
+ +
+ maxver (integer) + + + Maximum number of old log files to keep around when + rolling the output file. Only relevant when + is file. + + +
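The rename-and-shift rotation sketched by the maxsize and maxver descriptions above might look like the following. This is a hypothetical illustration of the naming scheme (".1" newest, older versions shifted up, capped at maxver), not the actual log4cplus rotation code.

```python
import os
import tempfile

def rotate(path, maxver):
    # Drop the oldest version if keeping it would exceed maxver.
    oldest = path + "." + str(maxver)
    if os.path.exists(oldest):
        os.remove(oldest)
    # Shift existing versions up: ".1" -> ".2", ".2" -> ".3", ...
    for n in range(maxver - 1, 0, -1):
        src = path + "." + str(n)
        if os.path.exists(src):
            os.rename(src, path + "." + str(n + 1))
    # The live file becomes ".1"; a new live file would then be opened.
    if os.path.exists(path):
        os.rename(path, path + ".1")

# Demonstration in a scratch directory.
d = tempfile.mkdtemp()
p = os.path.join(d, "b10.log")
for name in (p, p + ".1"):
    with open(name, "w") as f:
        f.write("x")
rotate(p, 8)
print(sorted(os.listdir(d)))  # ['b10.log.1', 'b10.log.2']
```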
+ +
+ +
+ +
+ Example session + + + + In this example we want to set the global logging to + write to the file /var/log/my_bind10.log, + at severity WARN. We want the authoritative server to + log at DEBUG with debuglevel 40, to a different file + (/tmp/debug_messages). + + + + + + Start bindctl. + + + + + + ["login success "] +> config show Logging +Logging/loggers [] list + + + + + + + By default, no specific loggers are configured, in which + case the severity defaults to INFO and the output is + written to stderr. + + + + + + Let's first add a default logger: + + + + + + + > config add Logging/loggers +> config show Logging +Logging/loggers/ list (modified) + + + + + + + The loggers value line changed to indicate that it is no + longer an empty list: + + + + + + > config show Logging/loggers +Logging/loggers[0]/name "" string (default) +Logging/loggers[0]/severity "INFO" string (default) +Logging/loggers[0]/debuglevel 0 integer (default) +Logging/loggers[0]/additive false boolean (default) +Logging/loggers[0]/output_options [] list (default) + + + + + + + The name is mandatory, so we must set it. We will also + change the severity as well. Let's start with the global + logger. + + + + + + > config set Logging/loggers[0]/name * +> config set Logging/loggers[0]/severity WARN +> config show Logging/loggers +Logging/loggers[0]/name "*" string (modified) +Logging/loggers[0]/severity "WARN" string (modified) +Logging/loggers[0]/debuglevel 0 integer (default) +Logging/loggers[0]/additive false boolean (default) +Logging/loggers[0]/output_options [] list (default) + + + + + + + Of course, we need to specify where we want the log + messages to go, so we add an entry for an output option. 
+ + + + + + > config add Logging/loggers[0]/output_options +> config show Logging/loggers[0]/output_options +Logging/loggers[0]/output_options[0]/destination "console" string (default) +Logging/loggers[0]/output_options[0]/output "stdout" string (default) +Logging/loggers[0]/output_options[0]/flush false boolean (default) +Logging/loggers[0]/output_options[0]/maxsize 0 integer (default) +Logging/loggers[0]/output_options[0]/maxver 0 integer (default) + + + + + + + + These aren't the values we are looking for. + + + + + + > config set Logging/loggers[0]/output_options[0]/destination file +> config set Logging/loggers[0]/output_options[0]/output /var/log/bind10.log +> config set Logging/loggers[0]/output_options[0]/maxsize 30000 +> config set Logging/loggers[0]/output_options[0]/maxver 8 + + + + + + + Which would make the entire configuration for this logger + look like: + + + + + + > config show all Logging/loggers +Logging/loggers[0]/name "*" string (modified) +Logging/loggers[0]/severity "WARN" string (modified) +Logging/loggers[0]/debuglevel 0 integer (default) +Logging/loggers[0]/additive false boolean (default) +Logging/loggers[0]/output_options[0]/destination "file" string (modified) +Logging/loggers[0]/output_options[0]/output "/var/log/bind10.log" string (modified) +Logging/loggers[0]/output_options[0]/flush false boolean (default) +Logging/loggers[0]/output_options[0]/maxsize 30000 integer (modified) +Logging/loggers[0]/output_options[0]/maxver 8 integer (modified) + + + + + + + That looks OK, so let's commit it before we add the + configuration for the authoritative server's logger. + + + + + + > config commit + + + + + + Now that we have set it, and checked each value along + the way, adding a second entry is quite similar. 
+ + + + + + > config add Logging/loggers +> config set Logging/loggers[1]/name Auth +> config set Logging/loggers[1]/severity DEBUG +> config set Logging/loggers[1]/debuglevel 40 +> config add Logging/loggers[1]/output_options +> config set Logging/loggers[1]/output_options[0]/destination file +> config set Logging/loggers[1]/output_options[0]/output /tmp/auth_debug.log +> config commit + + + + + + + And that's it. Once we have found whatever it was we + needed the debug messages for, we can simply remove the + second logger to let the authoritative server use the + same settings as the rest. + + + + + + > config remove Logging/loggers[1] +> config commit + + + + + + + And every module will now be using the values from the + logger named *. + + + +
+ +
+ +
+ Logging Message Format + + + Each message written by BIND 10 to the configured logging + destinations comprises a number of components that identify + the origin of the message and, if the message indicates + a problem, information about the problem that may be + useful in fixing it. + + + + Consider the message below logged to a file: + 2011-06-15 13:48:22.034 ERROR [b10-resolver.asiolink] + ASIODNS_OPENSOCK error 111 opening TCP socket to 127.0.0.1(53) + + + + Note: the layout of messages written to the system logging + file (syslog) may be slightly different. This message has + been split across two lines here for display reasons; in the + logging file, it will appear on one line.) + + + + The log message comprises a number of components: + + + + 2011-06-15 13:48:22.034 + + + The date and time at which the message was generated. + + + + + ERROR + + The severity of the message. + + + + + [b10-resolver.asiolink] + + The source of the message. This comprises two components: + the BIND 10 process generating the message (in this + case, b10-resolver) and the module + within the program from which the message originated + (which in the example is the asynchronous I/O link + module, asiolink). + + + + + ASIODNS_OPENSOCK + The message identification. Every message in BIND 10 has a unique identification, which can be used as an index into the () from which more information can be obtained. - - + + - - error 111 opening TCP socket to 127.0.0.1(53) - - A brief description of the cause of the problem. Within this text, - information relating to the condition that caused the message to - be logged will be included. In this example, error number 111 - (an operating system-specific error number) was encountered when - trying to open a TCP connection to port 53 on the local system - (address 127.0.0.1). The next step would be to find out the reason - for the failure by consulting your system's documentation to - identify what error number 111 means. 
- - - + + error 111 opening TCP socket to 127.0.0.1(53) + + A brief description of the cause of the problem. + Within this text, information relating to the condition + that caused the message to be logged will be included. + In this example, error number 111 (an operating + system-specific error number) was encountered when + trying to open a TCP connection to port 53 on the + local system (address 127.0.0.1). The next step + would be to find out the reason for the failure by + consulting your system's documentation to identify + what error number 111 means. + + + + + +
-
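The components of the example log line above can be pulled apart mechanically. The regular expression below is an assumption based on that one example (file-destination layout), not the logger's actual format string.

```python
import re

# Fields, in order: timestamp, severity, [process.module], message ID, text.
LOG_RE = re.compile(
    r"^(?P<when>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) "
    r"(?P<severity>[A-Z]+) "
    r"\[(?P<process>[^.\]]+)\.(?P<module>[^\]]+)\] "
    r"(?P<msgid>[A-Z0-9_]+) "
    r"(?P<text>.*)$"
)

line = ("2011-06-15 13:48:22.034 ERROR [b10-resolver.asiolink] "
        "ASIODNS_OPENSOCK error 111 opening TCP socket to 127.0.0.1(53)")
m = LOG_RE.match(line)
print(m.group("severity"), m.group("process"), m.group("msgid"))
# ERROR b10-resolver ASIODNS_OPENSOCK
```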
diff --git a/doc/guide/bind10-messages.html b/doc/guide/bind10-messages.html index b075e96eb3..ecebcd825c 100644 --- a/doc/guide/bind10-messages.html +++ b/doc/guide/bind10-messages.html @@ -1,10 +1,10 @@ -BIND 10 Messages Manual

BIND 10 Messages Manual

This is the messages manual for BIND 10 version - 20110519.

Abstract

BIND 10 is a Domain Name System (DNS) suite managed by +BIND 10 Messages Manual

BIND 10 Messages Manual

This is the messages manual for BIND 10 version + 20110705.

Abstract

BIND 10 is a Domain Name System (DNS) suite managed by Internet Systems Consortium (ISC). It includes DNS libraries and modular components for controlling authoritative and recursive DNS servers.

- This is the messages manual for BIND 10 version 20110519. + This is the messages manual for BIND 10 version 20110705. The most up-to-date version of this document, along with other documents for BIND 10, can be found at http://bind10.isc.org/docs. @@ -26,38 +26,337 @@ For information on configuring and using BIND 10 logging, refer to the BIND 10 Guide.

Chapter 2. BIND 10 Messages

-

ASIODNS_FETCHCOMP upstream fetch to %1(%2) has now completed

-A debug message, this records the the upstream fetch (a query made by the +

ASIODNS_FETCH_COMPLETED upstream fetch to %1(%2) has now completed

+A debug message, this records that the upstream fetch (a query made by the resolver on behalf of its client) to the specified address has completed. -

ASIODNS_FETCHSTOP upstream fetch to %1(%2) has been stopped

+

ASIODNS_FETCH_STOPPED upstream fetch to %1(%2) has been stopped

An external component has requested the halting of an upstream fetch. This is an allowed operation, and the message should only appear if debug is enabled. -

ASIODNS_OPENSOCK error %1 opening %2 socket to %3(%4)

+

ASIODNS_OPEN_SOCKET error %1 opening %2 socket to %3(%4)

+The asynchronous I/O code encountered an error when trying to open a socket of the specified protocol in order to send a message to the target address. -The the number of the system error that cause the problem is given in the +The number of the system error that caused the problem is given in the message. -

ASIODNS_RECVSOCK error %1 reading %2 data from %3(%4)

-The asynchronous I/O code encountered an error when trying read data from -the specified address on the given protocol. The the number of the system +

ASIODNS_READ_DATA error %1 reading %2 data from %3(%4)

+The asynchronous I/O code encountered an error when trying to read data from +the specified address on the given protocol. The number of the system error that caused the problem is given in the message. -

ASIODNS_RECVTMO receive timeout while waiting for data from %1(%2)

+

ASIODNS_READ_TIMEOUT receive timeout while waiting for data from %1(%2)

An upstream fetch from the specified address timed out. This may happen for any number of reasons and is most probably a problem at the remote server or a problem on the network. The message will only appear if debug is enabled. -

ASIODNS_SENDSOCK error %1 sending data using %2 to %3(%4)

+

ASIODNS_SEND_DATA error %1 sending data using %2 to %3(%4)

The asynchronous I/O code encountered an error when trying to send data to the specified address on the given protocol. The number of the system error that caused the problem is given in the message. -

ASIODNS_UNKORIGIN unknown origin for ASIO error code %1 (protocol: %2, address %3)

-This message should not appear and indicates an internal error if it does. -Please enter a bug report. -

ASIODNS_UNKRESULT unknown result (%1) when IOFetch::stop() was executed for I/O to %2(%3)

-The termination method of the resolver's upstream fetch class was called with -an unknown result code (which is given in the message). This message should -not appear and may indicate an internal error. Please enter a bug report. +

ASIODNS_UNKNOWN_ORIGIN unknown origin for ASIO error code %1 (protocol: %2, address %3)

+An internal consistency check on the origin of a message from the +asynchronous I/O module failed. This may indicate an internal error; +please submit a bug report. +

ASIODNS_UNKNOWN_RESULT unknown result (%1) when IOFetch::stop() was executed for I/O to %2(%3)

+An internal error indicating that the termination method of the resolver's +upstream fetch class was called with an unknown result code (which is +given in the message). Please submit a bug report. +

AUTH_AXFR_ERROR error handling AXFR request: %1

+This is a debug message produced by the authoritative server when it +has encountered an error processing an AXFR request. The message gives +the reason for the error, and the server will return a SERVFAIL code to +the sender. +

AUTH_AXFR_UDP AXFR query received over UDP

+This is a debug message output when the authoritative server has received +an AXFR query over UDP. Use of UDP for AXFRs is not permitted by the +protocol, so the server will return a FORMERR error to the sender. +

AUTH_COMMAND_FAILED execution of command channel instruction '%1' failed: %2

+Execution of the specified command by the authoritative server failed. The +message contains the reason for the failure. +

AUTH_CONFIG_CHANNEL_CREATED configuration session channel created

+This is a debug message indicating that the authoritative server has +created the channel to the configuration manager. It is issued during +server startup and is an indication that the initialization is proceeding +normally. +

AUTH_CONFIG_CHANNEL_ESTABLISHED configuration session channel established

+This is a debug message indicating that the authoritative server has +established communication with the configuration manager over the +previously-created channel. It is issued during server startup and is an +indication that the initialization is proceeding normally. +

AUTH_CONFIG_CHANNEL_STARTED configuration session channel started

+This is a debug message, issued when the authoritative server has +posted a request to be notified when new configuration information is +available. It is issued during server startup and is an indication that +the initialization is proceeding normally. +

AUTH_CONFIG_LOAD_FAIL load of configuration failed: %1

+An attempt to configure the server with information from the configuration +database during the startup sequence has failed. (The reason for +the failure is given in the message.) The server will continue its +initialization although it may not be configured in the desired way. +

AUTH_CONFIG_UPDATE_FAIL update of configuration failed: %1

+An attempt to update the configuration of the server with information +from the configuration database has failed, the reason being given in +the message. +

AUTH_DATA_SOURCE data source database file: %1

+This is a debug message produced by the authoritative server when it accesses a +database data source, listing the file that is being accessed. +

AUTH_DNS_SERVICES_CREATED DNS services created

+This is a debug message indicating that the component that will handle +incoming queries for the authoritative server (DNSServices) has been +successfully created. It is issued during server startup and is an indication +that the initialization is proceeding normally. +

AUTH_HEADER_PARSE_FAIL unable to parse header in received DNS packet: %1

+This is a debug message, generated by the authoritative server when an +attempt to parse the header of a received DNS packet has failed. (The +reason for the failure is given in the message.) The server will drop the +packet. +

AUTH_LOAD_TSIG loading TSIG keys

+This is a debug message indicating that the authoritative server +has requested the keyring holding TSIG keys from the configuration +database. It is issued during server startup and is an indication that the +initialization is proceeding normally. +

AUTH_LOAD_ZONE loaded zone %1/%2

+This debug message is issued during the processing of the 'loadzone' command +when the authoritative server has successfully loaded the named zone of the +named class. +

AUTH_MEM_DATASRC_DISABLED memory data source is disabled for class %1

+This is a debug message reporting that the authoritative server has +discovered that the memory data source is disabled for the given class. +

AUTH_MEM_DATASRC_ENABLED memory data source is enabled for class %1

+This is a debug message reporting that the authoritative server has +discovered that the memory data source is enabled for the given class. +

AUTH_NOTIFY_QUESTIONS invalid number of questions (%1) in incoming NOTIFY

+This debug message is logged by the authoritative server when it receives +a NOTIFY packet that contains zero or more than one question. (A valid +NOTIFY packet contains one question.) The server will return a FORMERR +error to the sender. +

AUTH_NOTIFY_RRTYPE invalid question RR type (%1) in incoming NOTIFY

+This debug message is logged by the authoritative server when it receives +a NOTIFY packet that contains an RR type other than SOA in the +question section. (The RR type received is included in the message.) The +server will return a FORMERR error to the sender. +
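The two NOTIFY checks described above (exactly one question, and that question's RR type must be SOA) can be sketched together. This is a hypothetical illustration rather than BIND 10 code; only the constants (SOA is RR type 6, FORMERR is RCODE 1) come from the DNS specifications.

```python
# Hypothetical sketch of the NOTIFY validation described above: a valid
# NOTIFY carries exactly one question, and its RR type must be SOA.

QTYPE_SOA = 6
RCODE_NOERROR = 0
RCODE_FORMERR = 1

def check_notify(questions):
    """questions: list of (qname, qtype) tuples from the NOTIFY packet."""
    if len(questions) != 1:        # AUTH_NOTIFY_QUESTIONS case
        return RCODE_FORMERR
    _, qtype = questions[0]
    if qtype != QTYPE_SOA:         # AUTH_NOTIFY_RRTYPE case
        return RCODE_FORMERR
    return RCODE_NOERROR
```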

AUTH_NO_STATS_SESSION session interface for statistics is not available

+The authoritative server had no session with the statistics module at the +time it attempted to send it data: the attempt has been abandoned. This +could be an error in configuration. +

AUTH_NO_XFRIN received NOTIFY but XFRIN session is not running

+This is a debug message produced by the authoritative server when it receives +a NOTIFY packet but the XFRIN process is not running. The packet will be +dropped and nothing returned to the sender. +

AUTH_PACKET_PARSE_ERROR unable to parse received DNS packet: %1

+This is a debug message, generated by the authoritative server when an +attempt to parse a received DNS packet has failed due to something other +than a protocol error. The reason for the failure is given in the message; +the server will return a SERVFAIL error code to the sender. +

AUTH_PACKET_PROTOCOL_ERROR DNS packet protocol error: %1. Returning %2

+This is a debug message, generated by the authoritative server when an +attempt to parse a received DNS packet has failed due to a protocol error. +The reason for the failure is given in the message, as is the error code +that will be returned to the sender. +

AUTH_PACKET_RECEIVED message received:\n%1

+This is a debug message output by the authoritative server when it +receives a valid DNS packet. +

+Note: This message includes the packet received, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. +

AUTH_PROCESS_FAIL message processing failure: %1

+This message is generated by the authoritative server when it has +encountered an internal error whilst processing a received packet: +the cause of the error is included in the message. +

+The server will return a SERVFAIL error code to the sender of the packet. +However, this message indicates a potential error in the server. +Please open a bug ticket for this issue. +

AUTH_RECEIVED_COMMAND command '%1' received

+This is a debug message issued when the authoritative server has received +a command on the command channel. +

AUTH_RECEIVED_SENDSTATS command 'sendstats' received

+This is a debug message issued when the authoritative server has received +a command from the statistics module to send it data. The 'sendstats' +command is handled differently to other commands, which is why the debug +message associated with it has its own code. +

AUTH_RESPONSE_RECEIVED received response message, ignoring

+This debug message is output if the authoritative server receives a DNS +packet with the QR bit set, i.e. a DNS response. The server ignores the +packet, as it only responds to question packets. +

AUTH_SEND_ERROR_RESPONSE sending an error response (%1 bytes):\n%2

+This is a debug message recording that the authoritative server is sending +an error response to the originator of the query. A previous message will +have recorded details of the failure. +

+Note: This message includes the packet sent, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. +

AUTH_SEND_NORMAL_RESPONSE sending a response (%1 bytes):\n%2

+This is a debug message recording that the authoritative server is sending +a response to the originator of a query. +

+Note: This message includes the packet sent, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. +

AUTH_SERVER_CREATED server created

+An informational message indicating that the authoritative server process has +been created and is initializing. The AUTH_SERVER_STARTED message will be +output when initialization has successfully completed and the server starts +accepting queries. +

AUTH_SERVER_FAILED server failed: %1

+The authoritative server has encountered a fatal error and is terminating. The +reason for the failure is included in the message. +

AUTH_SERVER_STARTED server started

+Initialization of the authoritative server has completed successfully +and it is entering the main loop, waiting for queries to arrive. +

AUTH_SQLITE3 nothing to do for loading sqlite3

+This is a debug message indicating that the authoritative server has +found that the data source it is loading is an SQLite3 data source, +so no further validation is needed. +

AUTH_STATS_CHANNEL_CREATED STATS session channel created

+This is a debug message indicating that the authoritative server has +created a channel to the statistics process. It is issued during server +startup and is an indication that the initialization is proceeding normally. +

AUTH_STATS_CHANNEL_ESTABLISHED STATS session channel established

+This is a debug message indicating that the authoritative server +has established communication over the previously created statistics +channel. It is issued during server startup and is an indication that the +initialization is proceeding normally. +

AUTH_STATS_COMMS communication error in sending statistics data: %1

+An error was encountered when the authoritative server tried to send data +to the statistics daemon. The message includes additional information +describing the reason for the failure. +

AUTH_STATS_TIMEOUT timeout while sending statistics data: %1

+The authoritative server sent data to the statistics daemon but received +no acknowledgement within the specified time. The message includes +additional information describing the reason for the failure. +

AUTH_STATS_TIMER_DISABLED statistics timer has been disabled

+This is a debug message indicating that the statistics timer has been +disabled in the authoritative server and no statistics information is +being produced. +

AUTH_STATS_TIMER_SET statistics timer set to %1 second(s)

+This is a debug message indicating that the statistics timer has been +enabled and that the authoritative server will produce statistics data +at the specified interval. +

AUTH_UNSUPPORTED_OPCODE unsupported opcode: %1

+This is a debug message, produced when a received DNS packet being +processed by the authoritative server has been found to contain an +unsupported opcode. (The opcode is included in the message.) The server +will return an error code of NOTIMPL to the sender. +

AUTH_XFRIN_CHANNEL_CREATED XFRIN session channel created

+This is a debug message indicating that the authoritative server has +created a channel to the XFRIN (Transfer-in) process. It is issued +during server startup and is an indication that the initialization is +proceeding normally. +

AUTH_XFRIN_CHANNEL_ESTABLISHED XFRIN session channel established

+This is a debug message indicating that the authoritative server has +established communication over the previously-created channel to the +XFRIN (Transfer-in) process. It is issued during server startup and is an +indication that the initialization is proceeding normally. +

AUTH_ZONEMGR_COMMS error communicating with zone manager: %1

+This is a debug message output during the processing of a NOTIFY request. +An error (listed in the message) has been encountered whilst communicating +with the zone manager. The NOTIFY request will not be honored. +

AUTH_ZONEMGR_ERROR received error response from zone manager: %1

+This is a debug message output during the processing of a NOTIFY +request. The zone manager component has been informed of the request, +but has returned an error response (which is included in the message). The +NOTIFY request will not be honored. +

CC_ASYNC_READ_FAILED asynchronous read failed

+This marks a low-level error: an attempt to read data from the message +queue daemon asynchronously failed because the ASIO library returned an +error. +

CC_CONN_ERROR error connecting to message queue (%1)

+It is impossible to reach the message queue daemon for the reason given. +The program that reported this is unlikely to be able to continue running, +as communication with the rest of BIND 10 is vital for all components. +

CC_DISCONNECT disconnecting from message queue daemon

+The library is disconnecting from the message queue daemon. This debug message +indicates that the program is trying to shut down gracefully. +

CC_ESTABLISH trying to establish connection with message queue daemon at %1

+This debug message indicates that the command channel library is about to +connect to the message queue daemon, which should be listening on the UNIX-domain +socket listed in the output. +

CC_ESTABLISHED successfully connected to message queue daemon

+This debug message indicates that the connection was successfully made; +this message should follow CC_ESTABLISH. +

CC_GROUP_RECEIVE trying to receive a message

+Debug message, noting that a message is expected to come over the command +channel. +

CC_GROUP_RECEIVED message arrived ('%1', '%2')

+Debug message, noting that we successfully received a message (its envelope and +payload listed). This follows CC_GROUP_RECEIVE, but might happen some time +later, depending on whether we waited for it or just polled. +

CC_GROUP_SEND sending message '%1' to group '%2'

+Debug message, we're about to send a message over the command channel. +

CC_INVALID_LENGTHS invalid length parameters (%1, %2)

+This happens when garbage comes over the command channel or some kind of +confusion happens in the program. The data received from the socket make no +sense when interpreted as message lengths. The first value is the total +length of the message, the second is the length of the header. The header +and its length field (2 bytes) are counted in the total length. +
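The framing described above can be sketched as a length sanity check. This is a hypothetical illustration, not the msgq implementation; the 4-byte width of the total-length field and the big-endian byte order are assumptions for the example, while the 2-byte header-length field and the rule that it is counted inside the total length come from the description.

```python
import struct

# Hypothetical sketch of the length validation described above. Framing
# assumed: [4-byte total length][2-byte header length][header][payload],
# where the total length covers the 2-byte header-length field, the
# header, and the payload, so header_len + 2 can never exceed total_len.

def split_message(data):
    """Split a framed message into (header, payload), or raise ValueError."""
    total_len, header_len = struct.unpack(">IH", data[:6])
    if header_len + 2 > total_len or total_len != len(data) - 4:
        # CC_INVALID_LENGTHS case: the two lengths are inconsistent
        raise ValueError("invalid length parameters (%d, %d)"
                         % (total_len, header_len))
    header = data[6:6 + header_len]
    payload = data[6 + header_len:]
    return header, payload
```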

CC_LENGTH_NOT_READY length not ready

+There should be data representing the length of the message on the socket, +but it is not there. +

CC_NO_MESSAGE no message ready to be received yet

+The program polled for incoming messages, but there was no message waiting. +This is a debug message which may happen only after CC_GROUP_RECEIVE. +

CC_NO_MSGQ unable to connect to message queue (%1)

+It isn't possible to connect to the message queue daemon, for the reason +listed. It is unlikely any program will be able to continue without this +communication. +

CC_READ_ERROR error reading data from command channel (%1)

+A low level error happened when the library tried to read data from the +command channel socket. The reason is listed. +

CC_READ_EXCEPTION error reading data from command channel (%1)

+We received an exception while trying to read data from the command +channel socket. The reason is listed. +

CC_REPLY replying to message from '%1' with '%2'

+Debug message, noting we're sending a response to the original message +with the given envelope. +

CC_SET_TIMEOUT setting timeout to %1ms

+Debug message. A timeout for which the program is willing to wait for a reply +is being set. +

CC_START_READ starting asynchronous read

+Debug message. From now on, when a message (or command) comes, it'll wake the +program and the library will automatically pass it over to the correct place. +

CC_SUBSCRIBE subscribing to communication group %1

+Debug message. The program wants to receive messages addressed to this group. +

CC_TIMEOUT timeout reading data from command channel

+The program waited too long for data from the command channel (usually when it +sent a query to a different program and it did not answer for whatever reason). +

CC_UNSUBSCRIBE unsubscribing from communication group %1

+Debug message. The program no longer wants to receive messages addressed to +this group. +

CC_WRITE_ERROR error writing data to command channel (%1)

+A low level error happened when the library tried to write data to the command +channel socket. +

CC_ZERO_LENGTH invalid message length (0)

+The library received a message length of zero, which makes no sense, since +all messages must contain at least the envelope. +

CFGMGR_AUTOMATIC_CONFIG_DATABASE_UPDATE Updating configuration database from version %1 to %2

+An older version of the configuration database has been found, from which +there was an automatic upgrade path to the current version. These changes +are now applied, and no action from the administrator is necessary. +

CFGMGR_CC_SESSION_ERROR Error connecting to command channel: %1

+The configuration manager daemon was unable to connect to the messaging +system. The most likely cause is that msgq is not running. +

CFGMGR_DATA_READ_ERROR error reading configuration database from disk: %1

+There was a problem reading the persistent configuration data as stored +on disk. The file may be corrupted, or it is of a version from where +there is no automatic upgrade path. The file needs to be repaired or +removed. The configuration manager daemon will now shut down. +

CFGMGR_IOERROR_WHILE_WRITING_CONFIGURATION Unable to write configuration file; configuration not stored: %1

+There was an IO error from the system while the configuration manager +was trying to write the configuration database to disk. The specific +error is given. The most likely cause is that the directory where +the file is stored does not exist, or is not writable. The updated +configuration is not stored. +

CFGMGR_OSERROR_WHILE_WRITING_CONFIGURATION Unable to write configuration file; configuration not stored: %1

+There was an OS error from the system while the configuration manager +was trying to write the configuration database to disk. The specific +error is given. The most likely cause is that the system does not have +write access to the configuration database file. The updated +configuration is not stored. +

CFGMGR_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down

+There was a keyboard interrupt signal to stop the cfgmgr daemon. The +daemon will now shut down.

CONFIG_CCSESSION_MSG error in CC session message: %1

There was a problem with an incoming message on the command and control channel. The message does not appear to be a valid command, and is @@ -65,33 +364,36 @@ missing a required element or contains an unknown data format. This most likely means that another BIND10 module is sending a bad message. The message itself is ignored by this module.

CONFIG_CCSESSION_MSG_INTERNAL error handling CC session message: %1

-There was an internal problem handling an incoming message on the -command and control channel. An unexpected exception was thrown. This -most likely points to an internal inconsistency in the module code. The -exception message is appended to the log error, and the module will -continue to run, but will not send back an answer. -

CONFIG_FOPEN_ERR error opening %1: %2

-There was an error opening the given file. -

CONFIG_JSON_PARSE JSON parse error in %1: %2

-There was a parse error in the JSON file. The given file does not appear -to be in valid JSON format. Please verify that the filename is correct -and that the contents are valid JSON. -

CONFIG_MANAGER_CONFIG error getting configuration from cfgmgr: %1

+There was an internal problem handling an incoming message on the command +and control channel. An unexpected exception was thrown, details of +which are appended to the message. The module will continue to run, +but will not send back an answer. +

+The most likely cause of this error is a programming error. Please raise +a bug report. +

CONFIG_GET_FAIL error getting configuration from cfgmgr: %1

The configuration manager returned an error when this module requested the configuration. The full error message answer from the configuration manager is appended to the log error. The most likely cause is that the module is of a different (command specification) version than the running configuration manager. -

CONFIG_MANAGER_MOD_SPEC module specification not accepted by cfgmgr: %1

-The module specification file for this module was rejected by the -configuration manager. The full error message answer from the -configuration manager is appended to the log error. The most likely -cause is that the module is of a different (specification file) version -than the running configuration manager. -

CONFIG_MODULE_SPEC module specification error in %1: %2

-The given file does not appear to be a valid specification file. Please -verify that the filename is correct and that its contents are a valid -BIND10 module specification. +

CONFIG_JSON_PARSE JSON parse error in %1: %2

+There was an error parsing the JSON file. The given file does not appear +to be in valid JSON format. Please verify that the filename is correct +and that the contents are valid JSON. +

CONFIG_MOD_SPEC_FORMAT module specification error in %1: %2

+The given file does not appear to be a valid specification file: details +are included in the message. Please verify that the filename is correct +and that its contents are a valid BIND10 module specification. +

CONFIG_MOD_SPEC_REJECT module specification rejected by cfgmgr: %1

+The specification file for this module was rejected by the configuration +manager. The full error message answer from the configuration manager is +appended to the log error. The most likely cause is that the module is of +a different (specification file) version than the running configuration +manager. +

CONFIG_OPEN_FAIL error opening %1: %2

+There was an error opening the given file. The reason for the failure +is included in the message.

DATASRC_CACHE_CREATE creating the hotspot cache

Debug information that the hotspot cache was created at startup.

DATASRC_CACHE_DESTROY destroying the hotspot cache

@@ -146,7 +448,7 @@ Debug information. The requested domain is an alias to a different domain, returning the CNAME instead.

DATASRC_MEM_CNAME_COEXIST can't add data to CNAME in domain '%1'

This is the same problem as in MEM_CNAME_TO_NONEMPTY, but it happened the -other way around -- adding some outher data to CNAME. +other way around -- adding some other data to CNAME.

DATASRC_MEM_CNAME_TO_NONEMPTY can't add CNAME to domain with other data in '%1'

Someone or something tried to add a CNAME into a domain that already contains some other data. But the protocol forbids coexistence of CNAME with anything @@ -164,7 +466,7 @@ encountered on the way. This may lead to redirection to a different domain and stop the search.

DATASRC_MEM_DNAME_FOUND DNAME found at '%1'

Debug information. A DNAME was found instead of the requested information. -

DATASRC_MEM_DNAME_NS dNAME and NS can't coexist in non-apex domain '%1'

+

DATASRC_MEM_DNAME_NS DNAME and NS can't coexist in non-apex domain '%1'

It was requested for DNAME and NS records to be put into the same domain which is not the apex (the top of the zone). This is forbidden by RFC 2672, section 3. This indicates a problem with provided data. @@ -222,12 +524,12 @@ destroyed. Debug information. A domain above wildcard was reached, but there's something below the requested domain. Therefore the wildcard doesn't apply here. This behaviour is specified by RFC 1034, section 4.3.3 -

DATASRC_MEM_WILDCARD_DNAME dNAME record in wildcard domain '%1'

+

DATASRC_MEM_WILDCARD_DNAME DNAME record in wildcard domain '%1'

The software refuses to load DNAME records into a wildcard domain. It isn't explicitly forbidden, but the protocol is ambiguous about how this should behave and BIND 9 refuses that as well. Please describe your intention using different tools. -

DATASRC_MEM_WILDCARD_NS nS record in wildcard domain '%1'

+

DATASRC_MEM_WILDCARD_NS NS record in wildcard domain '%1'

The software refuses to load NS records into a wildcard domain. It isn't explicitly forbidden, but the protocol is ambiguous about how this should behave and BIND 9 refuses that as well. Please describe your intention using @@ -269,7 +571,7 @@ response message.

DATASRC_QUERY_DELEGATION looking for delegation on the path to '%1'

Debug information. The software is trying to identify delegation points on the way down to the given domain. -

DATASRC_QUERY_EMPTY_CNAME cNAME at '%1' is empty

+

DATASRC_QUERY_EMPTY_CNAME CNAME at '%1' is empty

There was an CNAME and it was being followed. But it contains no records, so there's nowhere to go. There will be no answer. This indicates a problem with supplied data. @@ -363,7 +665,7 @@ DNAMEs will be synthesized.

DATASRC_QUERY_TASK_FAIL task failed with %1

The query subtask failed. The reason should have been reported by the subtask already. The code is 1 for error, 2 for not implemented. -

DATASRC_QUERY_TOO_MANY_CNAMES cNAME chain limit exceeded at '%1'

+

DATASRC_QUERY_TOO_MANY_CNAMES CNAME chain limit exceeded at '%1'

A CNAME led to another CNAME and it led to another, and so on. After 16 CNAMEs, the software gave up. Long CNAME chains are discouraged, and this might possibly be a loop as well. Note that some of the CNAMEs might have @@ -385,15 +687,15 @@ While processing a wildcard, a referral was met. But it wasn't possible to get enough information for it. The code is 1 for error, 2 for not implemented.

DATASRC_SQLITE_CLOSE closing SQLite database

Debug information. The SQLite data source is closing the database file. -

DATASRC_SQLITE_CREATE sQLite data source created

+

DATASRC_SQLITE_CREATE SQLite data source created

Debug information. An instance of SQLite data source is being created. -

DATASRC_SQLITE_DESTROY sQLite data source destroyed

+

DATASRC_SQLITE_DESTROY SQLite data source destroyed

Debug information. An instance of SQLite data source is being destroyed.

DATASRC_SQLITE_ENCLOSURE looking for zone containing '%1'

-Debug information. The SQLite data source is trying to identify, which zone +Debug information. The SQLite data source is trying to identify which zone should hold this domain.

DATASRC_SQLITE_ENCLOSURE_NOTFOUND no zone contains it

-Debug information. The last SQLITE_ENCLOSURE query was unsuccessful, there's +Debug information. The last SQLITE_ENCLOSURE query was unsuccessful; there's no such zone in our data.

DATASRC_SQLITE_FIND looking for RRset '%1/%2'

Debug information. The SQLite data source is looking up a resource record @@ -417,7 +719,7 @@ and type in the database. Debug information. The SQLite data source is identifying if this domain is a referral and where it goes.

DATASRC_SQLITE_FINDREF_BAD_CLASS class mismatch looking for referral ('%1' and '%2')

-The SQLite data source was trying to identify, if there's a referral. But +The SQLite data source was trying to identify if there's a referral. But it contains different class than the query was for.

DATASRC_SQLITE_FIND_BAD_CLASS class mismatch looking for an RRset ('%1' and '%2')

The SQLite data source was looking up an RRset, but the data source contains @@ -452,142 +754,173 @@ data source.

DATASRC_UNEXPECTED_QUERY_STATE unexpected query state

This indicates a programming error. An internal task of unknown type was generated. -

LOGIMPL_ABOVEDBGMAX debug level of %1 is too high and will be set to the maximum of %2

-A message from the underlying logger implementation code, the debug level -(as set by the string DEBGUGn) is above the maximum allowed value and has -been reduced to that value. -

LOGIMPL_BADDEBUG debug string is '%1': must be of the form DEBUGn

-The string indicating the extended logging level (used by the underlying -logger implementation code) is not of the stated form. In particular, -it starts DEBUG but does not end with an integer. -

LOGIMPL_BELOWDBGMIN debug level of %1 is too low and will be set to the minimum of %2

-A message from the underlying logger implementation code, the debug level -(as set by the string DEBGUGn) is below the minimum allowed value and has -been increased to that value. -

MSG_BADDESTINATION unrecognized log destination: %1

+

LOGIMPL_ABOVE_MAX_DEBUG debug level of %1 is too high and will be set to the maximum of %2

+A message from the interface to the underlying logger implementation reporting +that the debug level (as set by an internally-created string DEBUGn, where n +is an integer, e.g. DEBUG22) is above the maximum allowed value and has +been reduced to that value. The appearance of this message may indicate +a programming error - please submit a bug report. +

LOGIMPL_BAD_DEBUG_STRING debug string '%1' has invalid format

+A message from the interface to the underlying logger implementation +reporting that an internally-created string used to set the debug level +is not of the correct format (it should be of the form DEBUGn, where n +is an integer, e.g. DEBUG22). The appearance of this message indicates +a programming error - please submit a bug report. +

LOGIMPL_BELOW_MIN_DEBUG debug level of %1 is too low and will be set to the minimum of %2

+A message from the interface to the underlying logger implementation reporting +that the debug level (as set by an internally-created string DEBUGn, where n +is an integer, e.g. DEBUG22) is below the minimum allowed value and has +been increased to that value. The appearance of this message may indicate +a programming error - please submit a bug report. +
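The three LOGIMPL messages above describe one parse-and-clamp step: the internally-created string must match the form DEBUGn, and an out-of-range n is clamped to the allowed bounds. The sketch below is hypothetical, not the logger implementation; the bounds 0 and 99 are assumed values for illustration.

```python
import re

# Hypothetical sketch of the DEBUGn handling described above: a string
# that does not match DEBUGn is rejected (LOGIMPL_BAD_DEBUG_STRING), and
# a level outside the assumed bounds is clamped (LOGIMPL_ABOVE_MAX_DEBUG
# / LOGIMPL_BELOW_MIN_DEBUG).

MIN_DEBUG_LEVEL = 0    # assumed bound for illustration
MAX_DEBUG_LEVEL = 99   # assumed bound for illustration

def parse_debug_level(s):
    """Return the clamped debug level, or None for a malformed string."""
    m = re.match(r"^DEBUG(-?\d+)$", s)
    if m is None:
        return None
    level = int(m.group(1))
    return max(MIN_DEBUG_LEVEL, min(MAX_DEBUG_LEVEL, level))
```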

LOG_BAD_DESTINATION unrecognized log destination: %1

A logger destination value was given that was not recognized. The destination should be one of "console", "file", or "syslog". -

MSG_BADSEVERITY unrecognized log severity: %1

+

LOG_BAD_SEVERITY unrecognized log severity: %1

A logger severity value was given that was not recognized. The severity should be one of "DEBUG", "INFO", "WARN", "ERROR", or "FATAL". -

MSG_BADSTREAM bad log console output stream: %1

-A log console output stream was given that was not recognized. The -output stream should be one of "stdout", or "stderr" -

MSG_DUPLNS line %1: duplicate $NAMESPACE directive found

-When reading a message file, more than one $NAMESPACE directive was found. In -this version of the code, such a condition is regarded as an error and the -read will be abandoned. -

MSG_DUPMSGID duplicate message ID (%1) in compiled code

-Indicative of a programming error, when it started up, BIND10 detected that -the given message ID had been registered by one or more modules. (All message -IDs should be unique throughout BIND10.) This has no impact on the operation -of the server other that erroneous messages may be logged. (When BIND10 loads -the message IDs (and their associated text), if a duplicate ID is found it is -discarded. However, when the module that supplied the duplicate ID logs that -particular message, the text supplied by the module that added the original -ID will be output - something that may bear no relation to the condition being -logged. -

MSG_IDNOTFND could not replace message text for '%1': no such message

-During start-up a local message file was read. A line with the listed -message identification was found in the file, but the identification is not -one contained in the compiled-in message dictionary. Either the message -identification has been mis-spelled in the file, or the local file was used -for an earlier version of the software and the message with that -identification has been removed. +

LOG_BAD_STREAM bad log console output stream: %1

+A log console output stream was given that was not recognized. The output +stream should be one of "stdout" or "stderr". +

LOG_DUPLICATE_MESSAGE_ID duplicate message ID (%1) in compiled code

+During start-up, BIND10 detected that the given message identification had +been defined multiple times in the BIND10 code.

-This message may appear a number of times in the file, once for every such -unknown message identification. -

MSG_INVMSGID line %1: invalid message identification '%2'

-The concatenation of the prefix and the message identification is used as -a symbol in the C++ module; as such it may only contain -

MSG_NOMSGID line %1: message definition line found without a message ID

-Message definition lines are lines starting with a "%". The rest of the line -should comprise the message ID and text describing the message. This error -indicates the message compiler found a line in the message file comprising -just the "%" and nothing else. -

MSG_NOMSGTXT line %1: line found containing a message ID ('%2') and no text

-Message definition lines are lines starting with a "%". The rest of the line -should comprise the message ID and text describing the message. This error -is generated when a line is found in the message file that contains the -leading "%" and the message identification but no text. -

MSG_NSEXTRARG line %1: $NAMESPACE directive has too many arguments

-The $NAMESPACE directive takes a single argument, a namespace in which all the -generated symbol names are placed. This error is generated when the -compiler finds a $NAMESPACE directive with more than one argument. -

MSG_NSINVARG line %1: $NAMESPACE directive has an invalid argument ('%2')

-The $NAMESPACE argument should be a valid C++ namespace. The reader does a -cursory check on its validity, checking that the characters in the namespace -are correct. The error is generated when the reader finds an invalid -character. (Valid are alphanumeric characters, underscores and colons.) -

MSG_NSNOARG line %1: no arguments were given to the $NAMESPACE directive

-The $NAMESPACE directive takes a single argument, a namespace in which all the -generated symbol names are placed. This error is generated when the -compiler finds a $NAMESPACE directive with no arguments. -

MSG_OPENIN unable to open message file %1 for input: %2

-The program was not able to open the specified input message file for the -reason given. -

MSG_OPENOUT unable to open %1 for output: %2

-The program was not able to open the specified output file for the reason -given. -

MSG_PRFEXTRARG line %1: $PREFIX directive has too many arguments

-The $PREFIX directive takes a single argument, a prefix to be added to the -symbol names when a C++ .h file is created. This error is generated when the -compiler finds a $PREFIX directive with more than one argument. -

MSG_PRFINVARG line %1: $PREFIX directive has an invalid argument ('%2')

-The $PREFIX argument is used in a symbol name in a C++ header file. As such, -it must adhere to restrictions on C++ symbol names (e.g. may only contain -alphanumeric characters or underscores, and may nor start with a digit). -A $PREFIX directive was found with an argument (given in the message) that -violates those restictions. -

MSG_RDLOCMES reading local message file %1

-This is an informational message output by BIND10 when it starts to read a -local message file. (A local message file may replace the text of one of more -messages; the ID of the message will not be changed though.) -

MSG_READERR error reading from message file %1: %2

+This has no ill-effects other than the possibility that an erroneous
+message may be logged. However, as it is indicative of a programming
+error, please submit a bug report.
+

LOG_DUPLICATE_NAMESPACE line %1: duplicate $NAMESPACE directive found

+When reading a message file, more than one $NAMESPACE directive was found. +Such a condition is regarded as an error and the read will be abandoned. +

LOG_INPUT_OPEN_FAIL unable to open message file %1 for input: %2

+The program was not able to open the specified input message file for +the reason given. +

LOG_INVALID_MESSAGE_ID line %1: invalid message identification '%2'

+An invalid message identification (ID) has been found during the read of +a message file. Message IDs should comprise only alphanumeric characters +and the underscore, and should not start with a digit. +

LOG_NAMESPACE_EXTRA_ARGS line %1: $NAMESPACE directive has too many arguments

+The $NAMESPACE directive in a message file takes a single argument, a +namespace in which all the generated symbol names are placed. This error +is generated when the compiler finds a $NAMESPACE directive with more +than one argument. +

LOG_NAMESPACE_INVALID_ARG line %1: $NAMESPACE directive has an invalid argument ('%2')

+The $NAMESPACE argument in a message file should be a valid C++ namespace. +This message is output if the simple check on the syntax of the string +carried out by the reader fails. +

LOG_NAMESPACE_NO_ARGS line %1: no arguments were given to the $NAMESPACE directive

+The $NAMESPACE directive in a message file takes a single argument, +a C++ namespace in which all the generated symbol names are placed. +This error is generated when the compiler finds a $NAMESPACE directive +with no arguments. +

LOG_NO_MESSAGE_ID line %1: message definition line found without a message ID

+Within a message file, messages are defined by lines starting with a "%".
+The rest of the line should comprise the message ID and text describing
+the message. This error indicates the message compiler found a line in
+the message file comprising just the "%" and nothing else.
+

LOG_NO_MESSAGE_TEXT line %1: line found containing a message ID ('%2') and no text

+Within a message file, messages are defined by lines starting with a "%".
+The rest of the line should comprise the message ID and text describing
+the message. This error indicates the message compiler found a line
+in the message file comprising just the "%" and the message identification,
+but no text.
+
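Taken together, the directives and definition lines described in these messages give a message file the following overall shape (an invented fragment for illustration; the IDs and text are not real BIND10 messages):

```
$NAMESPACE isc::log
$PREFIX MSG_

% OPENIN  unable to open message file %1 for input: %2
% WRITERR error writing to %1: %2
```

Note that, as the LOG_PREFIX_* messages below state, the $PREFIX directive is deprecated.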

LOG_NO_SUCH_MESSAGE could not replace message text for '%1': no such message

+During start-up a local message file was read. A line with the listed +message identification was found in the file, but the identification is +not one contained in the compiled-in message dictionary. This message +may appear a number of times in the file, once for every such unknown +message identification. +

+There may be several reasons why this message may appear: +

+- The message ID has been mis-spelled in the local message file. +

+- The program outputting the message may not use that particular message
+(e.g. it originates in a module not used by the program).
+

+- The local file was written for an earlier version of the BIND10 software +and the later version no longer generates that message. +

+Whatever the reason, there is no impact on the operation of BIND10. +

LOG_OPEN_OUTPUT_FAIL unable to open %1 for output: %2

+Originating within the logging code, the program was not able to open +the specified output file for the reason given. +

LOG_PREFIX_EXTRA_ARGS line %1: $PREFIX directive has too many arguments

+Within a message file, the $PREFIX directive takes a single argument, +a prefix to be added to the symbol names when a C++ file is created. +This error is generated when the compiler finds a $PREFIX directive with +more than one argument. +

+Note: the $PREFIX directive is deprecated and will be removed in a future +version of BIND10. +

LOG_PREFIX_INVALID_ARG line %1: $PREFIX directive has an invalid argument ('%2')

+Within a message file, the $PREFIX directive takes a single argument,
+a prefix to be added to the symbol names when a C++ file is created.
+As such, it must adhere to restrictions on C++ symbol names (e.g. may
+only contain alphanumeric characters or underscores, and may not start
+with a digit). A $PREFIX directive was found with an argument (given
+in the message) that violates those restrictions.
+

+Note: the $PREFIX directive is deprecated and will be removed in a future +version of BIND10. +

LOG_READING_LOCAL_FILE reading local message file %1

+This is an informational message output by BIND10 when it starts to read
+a local message file. (A local message file may replace the text of
+one or more messages; the ID of the message will not be changed though.)
+

LOG_READ_ERROR error reading from message file %1: %2

The specified error was encountered reading from the named message file. -

MSG_UNRECDIR line %1: unrecognised directive '%2'

-A line starting with a dollar symbol was found, but the first word on the line -(shown in the message) was not a recognised message compiler directive. -

MSG_WRITERR error writing to %1: %2

-The specified error was encountered by the message compiler when writing to -the named output file. -

NSAS_INVRESPSTR queried for %1 but got invalid response

-This message indicates an internal error in the nameserver address store -component (NSAS) of the resolver. The NSAS made a query for a RR for the -specified nameserver but received an invalid response. Either the success -function was called without a DNS message or the message was invalid on some -way. (In the latter case, the error should have been picked up elsewhere in -the processing logic, hence the raising of the error here.) -

NSAS_INVRESPTC queried for %1 RR of type/class %2/%3, received response %4/%5

-This message indicates an internal error in the nameserver address store -component (NSAS) of the resolver. The NSAS made a query for the given RR -type and class, but instead received an answer with the given type and class. -

NSAS_LOOKUPCANCEL lookup for zone %1 has been cancelled

-A debug message, this is output when a NSAS (nameserver address store - -part of the resolver) lookup for a zone has been cancelled. -

NSAS_LOOKUPZONE searching NSAS for nameservers for zone %1

-A debug message, this is output when a call is made to the nameserver address -store (part of the resolver) to obtain the nameservers for the specified zone. -

NSAS_NSADDR asking resolver to obtain A and AAAA records for %1

-A debug message, the NSAS (nameserver address store - part of the resolver) is -making a callback into the resolver to retrieve the address records for the -specified nameserver. -

NSAS_NSLKUPFAIL failed to lookup any %1 for %2

-A debug message, the NSAS (nameserver address store - part of the resolver) -has been unable to retrieve the specified resource record for the specified -nameserver. This is not necessarily a problem - the nameserver may be -unreachable, in which case the NSAS will try other nameservers in the zone. -

NSAS_NSLKUPSUCC found address %1 for %2

-A debug message, the NSAS (nameserver address store - part of the resolver) -has retrieved the given address for the specified nameserver through an -external query. -

NSAS_SETRTT reporting RTT for %1 as %2; new value is now %3

+

LOG_UNRECOGNISED_DIRECTIVE line %1: unrecognised directive '%2'

+Within a message file, a line starting with a dollar symbol was found +(indicating the presence of a directive) but the first word on the line +(shown in the message) was not recognised. +

LOG_WRITE_ERROR error writing to %1: %2

+The specified error was encountered by the message compiler when writing +to the named output file. +

NSAS_FIND_NS_ADDRESS asking resolver to obtain A and AAAA records for %1

+A debug message issued when the NSAS (nameserver address store - part +of the resolver) is making a callback into the resolver to retrieve the +address records for the specified nameserver. +

NSAS_FOUND_ADDRESS found address %1 for %2

+A debug message issued when the NSAS (nameserver address store - part +of the resolver) has retrieved the given address for the specified +nameserver through an external query. +

NSAS_INVALID_RESPONSE queried for %1 but got invalid response

+The NSAS (nameserver address store - part of the resolver) made a query
+for an RR for the specified nameserver but received an invalid response.
+Either the success function was called without a DNS message or the
+message was invalid in some way. (In the latter case, the error should
+have been picked up elsewhere in the processing logic, hence the raising
+of the error here.)
+

+This message indicates an internal error in the NSAS. Please raise a +bug report. +

NSAS_LOOKUP_CANCEL lookup for zone %1 has been canceled

+A debug message issued when an NSAS (nameserver address store - part of +the resolver) lookup for a zone has been canceled. +

NSAS_NS_LOOKUP_FAIL failed to lookup any %1 for %2

+A debug message issued when the NSAS (nameserver address store - part of +the resolver) has been unable to retrieve the specified resource record +for the specified nameserver. This is not necessarily a problem - the +nameserver may be unreachable, in which case the NSAS will try other +nameservers in the zone. +

NSAS_SEARCH_ZONE_NS searching NSAS for nameservers for zone %1

+A debug message output when a call is made to the NSAS (nameserver +address store - part of the resolver) to obtain the nameservers for +the specified zone. +

NSAS_UPDATE_RTT update RTT for %1: was %2 ms, is now %3 ms

A NSAS (nameserver address store - part of the resolver) debug message -reporting the round-trip time (RTT) for a query made to the specified -nameserver. The RTT has been updated using the value given and the new RTT is -displayed. (The RTT is subject to a calculation that damps out sudden -changes. As a result, the new RTT is not necessarily equal to the RTT -reported.) +reporting the update of a round-trip time (RTT) for a query made to the +specified nameserver. The RTT has been updated using the value given +and the new RTT is displayed. (The RTT is subject to a calculation that +damps out sudden changes. As a result, the new RTT used by the NSAS in +future decisions of which nameserver to use is not necessarily equal to +the RTT reported.) +
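The damping mentioned above can be pictured as an exponentially weighted moving average. The sketch below is illustrative only: the weight of 0.75 and the function name are invented, and the actual calculation used by the NSAS may differ.

```python
def update_rtt(old_rtt_ms, sample_ms, weight=0.75):
    """Blend the stored RTT with a new sample so that one outlier
    cannot swing the value all the way to the new measurement.
    The 0.75 weight is illustrative, not the NSAS's actual value."""
    return old_rtt_ms * weight + sample_ms * (1 - weight)

# A single 200 ms outlier moves a stored 20 ms RTT only part way:
rtt = update_rtt(20.0, 200.0)
print(rtt)  # 65.0, not 200.0
```

This is why the new RTT reported by NSAS_UPDATE_RTT is not necessarily equal to the round-trip time of the query that triggered the update.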

NSAS_WRONG_ANSWER queried for %1 RR of type/class %2/%3, received response %4/%5

+A NSAS (nameserver address store - part of the resolver) made a query for
+a resource record of a particular type and class, but instead received
+an answer of the different type and class given in the message.
+

+This message indicates an internal error in the NSAS. Please raise a +bug report.

RESLIB_ANSWER answer received in response to query for <%1>

A debug message recording that an answer has been received to an upstream query for the specified question. Previous debug messages will have indicated @@ -599,95 +932,95 @@ the server to which the question was sent.

RESLIB_DEEPEST did not find <%1> in cache, deepest delegation found is %2

A debug message, a cache lookup did not find the specified <name, class, type> tuple in the cache; instead, the deepest delegation found is indicated. -

RESLIB_FOLLOWCNAME following CNAME chain to <%1>

+

RESLIB_FOLLOW_CNAME following CNAME chain to <%1>

A debug message, a CNAME response was received and another query is being issued for the <name, class, type> tuple. -

RESLIB_LONGCHAIN CNAME received in response to query for <%1>: CNAME chain length exceeded

+

RESLIB_LONG_CHAIN CNAME received in response to query for <%1>: CNAME chain length exceeded

A debug message recording that a CNAME response has been received to an upstream query for the specified question (Previous debug messages will have indicated the server to which the question was sent). However, receipt of this CNAME has meant that the resolver has exceeded the CNAME chain limit (a CNAME chain is where on CNAME points to another) and so an error is being returned. -

RESLIB_NONSRRSET no NS RRSet in referral response received to query for <%1>

+

RESLIB_NO_NS_RRSET no NS RRSet in referral response received to query for <%1>

A debug message, this indicates that a response was received for the specified -query and was categorised as a referral. However, the received message did +query and was categorized as a referral. However, the received message did not contain any NS RRsets. This may indicate a programming error in the response classification code. -

RESLIB_NSASLOOK looking up nameserver for zone %1 in the NSAS

+

RESLIB_NSAS_LOOKUP looking up nameserver for zone %1 in the NSAS

A debug message, the RunningQuery object is querying the NSAS for the nameservers for the specified zone. -

RESLIB_NXDOMRR NXDOMAIN/NXRRSET received in response to query for <%1>

+

RESLIB_NXDOM_NXRR NXDOMAIN/NXRRSET received in response to query for <%1>

A debug message recording that either a NXDOMAIN or an NXRRSET response has been received to an upstream query for the specified question. Previous debug messages will have indicated the server to which the question was sent.

RESLIB_PROTOCOL protocol error in answer for %1: %3

A debug message indicating that a protocol error was received. As there are no retries left, an error will be reported. -

RESLIB_PROTOCOLRTRY protocol error in answer for %1: %2 (retries left: %3)

+

RESLIB_PROTOCOL_RETRY protocol error in answer for %1: %2 (retries left: %3)

A debug message indicating that a protocol error was received and that the resolver is repeating the query to the same nameserver. After this repeated query, there will be the indicated number of retries left. -

RESLIB_RCODERR RCODE indicates error in response to query for <%1>

+

RESLIB_RCODE_ERR RCODE indicates error in response to query for <%1>

A debug message, the response to the specified query indicated an error that is not covered by a specific code path. A SERVFAIL will be returned. -

RESLIB_REFERRAL referral received in response to query for <%1>

-A debug message recording that a referral response has been received to an -upstream query for the specified question. Previous debug messages will -have indicated the server to which the question was sent. -

RESLIB_REFERZONE referred to zone %1

-A debug message indicating that the last referral message was to the specified -zone. -

RESLIB_RESCAFND found <%1> in the cache (resolve() instance %2)

+

RESLIB_RECQ_CACHE_FIND found <%1> in the cache (resolve() instance %2)

This is a debug message and indicates that a RecursiveQuery object found the
specified <name, class, type> tuple in the cache. The instance number at
the end of the message indicates which of the two resolve() methods has been
called.
-

RESLIB_RESCANOTFND did not find <%1> in the cache, starting RunningQuery (resolve() instance %2)

+

RESLIB_RECQ_CACHE_NO_FIND did not find <%1> in the cache, starting RunningQuery (resolve() instance %2)

This is a debug message and indicates that the look in the cache made by the RecursiveQuery::resolve() method did not find an answer, so a new RunningQuery object has been created to resolve the question. The instance number at the end of the message indicates which of the two resolve() methods has been called. +

RESLIB_REFERRAL referral received in response to query for <%1>

+A debug message recording that a referral response has been received to an +upstream query for the specified question. Previous debug messages will +have indicated the server to which the question was sent. +

RESLIB_REFER_ZONE referred to zone %1

+A debug message indicating that the last referral message was to the specified +zone.

RESLIB_RESOLVE asked to resolve <%1> (resolve() instance %2)

A debug message, the RecursiveQuery::resolve method has been called to resolve the specified <name, class, type> tuple. The first action will be to lookup the specified tuple in the cache. The instance number at the end of the message indicates which of the two resolve() methods has been called. -

RESLIB_RRSETFND found single RRset in the cache when querying for <%1> (resolve() instance %2)

+

RESLIB_RRSET_FOUND found single RRset in the cache when querying for <%1> (resolve() instance %2)

A debug message, indicating that when RecursiveQuery::resolve queried the cache, a single RRset was found which was put in the answer. The instance number at the end of the message indicates which of the two resolve() methods has been called.

RESLIB_RTT round-trip time of last query calculated as %1 ms

A debug message giving the round-trip time of the last query and response. -

RESLIB_RUNCAFND found <%1> in the cache

+

RESLIB_RUNQ_CACHE_FIND found <%1> in the cache

This is a debug message and indicates that a RunningQuery object found the specified <name, class, type> tuple in the cache. -

RESLIB_RUNCALOOK looking up up <%1> in the cache

+

RESLIB_RUNQ_CACHE_LOOKUP looking up <%1> in the cache

This is a debug message and indicates that a RunningQuery object has made a call to its doLookup() method to look up the specified <name, class, type> tuple, the first action of which will be to examine the cache. -

RESLIB_RUNQUFAIL failure callback - nameservers are unreachable

+

RESLIB_RUNQ_FAIL failure callback - nameservers are unreachable

A debug message indicating that a RunningQuery's failure callback has been called because all nameservers for the zone in question are unreachable. -

RESLIB_RUNQUSUCC success callback - sending query to %1

+

RESLIB_RUNQ_SUCCESS success callback - sending query to %1

A debug message indicating that a RunningQuery's success callback has been called because a nameserver has been found, and that a query is being sent to the specified nameserver. -

RESLIB_TESTSERV setting test server to %1(%2)

+

RESLIB_TEST_SERVER setting test server to %1(%2)

This is an internal debugging message and is only generated in unit tests. It indicates that all upstream queries from the resolver are being routed to the specified server, regardless of the address of the nameserver to which the query would normally be routed. As it should never be seen in normal operation, it is a warning message instead of a debug message. -

RESLIB_TESTUPSTR sending upstream query for <%1> to test server at %2

+

RESLIB_TEST_UPSTREAM sending upstream query for <%1> to test server at %2

This is a debug message and should only be seen in unit tests. A query for the specified <name, class, type> tuple is being sent to a test nameserver whose address is given in the message.

RESLIB_TIMEOUT query <%1> to %2 timed out

A debug message indicating that the specified query has timed out and as there are no retries left, an error will be reported. -

RESLIB_TIMEOUTRTRY query <%1> to %2 timed out, re-trying (retries left: %3)

+

RESLIB_TIMEOUT_RETRY query <%1> to %2 timed out, re-trying (retries left: %3)

A debug message indicating that the specified query has timed out and that the resolver is repeating the query to the same nameserver. After this repeated query, there will be the indicated number of retries left. @@ -699,118 +1032,134 @@ gives no cause for concern.

RESLIB_UPSTREAM sending upstream query for <%1> to %2

A debug message indicating that a query for the specified <name, class, type> tuple is being sent to a nameserver whose address is given in the message. -

RESOLVER_AXFRTCP AXFR request received over TCP

+

RESOLVER_AXFR_TCP AXFR request received over TCP

A debug message, the resolver received an AXFR request over TCP. The server
cannot process it and will return an error message to the sender with the
RCODE set to NOTIMP.
-

RESOLVER_AXFRUDP AXFR request received over UDP

+

RESOLVER_AXFR_UDP AXFR request received over UDP

A debug message, the resolver received an AXFR request over UDP. The server
cannot process it (and in any case, an AXFR request should be sent over TCP)
and will return an error message to the sender with the RCODE set to FORMERR.
-

RESOLVER_CLTMOSMALL client timeout of %1 is too small

+

RESOLVER_CLIENT_TIME_SMALL client timeout of %1 is too small

An error indicating that the configuration value specified for the query timeout is too small. -

RESOLVER_CONFIGCHAN configuration channel created

+

RESOLVER_CONFIG_CHANNEL configuration channel created

A debug message, output when the resolver has successfully established a connection to the configuration channel. -

RESOLVER_CONFIGERR error in configuration: %1

+

RESOLVER_CONFIG_ERROR error in configuration: %1

An error was detected in a configuration update received by the resolver. This may be in the format of the configuration message (in which case this is a programming error) or it may be in the data supplied (in which case it is a user error). The reason for the error, given as a parameter in the message, will give more details. -

RESOLVER_CONFIGLOAD configuration loaded

+

RESOLVER_CONFIG_LOADED configuration loaded

A debug message, output when the resolver configuration has been successfully loaded. -

RESOLVER_CONFIGUPD configuration updated: %1

+

RESOLVER_CONFIG_UPDATED configuration updated: %1

A debug message, the configuration has been updated with the specified information.

RESOLVER_CREATED main resolver object created

A debug message, output when the Resolver() object has been created. -

RESOLVER_DNSMSGRCVD DNS message received: %1

+

RESOLVER_DNS_MESSAGE_RECEIVED DNS message received: %1

A debug message, this always precedes some other logging message and is the formatted contents of the DNS packet that the other message refers to. -

RESOLVER_DNSMSGSENT DNS message of %1 bytes sent: %2

+

RESOLVER_DNS_MESSAGE_SENT DNS message of %1 bytes sent: %2

A debug message, this contains details of the response sent back to the querying system.

RESOLVER_FAILED resolver failed, reason: %1

This is an error message output when an unhandled exception is caught by the resolver. All it can do is to shut down. -

RESOLVER_FWDADDR setting forward address %1(%2)

+

RESOLVER_FORWARD_ADDRESS setting forward address %1(%2)

This message may appear multiple times during startup, and it lists the forward addresses used by the resolver when running in forwarding mode. -

RESOLVER_FWDQUERY processing forward query

+

RESOLVER_FORWARD_QUERY processing forward query

The received query has passed all checks and is being forwarded to upstream servers. -

RESOLVER_HDRERR message received, exception when processing header: %1

+

RESOLVER_HEADER_ERROR message received, exception when processing header: %1

A debug message noting that an exception occurred during the processing of a received packet. The packet has been dropped.

RESOLVER_IXFR IXFR request received

The resolver received an IXFR request over TCP. The server cannot process it
and will return an error message to the sender with the RCODE set to NOTIMP.
-

RESOLVER_LKTMOSMALL lookup timeout of %1 is too small

+

RESOLVER_LOOKUP_TIME_SMALL lookup timeout of %1 is too small

An error indicating that the configuration value specified for the lookup timeout is too small. -

RESOLVER_NFYNOTAUTH NOTIFY arrived but server is not authoritative

-The resolver received a NOTIFY message. As the server is not authoritative it -cannot process it, so it returns an error message to the sender with the RCODE -set to NOTAUTH. -

RESOLVER_NORMQUERY processing normal query

-The received query has passed all checks and is being processed by the resolver. -

RESOLVER_NOROOTADDR no root addresses available

-A warning message during startup, indicates that no root addresses have been -set. This may be because the resolver will get them from a priming query. -

RESOLVER_NOTIN non-IN class request received, returning REFUSED message

+

RESOLVER_MESSAGE_ERROR error parsing received message: %1 - returning %2

+A debug message noting that the resolver received a message and the +parsing of the body of the message failed due to some error (although +the parsing of the header succeeded). The message parameters give a +textual description of the problem and the RCODE returned. +

RESOLVER_NEGATIVE_RETRIES negative number of retries (%1) specified in the configuration

+An error message indicating that the resolver configuration has specified a +negative retry count. Only zero or positive values are valid. +

RESOLVER_NON_IN_PACKET non-IN class request received, returning REFUSED message

A debug message, the resolver has received a DNS packet that was not IN class. The resolver cannot handle such packets, so is returning a REFUSED response to the sender. -

RESOLVER_NOTONEQUES query contained %1 questions, exactly one question was expected

+

RESOLVER_NORMAL_QUERY processing normal query

+The received query has passed all checks and is being processed by the resolver. +

RESOLVER_NOTIFY_RECEIVED NOTIFY arrived but server is not authoritative

+The resolver received a NOTIFY message. As the server is not authoritative it +cannot process it, so it returns an error message to the sender with the RCODE +set to NOTAUTH. +

RESOLVER_NOT_ONE_QUESTION query contained %1 questions, exactly one question was expected

A debug message, the resolver received a query that contained the number of
entries in the question section detailed in the message. This is a malformed
message, as a DNS query must contain only one question. The resolver will
return a message to the sender with the RCODE set to FORMERR.
-

RESOLVER_OPCODEUNS opcode %1 not supported by the resolver

-A debug message, the resolver received a message with an unsupported opcode -(it can only process QUERY opcodes). It will return a message to the sender -with the RCODE set to NOTIMP. -

RESOLVER_PARSEERR error parsing received message: %1 - returning %2

+

RESOLVER_NO_ROOT_ADDRESS no root addresses available

+A warning message during startup, indicates that no root addresses have been +set. This may be because the resolver will get them from a priming query. +

RESOLVER_PARSE_ERROR error parsing received message: %1 - returning %2

A debug message noting that the resolver received a message and the parsing of the body of the message failed due to some non-protocol related reason (although the parsing of the header succeeded). The message parameters give a textual description of the problem and the RCODE returned. -

RESOLVER_PRINTMSG print message command, aeguments are: %1

+

RESOLVER_PRINT_COMMAND print message command, arguments are: %1

This message is logged when a "print_message" command is received over the command channel. -

RESOLVER_PROTERR protocol error parsing received message: %1 - returning %2

+

RESOLVER_PROTOCOL_ERROR protocol error parsing received message: %1 - returning %2

A debug message noting that the resolver received a message and the parsing of the body of the message failed due to some protocol error (although the parsing of the header succeeded). The message parameters give a textual description of the problem and the RCODE returned. -

RESOLVER_QUSETUP query setup

+

RESOLVER_QUERY_ACCEPTED query accepted: '%1/%2/%3' from %4

+A debug message that indicates an incoming query has been accepted by
+the query ACL. The log message shows the query in the form of
+<query name>/<query type>/<query class>, and the client that sent the
+query in the form of <source IP address>#<source port>.
+

RESOLVER_QUERY_DROPPED query dropped: '%1/%2/%3' from %4

+An informational message that indicates an incoming query has been
+dropped by the query ACL. Unlike the RESOLVER_QUERY_REJECTED case,
+the server does not return any response. The log message shows the
+query in the form of <query name>/<query type>/<query class>, and
+the client that sent the query in the form of <source IP
+address>#<source port>.
+

RESOLVER_QUERY_REJECTED query rejected: '%1/%2/%3' from %4

+An informational message that indicates an incoming query has been
+rejected by the query ACL. This results in a response with an RCODE
+of REFUSED. The log message shows the query in the form of <query
+name>/<query type>/<query class>, and the client that sent the query
+in the form of <source IP address>#<source port>.
+
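The three outcomes above (accept, drop, reject) correspond to actions in the resolver's query ACL. A configuration sketch follows; the item name query_acl and the entry syntax are assumptions and may differ between versions:

```
"Resolver": {
    "query_acl": [
        { "action": "ACCEPT", "from": "127.0.0.1" },
        { "action": "REJECT", "from": "192.0.2.0/24" },
        { "action": "DROP",   "from": "0.0.0.0/0" }
    ]
}
```

Each incoming query is matched against the entries in order; the action of the first matching entry determines which of the three messages is logged.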

RESOLVER_QUERY_SETUP query setup

A debug message noting that the resolver is creating a RecursiveQuery object. -

RESOLVER_QUSHUT query shutdown

+

RESOLVER_QUERY_SHUTDOWN query shutdown

A debug message noting that the resolver is destroying a RecursiveQuery object. -

RESOLVER_QUTMOSMALL query timeout of %1 is too small

+

RESOLVER_QUERY_TIME_SMALL query timeout of %1 is too small

An error indicating that the configuration value specified for the query timeout is too small. -

RESOLVER_RECURSIVE running in recursive mode

-This is an informational message that appears at startup noting that the -resolver is running in recursive mode. -

RESOLVER_RECVMSG resolver has received a DNS message

+

RESOLVER_RECEIVED_MESSAGE resolver has received a DNS message

A debug message indicating that the resolver has received a message. Depending on the debug settings, subsequent log output will indicate the nature of the message. -

RESOLVER_RETRYNEG negative number of retries (%1) specified in the configuration

-An error message indicating that the resolver configuration has specified a -negative retry count. Only zero or positive values are valid. -

RESOLVER_ROOTADDR setting root address %1(%2)

-This message may appear multiple times during startup; it lists the root -addresses used by the resolver. -

RESOLVER_SERVICE service object created

+

RESOLVER_RECURSIVE running in recursive mode

+This is an informational message that appears at startup noting that the +resolver is running in recursive mode. +

RESOLVER_SERVICE_CREATED service object created

A debug message, output when the main service object (which handles the received queries) is created. -

RESOLVER_SETPARAM query timeout: %1, client timeout: %2, lookup timeout: %3, retry count: %4

-A debug message, lists the parameters associated with the message. These are: +

RESOLVER_SET_PARAMS query timeout: %1, client timeout: %2, lookup timeout: %3, retry count: %4

+A debug message that lists the parameters being set for the resolver. These are: query timeout: the timeout (in ms) used for queries originated by the resolver to upstream servers. Client timeout: the interval to resolve a client query: after this time, the resolver sends back a SERVFAIL to the client @@ -819,14 +1168,20 @@ resolver gives up trying to resolve a query. Retry count: the number of times the resolver will retry a query to an upstream server if it gets a timeout.

The client and lookup timeouts require a bit more explanation. The -resolution of the clent query might require a large number of queries to +resolution of the client query might require a large number of queries to upstream nameservers. Even if none of these queries timeout, the total time taken to perform all the queries may exceed the client timeout. When this happens, a SERVFAIL is returned to the client, but the resolver continues with the resolution process. Data received is added to the cache. However, -there comes a time - the lookup timeout - when even the resolve gives up. +there comes a time - the lookup timeout - when even the resolver gives up. At this point it will wait for pending upstream queries to complete or timeout and drop the query. +
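The relationship between the client and lookup timeouts described above can be sketched as follows (a simplified illustration with hypothetical names, not the resolver's actual code):

```python
def classify_elapsed(elapsed_ms, client_timeout_ms, lookup_timeout_ms):
    """Classify a client query by elapsed time, per the semantics above:
    past the client timeout the client gets SERVFAIL but resolution
    continues (received data still feeds the cache); past the lookup
    timeout the resolver gives up entirely and drops the query."""
    if elapsed_ms < client_timeout_ms:
        return "resolving"                  # client still waiting for an answer
    if elapsed_ms < lookup_timeout_ms:
        return "servfail-sent-resolving"    # SERVFAIL sent, cache fill continues
    return "abandoned"                      # lookup timeout reached
```

This makes the ordering explicit: a sensible configuration has client timeout < lookup timeout, since the lookup timeout bounds work that continues after the client has already been answered.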

RESOLVER_SET_QUERY_ACL query ACL is configured

+A debug message that appears when a new query ACL is configured for the +resolver. +

RESOLVER_SET_ROOT_ADDRESS setting root address %1(%2)

+This message may appear multiple times during startup; it lists the root +addresses used by the resolver.

RESOLVER_SHUTDOWN resolver shutdown complete

This information message is output when the resolver has shut down.

RESOLVER_STARTED resolver started

@@ -834,8 +1189,166 @@ This informational message is output by the resolver when all initialization has been completed and it is entering its main loop.

RESOLVER_STARTING starting resolver with command line '%1'

An informational message, this is output when the resolver starts up. -

RESOLVER_UNEXRESP received unexpected response, ignoring

+

RESOLVER_UNEXPECTED_RESPONSE received unexpected response, ignoring

A debug message noting that the server has received a response instead of a query and is ignoring it. +

RESOLVER_UNSUPPORTED_OPCODE opcode %1 not supported by the resolver

+A debug message, the resolver received a message with an unsupported opcode +(it can only process QUERY opcodes). It will return a message to the sender +with the RCODE set to NOTIMP. +

XFRIN_AXFR_DATABASE_FAILURE AXFR transfer of zone %1 failed: %2

+The AXFR transfer for the given zone has failed due to a database problem. +The error is shown in the log message. +

XFRIN_AXFR_INTERNAL_FAILURE AXFR transfer of zone %1 failed: %2

+The AXFR transfer for the given zone has failed due to an internal +problem in the bind10 python wrapper library. +The error is shown in the log message. +

XFRIN_AXFR_TRANSFER_FAILURE AXFR transfer of zone %1 failed: %2

+The AXFR transfer for the given zone has failed due to a protocol error. +The error is shown in the log message. +

XFRIN_AXFR_TRANSFER_STARTED AXFR transfer of zone %1 started

+A connection to the master server has been made, the serial value in +the SOA record has been checked, and a zone transfer has been started. +

XFRIN_AXFR_TRANSFER_SUCCESS AXFR transfer of zone %1 succeeded

+The AXFR transfer of the given zone was successfully completed. +

XFRIN_BAD_MASTER_ADDR_FORMAT bad format for master address: %1

+The given master address is not a valid IP address. +

XFRIN_BAD_MASTER_PORT_FORMAT bad format for master port: %1

+The master port as read from the configuration is not a valid port number. +

XFRIN_BAD_TSIG_KEY_STRING bad TSIG key string: %1

+The TSIG key string as read from the configuration does not represent +a valid TSIG key. +

XFRIN_BAD_ZONE_CLASS Invalid zone class: %1

+The zone class as read from the configuration is not a valid DNS class. +

XFRIN_CC_SESSION_ERROR error reading from cc channel: %1

+There was a problem reading from the command and control channel. The +most likely cause is that the msgq daemon is not running. +

XFRIN_COMMAND_ERROR error while executing command '%1': %2

+There was an error while the given command was being processed. The +error is given in the log message. +

XFRIN_CONNECT_MASTER error connecting to master at %1: %2

+There was an error opening a connection to the master. The error is +shown in the log message. +

XFRIN_IMPORT_DNS error importing python DNS module: %1

+There was an error importing the python DNS module pydnspp. The most +likely cause is a PYTHONPATH problem. +

XFRIN_MSGQ_SEND_ERROR error while contacting %1 and %2

+There was a problem sending a message to the xfrout module or the +zone manager. This most likely means that the msgq daemon has quit or +was killed. +

XFRIN_MSGQ_SEND_ERROR_ZONE_MANAGER error while contacting %1

+There was a problem sending a message to the zone manager. This most +likely means that the msgq daemon has quit or was killed. +

XFRIN_RETRANSFER_UNKNOWN_ZONE got notification to retransfer unknown zone %1

+There was an internal command to retransfer the given zone, but the +zone is not known to the system. This may indicate that the configuration +for xfrin is incomplete, or there was a typographical error in the +zone name in the configuration. +

XFRIN_STARTING starting xfrin with command line '%1'

+An informational message, this is output when the xfrin daemon starts up. +

XFRIN_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down

+There was a keyboard interrupt signal to stop the xfrin daemon. The +daemon will now shut down. +

XFRIN_UNKNOWN_ERROR unknown error: %1

+An uncaught exception was raised while running the xfrin daemon. The +exception message is printed in the log message. +

XFROUT_AXFR_TRANSFER_DONE transfer of %1/%2 complete

+The transfer of the given zone has been completed successfully, or was +aborted due to a shutdown event. +

XFROUT_AXFR_TRANSFER_ERROR error transferring zone %1/%2: %3

+An uncaught exception was encountered while sending the response to +an AXFR query. The error message of the exception is included in the +log message, but this error most likely points to incomplete exception +handling in the code. +

XFROUT_AXFR_TRANSFER_FAILED transfer of %1/%2 failed, rcode: %3

+A transfer out for the given zone failed. An error response is sent +to the client. The given rcode is the rcode that is set in the error +response. This is either NOTAUTH (we are not authoritative for the +zone), SERVFAIL (our internal database is missing the SOA record for +the zone), or REFUSED (the limit of simultaneous outgoing AXFR +transfers, as specified by the configuration value +Xfrout/max_transfers_out, has been reached). +
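The rcode selection described above amounts to a three-way check; a sketch (hypothetical helper, with the Xfrout/max_transfers_out limit passed in as a parameter):

```python
def transfer_error_rcode(authoritative, soa_present, active_transfers,
                         max_transfers_out):
    """Return the error rcode for a failed transfer request, or None if
    the request can proceed, per the rules documented above."""
    if not authoritative:
        return "NOTAUTH"   # not authoritative for the zone
    if not soa_present:
        return "SERVFAIL"  # internal database is missing the SOA record
    if active_transfers >= max_transfers_out:
        return "REFUSED"   # simultaneous outgoing transfer limit reached
    return None            # no error: the transfer can start
```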

XFROUT_AXFR_TRANSFER_STARTED transfer of zone %1/%2 has started

+A transfer out of the given zone has started. +

XFROUT_BAD_TSIG_KEY_STRING bad TSIG key string: %1

+The TSIG key string as read from the configuration does not represent +a valid TSIG key. +

XFROUT_CC_SESSION_ERROR error reading from cc channel: %1

+There was a problem reading from the command and control channel. The +most likely cause is that the msgq daemon is not running. +

XFROUT_CC_SESSION_TIMEOUT_ERROR timeout waiting for cc response

+There was a problem reading a response from another module over the +command and control channel. The most likely cause is that the +configuration manager b10-cfgmgr is not running. +

XFROUT_FETCH_REQUEST_ERROR socket error while fetching a request from the auth daemon

+There was a socket error while contacting the b10-auth daemon to +fetch a transfer request. The auth daemon may have shut down. +

XFROUT_HANDLE_QUERY_ERROR error while handling query: %1

+There was a general error handling an xfrout query. The error is shown +in the message. In principle this error should not appear, and points +to an oversight in catching exceptions in the right place. However, to +ensure the daemon keeps running, this error is caught and reported. +

XFROUT_IMPORT error importing python module: %1

+There was an error importing a python module. One of the modules needed +by xfrout could not be found. This suggests that either some libraries +are missing on the system, or the PYTHONPATH variable is not correct. +The specific place where this library needs to be depends on your +system and your specific installation. +

XFROUT_NEW_CONFIG Update xfrout configuration

+New configuration settings have been sent from the configuration +manager. The xfrout daemon will now apply them. +

XFROUT_NEW_CONFIG_DONE Update xfrout configuration done

+The xfrout daemon is now done reading the new configuration settings +received from the configuration manager. +

XFROUT_NOTIFY_COMMAND received command to send notifies for %1/%2

+The xfrout daemon received a command on the command channel that +NOTIFY packets should be sent for the given zone. +

XFROUT_PARSE_QUERY_ERROR error parsing query: %1

+There was a parse error while reading an incoming query. The parse +error is shown in the log message. A remote client sent a packet we +do not understand or support. The xfrout request will be ignored. +In general, this should only occur for unexpected problems like +memory allocation failures, as the query should already have been +parsed by the b10-auth daemon, before it was passed here. +

XFROUT_PROCESS_REQUEST_ERROR error processing transfer request: %2

+There was an error processing a transfer request. The error is included +in the log message, but at this point no specific information other +than that could be given. This points to incomplete exception handling +in the code. +

XFROUT_RECEIVED_SHUTDOWN_COMMAND shutdown command received

+The xfrout daemon received a shutdown command from the command channel +and will now shut down. +

XFROUT_RECEIVE_FILE_DESCRIPTOR_ERROR error receiving the file descriptor for an XFR connection

+There was an error receiving the file descriptor for the transfer +request. Normally, the request is received by b10-auth, and passed on +to the xfrout daemon, so it can answer directly. However, there was a +problem receiving this file descriptor. The request will be ignored. +

XFROUT_REMOVE_OLD_UNIX_SOCKET_FILE_ERROR error removing unix socket file %1: %2

+The unix socket file xfrout needs for contact with the auth daemon +already exists, and needs to be removed first, but there is a problem +removing it. It is likely that we do not have permission to remove +this file. The specific error is shown in the log message. The xfrout +daemon will shut down. +

XFROUT_REMOVE_UNIX_SOCKET_FILE_ERROR error clearing unix socket file %1: %2

+When shutting down, the xfrout daemon tried to clear the unix socket +file used for communication with the auth daemon. It failed to remove +the file. The reason for the failure is given in the error message. +

XFROUT_SOCKET_SELECT_ERROR error while calling select() on request socket: %1

+There was an error while calling select() on the socket that informs +the xfrout daemon that a new xfrout request has arrived. This should +be a result of rare local error such as memory allocation failure and +shouldn't happen under normal conditions. The error is included in the +log message. +

XFROUT_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down

+There was a keyboard interrupt signal to stop the xfrout daemon. The +daemon will now shut down. +

XFROUT_STOPPING the xfrout daemon is shutting down

+The current transfer is aborted, as the xfrout daemon is shutting down. +

XFROUT_UNIX_SOCKET_FILE_IN_USE another xfrout process seems to be using the unix socket file %1

+While starting up, the xfrout daemon tried to clear the unix domain +socket needed for contacting the b10-auth daemon to pass requests +on, but the file is in use. The most likely cause is that another +xfrout daemon process is still running. This xfrout daemon (the one +printing this message) will not start.

diff --git a/doc/guide/bind10-messages.xml b/doc/guide/bind10-messages.xml index eaa8bb99a1..d146a9ca56 100644 --- a/doc/guide/bind10-messages.xml +++ b/doc/guide/bind10-messages.xml @@ -5,6 +5,12 @@ %version; ]> + @@ -62,16 +68,16 @@ - -ASIODNS_FETCHCOMP upstream fetch to %1(%2) has now completed + +ASIODNS_FETCH_COMPLETED upstream fetch to %1(%2) has now completed -A debug message, this records the the upstream fetch (a query made by the +A debug message, this records that the upstream fetch (a query made by the resolver on behalf of its client) to the specified address has completed. - -ASIODNS_FETCHSTOP upstream fetch to %1(%2) has been stopped + +ASIODNS_FETCH_STOPPED upstream fetch to %1(%2) has been stopped An external component has requested the halting of an upstream fetch. This is an allowed operation, and the message should only appear if debug is @@ -79,27 +85,27 @@ enabled. - -ASIODNS_OPENSOCK error %1 opening %2 socket to %3(%4) + +ASIODNS_OPEN_SOCKET error %1 opening %2 socket to %3(%4) The asynchronous I/O code encountered an error when trying to open a socket of the specified protocol in order to send a message to the target address. -The the number of the system error that cause the problem is given in the +The number of the system error that caused the problem is given in the message. - -ASIODNS_RECVSOCK error %1 reading %2 data from %3(%4) + +ASIODNS_READ_DATA error %1 reading %2 data from %3(%4) -The asynchronous I/O code encountered an error when trying read data from -the specified address on the given protocol. The the number of the system +The asynchronous I/O code encountered an error when trying to read data from +the specified address on the given protocol. The number of the system error that caused the problem is given in the message. - -ASIODNS_RECVTMO receive timeout while waiting for data from %1(%2) + +ASIODNS_READ_TIMEOUT receive timeout while waiting for data from %1(%2) An upstream fetch from the specified address timed out.
This may happen for any number of reasons and is most probably a problem at the remote server @@ -108,8 +114,8 @@ enabled. - -ASIODNS_SENDSOCK error %1 sending data using %2 to %3(%4) + +ASIODNS_SEND_DATA error %1 sending data using %2 to %3(%4) The asynchronous I/O code encountered an error when trying send data to the specified address on the given protocol. The the number of the system @@ -117,20 +123,674 @@ error that cause the problem is given in the message. - -ASIODNS_UNKORIGIN unknown origin for ASIO error code %1 (protocol: %2, address %3) + +ASIODNS_UNKNOWN_ORIGIN unknown origin for ASIO error code %1 (protocol: %2, address %3) -This message should not appear and indicates an internal error if it does. -Please enter a bug report. +An internal consistency check on the origin of a message from the +asynchronous I/O module failed. This may indicate an internal error; +please submit a bug report. - -ASIODNS_UNKRESULT unknown result (%1) when IOFetch::stop() was executed for I/O to %2(%3) + +ASIODNS_UNKNOWN_RESULT unknown result (%1) when IOFetch::stop() was executed for I/O to %2(%3) -The termination method of the resolver's upstream fetch class was called with -an unknown result code (which is given in the message). This message should -not appear and may indicate an internal error. Please enter a bug report. +An internal error indicating that the termination method of the resolver's +upstream fetch class was called with an unknown result code (which is +given in the message). Please submit a bug report. + + + + +AUTH_AXFR_ERROR error handling AXFR request: %1 + +This is a debug message produced by the authoritative server when it +has encountered an error processing an AXFR request. The message gives +the reason for the error, and the server will return a SERVFAIL code to +the sender. + + + + +AUTH_AXFR_UDP AXFR query received over UDP + +This is a debug message output when the authoritative server has received +an AXFR query over UDP. 
Use of UDP for AXFRs is not permitted by the +protocol, so the server will return a FORMERR error to the sender. + + + + +AUTH_COMMAND_FAILED execution of command channel instruction '%1' failed: %2 + +Execution of the specified command by the authoritative server failed. The +message contains the reason for the failure. + + + + +AUTH_CONFIG_CHANNEL_CREATED configuration session channel created + +This is a debug message indicating that the authoritative server has created +the channel to the configuration manager. It is issued during server +startup as an indication that the initialization is proceeding normally. + + + + +AUTH_CONFIG_CHANNEL_ESTABLISHED configuration session channel established + +This is a debug message indicating that the authoritative server +has established communication with the configuration manager over the +previously-created channel. It is issued during server startup as an +indication that the initialization is proceeding normally. + + + + +AUTH_CONFIG_CHANNEL_STARTED configuration session channel started + +This is a debug message, issued when the authoritative server has +posted a request to be notified when new configuration information is +available. It is issued during server startup as an indication that +the initialization is proceeding normally. + + + + +AUTH_CONFIG_LOAD_FAIL load of configuration failed: %1 + +An attempt to configure the server with information from the configuration +database during the startup sequence has failed. (The reason for +the failure is given in the message.) The server will continue its +initialization although it may not be configured in the desired way. + + + + +AUTH_CONFIG_UPDATE_FAIL update of configuration failed: %1 + +An attempt to update the configuration of the server with information +from the configuration database has failed, the reason being given in +the message. 
+ + + + +AUTH_DATA_SOURCE data source database file: %1 + +This is a debug message produced by the authoritative server when it accesses a +database data source, listing the file that is being accessed. + + + + +AUTH_DNS_SERVICES_CREATED DNS services created + +This is a debug message indicating that the component that will handle +incoming queries for the authoritative server (DNSServices) has been +successfully created. It is issued during server startup as an indication +that the initialization is proceeding normally. + + + + +AUTH_HEADER_PARSE_FAIL unable to parse header in received DNS packet: %1 + +This is a debug message, generated by the authoritative server when an +attempt to parse the header of a received DNS packet has failed. (The +reason for the failure is given in the message.) The server will drop the +packet. + + + + +AUTH_LOAD_TSIG loading TSIG keys + +This is a debug message indicating that the authoritative server +has requested the keyring holding TSIG keys from the configuration +database. It is issued during server startup as an indication that the +initialization is proceeding normally. + + + + +AUTH_LOAD_ZONE loaded zone %1/%2 + +This debug message is issued during the processing of the 'loadzone' command +when the authoritative server has successfully loaded the named zone of the +named class. + + + + +AUTH_MEM_DATASRC_DISABLED memory data source is disabled for class %1 + +This is a debug message reporting that the authoritative server has +discovered that the memory data source is disabled for the given class. + + + + +AUTH_MEM_DATASRC_ENABLED memory data source is enabled for class %1 + +This is a debug message reporting that the authoritative server has +discovered that the memory data source is enabled for the given class. 
+ + + + +AUTH_NOTIFY_QUESTIONS invalid number of questions (%1) in incoming NOTIFY + +This debug message is logged by the authoritative server when it receives +a NOTIFY packet that contains zero or more than one question. (A valid +NOTIFY packet contains one question.) The server will return a FORMERR +error to the sender. + + + + +AUTH_NOTIFY_RRTYPE invalid question RR type (%1) in incoming NOTIFY + +This debug message is logged by the authoritative server when it receives +a NOTIFY packet that contains an RR type of something other than SOA in the +question section. (The RR type received is included in the message.) The +server will return a FORMERR error to the sender. + + + + +AUTH_NO_STATS_SESSION session interface for statistics is not available + +The authoritative server had no session with the statistics module at the +time it attempted to send it data: the attempt has been abandoned. This +could be an error in configuration. + + + + +AUTH_NO_XFRIN received NOTIFY but XFRIN session is not running + +This is a debug message produced by the authoritative server when it receives +a NOTIFY packet but the XFRIN process is not running. The packet will be +dropped and nothing returned to the sender. + + + + +AUTH_PACKET_PARSE_ERROR unable to parse received DNS packet: %1 + +This is a debug message, generated by the authoritative server when an +attempt to parse a received DNS packet has failed due to something other +than a protocol error. The reason for the failure is given in the message; +the server will return a SERVFAIL error code to the sender. + + + + +AUTH_PACKET_PROTOCOL_ERROR DNS packet protocol error: %1. Returning %2 + +This is a debug message, generated by the authoritative server when an +attempt to parse a received DNS packet has failed due to a protocol error. +The reason for the failure is given in the message, as is the error code +that will be returned to the sender. 
+ + + + +AUTH_PACKET_RECEIVED message received:\n%1 + +This is a debug message output by the authoritative server when it +receives a valid DNS packet. + +Note: This message includes the packet received, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. + + + + +AUTH_PROCESS_FAIL message processing failure: %1 + +This message is generated by the authoritative server when it has +encountered an internal error whilst processing a received packet: +the cause of the error is included in the message. + +The server will return a SERVFAIL error code to the sender of the packet. +However, this message indicates a potential error in the server. +Please open a bug ticket for this issue. + + + + +AUTH_RECEIVED_COMMAND command '%1' received + +This is a debug message issued when the authoritative server has received +a command on the command channel. + + + + +AUTH_RECEIVED_SENDSTATS command 'sendstats' received + +This is a debug message issued when the authoritative server has received +a command from the statistics module to send it data. The 'sendstats' +command is handled differently to other commands, which is why the debug +message associated with it has its own code. + + + + +AUTH_RESPONSE_RECEIVED received response message, ignoring + +This is a debug message, this is output if the authoritative server +receives a DNS packet with the QR bit set, i.e. a DNS response. The +server ignores the packet as it only responds to question packets. + + + + +AUTH_SEND_ERROR_RESPONSE sending an error response (%1 bytes):\n%2 + +This is a debug message recording that the authoritative server is sending +an error response to the originator of the query. A previous message will +have recorded details of the failure. 
+ +Note: This message includes the packet sent, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. + + + + +AUTH_SEND_NORMAL_RESPONSE sending a normal response (%1 bytes):\n%2 + +This is a debug message recording that the authoritative server is sending +a response to the originator of a query. + +Note: This message includes the packet sent, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. + + + + +AUTH_SERVER_CREATED server created + +An informational message indicating that the authoritative server process has +been created and is initializing. The AUTH_SERVER_STARTED message will be +output when initialization has successfully completed and the server starts +accepting queries. + + + + +AUTH_SERVER_FAILED server failed: %1 + +The authoritative server has encountered a fatal error and is terminating. The +reason for the failure is included in the message. + + + + +AUTH_SERVER_STARTED server started + +Initialization of the authoritative server has completed successfully +and it is entering the main loop, waiting for queries to arrive. + + + + +AUTH_SQLITE3 nothing to do for loading sqlite3 + +This is a debug message indicating that the authoritative server has +found that the data source it is loading is an SQLite3 data source, +so no further validation is needed. + + + + +AUTH_STATS_CHANNEL_CREATED STATS session channel created + +This is a debug message indicating that the authoritative server has +created a channel to the statistics process. It is issued during server +startup as an indication that the initialization is proceeding normally. 
+ + + + +AUTH_STATS_CHANNEL_ESTABLISHED STATS session channel established + +This is a debug message indicating that the authoritative server +has established communication over the previously created statistics +channel. It is issued during server startup as an indication that the +initialization is proceeding normally. + + + + +AUTH_STATS_COMMS communication error in sending statistics data: %1 + +An error was encountered when the authoritative server tried to send data +to the statistics daemon. The message includes additional information +describing the reason for the failure. + + + + +AUTH_STATS_TIMEOUT timeout while sending statistics data: %1 + +The authoritative server sent data to the statistics daemon but received +no acknowledgement within the specified time. The message includes +additional information describing the reason for the failure. + + + + +AUTH_STATS_TIMER_DISABLED statistics timer has been disabled + +This is a debug message indicating that the statistics timer has been +disabled in the authoritative server and no statistics information is +being produced. + + + + +AUTH_STATS_TIMER_SET statistics timer set to %1 second(s) + +This is a debug message indicating that the statistics timer has been +enabled and that the authoritative server will produce statistics data +at the specified interval. + + + + +AUTH_UNSUPPORTED_OPCODE unsupported opcode: %1 + +This is a debug message, produced when a received DNS packet being +processed by the authoritative server has been found to contain an +unsupported opcode. (The opcode is included in the message.) The server +will return an error code of NOTIMPL to the sender. + + + + +AUTH_XFRIN_CHANNEL_CREATED XFRIN session channel created + +This is a debug message indicating that the authoritative server has +created a channel to the XFRIN (Transfer-in) process. It is issued +during server startup as an indication that the initialization is +proceeding normally. 
+ + + + +AUTH_XFRIN_CHANNEL_ESTABLISHED XFRIN session channel established + +This is a debug message indicating that the authoritative server has +established communication over the previously-created channel to the +XFRIN (Transfer-in) process. It is issued during server startup as an +indication that the initialization is proceeding normally. + + + + +AUTH_ZONEMGR_COMMS error communicating with zone manager: %1 + +This is a debug message output during the processing of a NOTIFY request. +An error (listed in the message) has been encountered whilst communicating +with the zone manager. The NOTIFY request will not be honored. + + + + +AUTH_ZONEMGR_ERROR received error response from zone manager: %1 + +This is a debug message output during the processing of a NOTIFY +request. The zone manager component has been informed of the request, +but has returned an error response (which is included in the message). The +NOTIFY request will not be honored. + + + + +CC_ASYNC_READ_FAILED asynchronous read failed + +This marks a low-level error: the program tried to read data from the message queue +daemon asynchronously, but the ASIO library returned an error. + + + + +CC_CONN_ERROR error connecting to message queue (%1) + +It is impossible to reach the message queue daemon for the reason given. It +is unlikely that the program will be able to continue running, as +communication with the rest of BIND 10 is vital +for its components. + + + + +CC_DISCONNECT disconnecting from message queue daemon + +The library is disconnecting from the message queue daemon. This debug message +indicates that the program is trying to shut down gracefully. + + + + +CC_ESTABLISH trying to establish connection with message queue daemon at %1 + +This debug message indicates that the command channel library is about to +connect to the message queue daemon, which should be listening on the UNIX-domain +socket listed in the output. 
+ + + + +CC_ESTABLISHED successfully connected to message queue daemon + +This debug message indicates that the connection was successfully made; this +should follow CC_ESTABLISH. + + + + +CC_GROUP_RECEIVE trying to receive a message + +Debug message, noting that a message is expected to come over the command +channel. + + + + +CC_GROUP_RECEIVED message arrived ('%1', '%2') + +Debug message, noting that we successfully received a message (its envelope and +payload listed). This follows CC_GROUP_RECEIVE, but might happen some time +later, depending on whether we waited for it or just polled. + + + + +CC_GROUP_SEND sending message '%1' to group '%2' + +Debug message, we're about to send a message over the command channel. + + + + +CC_INVALID_LENGTHS invalid length parameters (%1, %2) + +This happens when garbage comes over the command channel or some kind of +confusion happens in the program. The data received from the socket makes no +sense if interpreted as message lengths. The first is the total length +of the message, the second the length of the header. The header and its length +field (2 bytes) are counted in the total length. + + + + +CC_LENGTH_NOT_READY length not ready + +There should be data representing the length of the message on the socket, but it +is not there. + + + + +CC_NO_MESSAGE no message ready to be received yet + +The program polled for incoming messages, but there was no message waiting. +This is a debug message which may happen only after CC_GROUP_RECEIVE. + + + + +CC_NO_MSGQ unable to connect to message queue (%1) + +It isn't possible to connect to the message queue daemon, for the reason listed. +It is unlikely any program will be able to continue without the communication. + + + + +CC_READ_ERROR error reading data from command channel (%1) + +A low-level error happened when the library tried to read data from the +command channel socket. The reason is listed. 
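The length check implied by the CC_INVALID_LENGTHS description above can be sketched as follows (the 4-byte width and big-endian order of the total-length field are assumptions for illustration; only the 2-byte header length is stated in the message text):

```python
import struct

def check_cc_lengths(prefix):
    """Parse and validate the two length fields described above: a total
    message length (assumed here to be a 4-byte big-endian integer)
    followed by a 2-byte header length. The header and its 2-byte length
    field are counted inside the total length, so header_len + 2 must
    not exceed total_len."""
    total_len, header_len = struct.unpack(">IH", prefix[:6])
    if total_len == 0:
        raise ValueError("invalid message length (0)")
    if header_len + 2 > total_len:
        raise ValueError("invalid length parameters (%d, %d)"
                         % (total_len, header_len))
    return total_len, header_len
```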
+ + + + +CC_READ_EXCEPTION error reading data from command channel (%1) + +We received an exception while trying to read data from the command +channel socket. The reason is listed. + + + + +CC_REPLY replying to message from '%1' with '%2' + +Debug message, noting we're sending a response to the original message +with the given envelope. + + + + +CC_SET_TIMEOUT setting timeout to %1ms + +Debug message. A timeout for which the program is willing to wait for a reply +is being set. + + + + +CC_START_READ starting asynchronous read + +Debug message. From now on, when a message (or command) comes, it'll wake the +program and the library will automatically pass it over to the correct place. + + + + +CC_SUBSCRIBE subscribing to communication group %1 + +Debug message. The program wants to receive messages addressed to this group. + + + + +CC_TIMEOUT timeout reading data from command channel + +The program waited too long for data from the command channel (usually when it +sent a query to a different program and it didn't answer for whatever reason). + + + + +CC_UNSUBSCRIBE unsubscribing from communication group %1 + +Debug message. The program no longer wants to receive messages addressed to +this group. + + + + +CC_WRITE_ERROR error writing data to command channel (%1) + +A low-level error happened when the library tried to write data to the command +channel socket. + + + + +CC_ZERO_LENGTH invalid message length (0) + +The library received a message length of zero, which makes no sense, since +all messages must contain at least the envelope. + + + + +CFGMGR_AUTOMATIC_CONFIG_DATABASE_UPDATE Updating configuration database from version %1 to %2 + +An older version of the configuration database has been found, from which +there was an automatic upgrade path to the current version. These changes +are now applied, and no action from the administrator is necessary.
+ + + + +CFGMGR_CC_SESSION_ERROR Error connecting to command channel: %1 + +The configuration manager daemon was unable to connect to the messaging +system. The most likely cause is that msgq is not running. + + + + +CFGMGR_DATA_READ_ERROR error reading configuration database from disk: %1 + +There was a problem reading the persistent configuration data as stored +on disk. The file may be corrupted, or it is of a version from which +there is no automatic upgrade path. The file needs to be repaired or +removed. The configuration manager daemon will now shut down. + + + + +CFGMGR_IOERROR_WHILE_WRITING_CONFIGURATION Unable to write configuration file; configuration not stored: %1 + +There was an IO error from the system while the configuration manager +was trying to write the configuration database to disk. The specific +error is given. The most likely cause is that the directory where +the file is stored does not exist, or is not writable. The updated +configuration is not stored. + + + + +CFGMGR_OSERROR_WHILE_WRITING_CONFIGURATION Unable to write configuration file; configuration not stored: %1 + +There was an OS error from the system while the configuration manager +was trying to write the configuration database to disk. The specific +error is given. The most likely cause is that the system does not have +write access to the configuration database file. The updated +configuration is not stored. + + + + +CFGMGR_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down + +There was a keyboard interrupt signal to stop the cfgmgr daemon. The +daemon will now shut down. @@ -148,32 +808,18 @@ The message itself is ignored by this module. CONFIG_CCSESSION_MSG_INTERNAL error handling CC session message: %1 -There was an internal problem handling an incoming message on the -command and control channel. An unexpected exception was thrown. This -most likely points to an internal inconsistency in the module code.
The -exception message is appended to the log error, and the module will -continue to run, but will not send back an answer. +There was an internal problem handling an incoming message on the command +and control channel. An unexpected exception was thrown, details of +which are appended to the message. The module will continue to run, +but will not send back an answer. + +The most likely cause is a programming error. Please raise +a bug report. - -CONFIG_FOPEN_ERR error opening %1: %2 - -There was an error opening the given file. - - - - -CONFIG_JSON_PARSE JSON parse error in %1: %2 - -There was a parse error in the JSON file. The given file does not appear -to be in valid JSON format. Please verify that the filename is correct -and that the contents are valid JSON. - - - - -CONFIG_MANAGER_CONFIG error getting configuration from cfgmgr: %1 + +CONFIG_GET_FAIL error getting configuration from cfgmgr: %1 The configuration manager returned an error when this module requested the configuration. The full error message answer from the configuration @@ -183,23 +829,40 @@ running configuration manager. - -CONFIG_MANAGER_MOD_SPEC module specification not accepted by cfgmgr: %1 + +CONFIG_JSON_PARSE JSON parse error in %1: %2 -The module specification file for this module was rejected by the -configuration manager. The full error message answer from the -configuration manager is appended to the log error. The most likely -cause is that the module is of a different (specification file) version -than the running configuration manager. +There was an error parsing the JSON file. The given file does not appear +to be in valid JSON format. Please verify that the filename is correct +and that the contents are valid JSON. - -CONFIG_MODULE_SPEC module specification error in %1: %2 + +CONFIG_MOD_SPEC_FORMAT module specification error in %1: %2 -The given file does not appear to be a valid specification file.
Please -verify that the filename is correct and that its contents are a valid -BIND10 module specification. +The given file does not appear to be a valid specification file: details +are included in the message. Please verify that the filename is correct +and that its contents are a valid BIND10 module specification. + + + + +CONFIG_MOD_SPEC_REJECT module specification rejected by cfgmgr: %1 + +The specification file for this module was rejected by the configuration +manager. The full error message answer from the configuration manager is +appended to the log error. The most likely cause is that the module is of +a different (specification file) version than the running configuration +manager. + + + + +CONFIG_OPEN_FAIL error opening %1: %2 + +There was an error opening the given file. The reason for the failure +is included in the message. @@ -349,7 +1012,7 @@ returning the CNAME instead. DATASRC_MEM_CNAME_COEXIST can't add data to CNAME in domain '%1' This is the same problem as in MEM_CNAME_TO_NONEMPTY, but it happened the -other way around -- adding some outher data to CNAME. +other way around -- adding some other data to CNAME. @@ -401,7 +1064,7 @@ Debug information. A DNAME was found instead of the requested information. -DATASRC_MEM_DNAME_NS dNAME and NS can't coexist in non-apex domain '%1' +DATASRC_MEM_DNAME_NS DNAME and NS can't coexist in non-apex domain '%1' It was requested for DNAME and NS records to be put into the same domain which is not the apex (the top of the zone). This is forbidden by RFC @@ -544,7 +1207,7 @@ behaviour is specified by RFC 1034, section 4.3.3 -DATASRC_MEM_WILDCARD_DNAME dNAME record in wildcard domain '%1' +DATASRC_MEM_WILDCARD_DNAME DNAME record in wildcard domain '%1' The software refuses to load DNAME records into a wildcard domain. It isn't explicitly forbidden, but the protocol is ambiguous about how this should @@ -554,7 +1217,7 @@ different tools. 
-DATASRC_MEM_WILDCARD_NS nS record in wildcard domain '%1' +DATASRC_MEM_WILDCARD_NS NS record in wildcard domain '%1' The software refuses to load NS records into a wildcard domain. It isn't explicitly forbidden, but the protocol is ambiguous about how this should @@ -666,7 +1329,7 @@ way down to the given domain. -DATASRC_QUERY_EMPTY_CNAME cNAME at '%1' is empty +DATASRC_QUERY_EMPTY_CNAME CNAME at '%1' is empty There was an CNAME and it was being followed. But it contains no records, so there's nowhere to go. There will be no answer. This indicates a problem @@ -905,7 +1568,7 @@ already. The code is 1 for error, 2 for not implemented. -DATASRC_QUERY_TOO_MANY_CNAMES cNAME chain limit exceeded at '%1' +DATASRC_QUERY_TOO_MANY_CNAMES CNAME chain limit exceeded at '%1' A CNAME led to another CNAME and it led to another, and so on. After 16 CNAMEs, the software gave up. Long CNAME chains are discouraged, and this @@ -962,14 +1625,14 @@ Debug information. The SQLite data source is closing the database file. -DATASRC_SQLITE_CREATE sQLite data source created +DATASRC_SQLITE_CREATE SQLite data source created Debug information. An instance of SQLite data source is being created. -DATASRC_SQLITE_DESTROY sQLite data source destroyed +DATASRC_SQLITE_DESTROY SQLite data source destroyed Debug information. An instance of SQLite data source is being destroyed. @@ -978,7 +1641,7 @@ Debug information. An instance of SQLite data source is being destroyed. DATASRC_SQLITE_ENCLOSURE looking for zone containing '%1' -Debug information. The SQLite data source is trying to identify, which zone +Debug information. The SQLite data source is trying to identify which zone should hold this domain. @@ -986,7 +1649,7 @@ should hold this domain. DATASRC_SQLITE_ENCLOSURE_NOTFOUND no zone contains it -Debug information. The last SQLITE_ENCLOSURE query was unsuccessful, there's +Debug information. The last SQLITE_ENCLOSURE query was unsuccessful; there's no such zone in our data. 
@@ -1050,7 +1713,7 @@ a referral and where it goes. DATASRC_SQLITE_FINDREF_BAD_CLASS class mismatch looking for referral ('%1' and '%2') -The SQLite data source was trying to identify, if there's a referral. But +The SQLite data source was trying to identify if there's a referral. But it contains different class than the query was for. @@ -1143,294 +1806,325 @@ generated. - -LOGIMPL_ABOVEDBGMAX debug level of %1 is too high and will be set to the maximum of %2 + +LOGIMPL_ABOVE_MAX_DEBUG debug level of %1 is too high and will be set to the maximum of %2 -A message from the underlying logger implementation code, the debug level -(as set by the string DEBGUGn) is above the maximum allowed value and has -been reduced to that value. +A message from the interface to the underlying logger implementation reporting +that the debug level (as set by an internally-created string DEBUGn, where n +is an integer, e.g. DEBUG22) is above the maximum allowed value and has +been reduced to that value. The appearance of this message may indicate +a programming error - please submit a bug report. - -LOGIMPL_BADDEBUG debug string is '%1': must be of the form DEBUGn + +LOGIMPL_BAD_DEBUG_STRING debug string '%1' has invalid format -The string indicating the extended logging level (used by the underlying -logger implementation code) is not of the stated form. In particular, -it starts DEBUG but does not end with an integer. +A message from the interface to the underlying logger implementation +reporting that an internally-created string used to set the debug level +is not of the correct format (it should be of the form DEBUGn, where n +is an integer, e.g. DEBUG22). The appearance of this message indicates +a programming error - please submit a bug report. 
- -LOGIMPL_BELOWDBGMIN debug level of %1 is too low and will be set to the minimum of %2 + +LOGIMPL_BELOW_MIN_DEBUG debug level of %1 is too low and will be set to the minimum of %2 -A message from the underlying logger implementation code, the debug level -(as set by the string DEBGUGn) is below the minimum allowed value and has -been increased to that value. +A message from the interface to the underlying logger implementation reporting +that the debug level (as set by an internally-created string DEBUGn, where n +is an integer, e.g. DEBUG22) is below the minimum allowed value and has +been increased to that value. The appearance of this message may indicate +a programming error - please submit a bug report. - -MSG_BADDESTINATION unrecognized log destination: %1 + +LOG_BAD_DESTINATION unrecognized log destination: %1 A logger destination value was given that was not recognized. The destination should be one of "console", "file", or "syslog". - -MSG_BADSEVERITY unrecognized log severity: %1 + +LOG_BAD_SEVERITY unrecognized log severity: %1 A logger severity value was given that was not recognized. The severity should be one of "DEBUG", "INFO", "WARN", "ERROR", or "FATAL". - -MSG_BADSTREAM bad log console output stream: %1 + +LOG_BAD_STREAM bad log console output stream: %1 -A log console output stream was given that was not recognized. The -output stream should be one of "stdout", or "stderr" +A log console output stream was given that was not recognized. The output +stream should be one of "stdout" or "stderr". - -MSG_DUPLNS line %1: duplicate $NAMESPACE directive found + +LOG_DUPLICATE_MESSAGE_ID duplicate message ID (%1) in compiled code -When reading a message file, more than one $NAMESPACE directive was found. In -this version of the code, such a condition is regarded as an error and the -read will be abandoned. +During start-up, BIND10 detected that the given message identification had +been defined multiple times in the BIND10 code.
+ +This has no ill-effects other than the possibility that an erroneous +message may be logged. However, as it is indicative of a programming +error, please submit a bug report. - -MSG_DUPMSGID duplicate message ID (%1) in compiled code + +LOG_DUPLICATE_NAMESPACE line %1: duplicate $NAMESPACE directive found -Indicative of a programming error, when it started up, BIND10 detected that -the given message ID had been registered by one or more modules. (All message -IDs should be unique throughout BIND10.) This has no impact on the operation -of the server other that erroneous messages may be logged. (When BIND10 loads -the message IDs (and their associated text), if a duplicate ID is found it is -discarded. However, when the module that supplied the duplicate ID logs that -particular message, the text supplied by the module that added the original -ID will be output - something that may bear no relation to the condition being -logged. +When reading a message file, more than one $NAMESPACE directive was found. +Such a condition is regarded as an error and the read will be abandoned. - -MSG_IDNOTFND could not replace message text for '%1': no such message + +LOG_INPUT_OPEN_FAIL unable to open message file %1 for input: %2 + +The program was not able to open the specified input message file for +the reason given. + + + + +LOG_INVALID_MESSAGE_ID line %1: invalid message identification '%2' + +An invalid message identification (ID) has been found during the read of +a message file. Message IDs should comprise only alphanumeric characters +and the underscore, and should not start with a digit. + + + + +LOG_NAMESPACE_EXTRA_ARGS line %1: $NAMESPACE directive has too many arguments + +The $NAMESPACE directive in a message file takes a single argument, a +namespace in which all the generated symbol names are placed. This error +is generated when the compiler finds a $NAMESPACE directive with more +than one argument.
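The validity rule quoted in LOG_INVALID_MESSAGE_ID (alphanumeric characters and the underscore only, no leading digit) can be sketched as a simple check. This is illustrative only, not the message compiler's actual code; whether a leading underscore is accepted is not stated in the text, and this sketch permits it:

```python
import re

# Rule from LOG_INVALID_MESSAGE_ID: only alphanumerics and underscore,
# and the first character must not be a digit.
_MSG_ID_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def is_valid_message_id(msg_id: str) -> bool:
    """Return True if msg_id satisfies the stated ID rule."""
    return bool(_MSG_ID_RE.match(msg_id))
```

So `LOG_BAD_SEVERITY` passes, while an ID starting with a digit (e.g. `2ND_MESSAGE`) or containing a hyphen is rejected.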
+ + + + +LOG_NAMESPACE_INVALID_ARG line %1: $NAMESPACE directive has an invalid argument ('%2') + +The $NAMESPACE argument in a message file should be a valid C++ namespace. +This message is output if the simple check on the syntax of the string +carried out by the reader fails. + + + + +LOG_NAMESPACE_NO_ARGS line %1: no arguments were given to the $NAMESPACE directive + +The $NAMESPACE directive in a message file takes a single argument, +a C++ namespace in which all the generated symbol names are placed. +This error is generated when the compiler finds a $NAMESPACE directive +with no arguments. + + + + +LOG_NO_MESSAGE_ID line %1: message definition line found without a message ID + +Within a message file, messages are defined by lines starting with a "%". +The rest of the line should comprise the message ID and text describing +the message. This error indicates the message compiler found a line in +the message file comprising just the "%" and nothing else. + + + + +LOG_NO_MESSAGE_TEXT line %1: line found containing a message ID ('%2') and no text + +Within a message file, messages are defined by lines starting with a "%". +The rest of the line should comprise the message ID and text describing +the message. This error indicates the message compiler found a line +in the message file comprising just the "%" and the message identification, +but no text. + + + + +LOG_NO_SUCH_MESSAGE could not replace message text for '%1': no such message During start-up a local message file was read. A line with the listed -message identification was found in the file, but the identification is not -one contained in the compiled-in message dictionary. Either the message -identification has been mis-spelled in the file, or the local file was used -for an earlier version of the software and the message with that -identification has been removed. +message identification was found in the file, but the identification is +not one contained in the compiled-in message dictionary.
This message +may appear a number of times in the file, once for every such unknown +message identification. -This message may appear a number of times in the file, once for every such -unknown message identification. +There may be several reasons why this message appears: + +- The message ID has been mis-spelled in the local message file. + +- The program outputting the message may not use that particular message +(e.g. it originates in a module not used by the program). + +- The local file was written for an earlier version of the BIND10 software +and the later version no longer generates that message. + +Whatever the reason, there is no impact on the operation of BIND10. - -MSG_INVMSGID line %1: invalid message identification '%2' + +LOG_OPEN_OUTPUT_FAIL unable to open %1 for output: %2 -The concatenation of the prefix and the message identification is used as -a symbol in the C++ module; as such it may only contain +Originating within the logging code, the program was not able to open +the specified output file for the reason given. - -MSG_NOMSGID line %1: message definition line found without a message ID + +LOG_PREFIX_EXTRA_ARGS line %1: $PREFIX directive has too many arguments -Message definition lines are lines starting with a "%". The rest of the line -should comprise the message ID and text describing the message. This error -indicates the message compiler found a line in the message file comprising -just the "%" and nothing else. +Within a message file, the $PREFIX directive takes a single argument, +a prefix to be added to the symbol names when a C++ file is created. +This error is generated when the compiler finds a $PREFIX directive with +more than one argument. + +Note: the $PREFIX directive is deprecated and will be removed in a future +version of BIND10.
- -MSG_NOMSGTXT line %1: line found containing a message ID ('%2') and no text + +LOG_PREFIX_INVALID_ARG line %1: $PREFIX directive has an invalid argument ('%2') -Message definition lines are lines starting with a "%". The rest of the line -should comprise the message ID and text describing the message. This error -is generated when a line is found in the message file that contains the -leading "%" and the message identification but no text. +Within a message file, the $PREFIX directive takes a single argument, +a prefix to be added to the symbol names when a C++ file is created. +As such, it must adhere to restrictions on C++ symbol names (e.g. may +only contain alphanumeric characters or underscores, and may not start +with a digit). A $PREFIX directive was found with an argument (given +in the message) that violates those restrictions. + +Note: the $PREFIX directive is deprecated and will be removed in a future +version of BIND10. - -MSG_NSEXTRARG line %1: $NAMESPACE directive has too many arguments + +LOG_READING_LOCAL_FILE reading local message file %1 -The $NAMESPACE directive takes a single argument, a namespace in which all the -generated symbol names are placed. This error is generated when the -compiler finds a $NAMESPACE directive with more than one argument. +This is an informational message output by BIND10 when it starts to read +a local message file. (A local message file may replace the text of +one or more messages; the ID of the message will not be changed though.) - -MSG_NSINVARG line %1: $NAMESPACE directive has an invalid argument ('%2') - -The $NAMESPACE argument should be a valid C++ namespace. The reader does a -cursory check on its validity, checking that the characters in the namespace -are correct. The error is generated when the reader finds an invalid -character. (Valid are alphanumeric characters, underscores and colons.)
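Taken together, the directives and definition lines described in the entries above imply a message file of roughly this shape. This is an illustrative fragment only: the namespace, prefix, and message are examples, not taken from the BIND10 sources, and the $PREFIX line is shown only to illustrate the (deprecated) directive's form. $NAMESPACE names the C++ namespace for the generated symbols, $PREFIX adds a prefix to those symbol names, and a "%" line defines a message as its ID followed by its text:

```text
$NAMESPACE isc::example
$PREFIX LOG_

% BAD_SEVERITY unrecognized log severity: %1
```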
- - - - -MSG_NSNOARG line %1: no arguments were given to the $NAMESPACE directive - -The $NAMESPACE directive takes a single argument, a namespace in which all the -generated symbol names are placed. This error is generated when the -compiler finds a $NAMESPACE directive with no arguments. - - - - -MSG_OPENIN unable to open message file %1 for input: %2 - -The program was not able to open the specified input message file for the -reason given. - - - - -MSG_OPENOUT unable to open %1 for output: %2 - -The program was not able to open the specified output file for the reason -given. - - - - -MSG_PRFEXTRARG line %1: $PREFIX directive has too many arguments - -The $PREFIX directive takes a single argument, a prefix to be added to the -symbol names when a C++ .h file is created. This error is generated when the -compiler finds a $PREFIX directive with more than one argument. - - - - -MSG_PRFINVARG line %1: $PREFIX directive has an invalid argument ('%2') - -The $PREFIX argument is used in a symbol name in a C++ header file. As such, -it must adhere to restrictions on C++ symbol names (e.g. may only contain -alphanumeric characters or underscores, and may nor start with a digit). -A $PREFIX directive was found with an argument (given in the message) that -violates those restictions. - - - - -MSG_RDLOCMES reading local message file %1 - -This is an informational message output by BIND10 when it starts to read a -local message file. (A local message file may replace the text of one of more -messages; the ID of the message will not be changed though.) - - - - -MSG_READERR error reading from message file %1: %2 + +LOG_READ_ERROR error reading from message file %1: %2 The specified error was encountered reading from the named message file. 
- -MSG_UNRECDIR line %1: unrecognised directive '%2' + +LOG_UNRECOGNISED_DIRECTIVE line %1: unrecognised directive '%2' -A line starting with a dollar symbol was found, but the first word on the line -(shown in the message) was not a recognised message compiler directive. +Within a message file, a line starting with a dollar symbol was found +(indicating the presence of a directive) but the first word on the line +(shown in the message) was not recognised. - -MSG_WRITERR error writing to %1: %2 + +LOG_WRITE_ERROR error writing to %1: %2 -The specified error was encountered by the message compiler when writing to -the named output file. +The specified error was encountered by the message compiler when writing +to the named output file. - -NSAS_INVRESPSTR queried for %1 but got invalid response + +NSAS_FIND_NS_ADDRESS asking resolver to obtain A and AAAA records for %1 -This message indicates an internal error in the nameserver address store -component (NSAS) of the resolver. The NSAS made a query for a RR for the -specified nameserver but received an invalid response. Either the success -function was called without a DNS message or the message was invalid on some -way. (In the latter case, the error should have been picked up elsewhere in -the processing logic, hence the raising of the error here.) +A debug message issued when the NSAS (nameserver address store - part +of the resolver) is making a callback into the resolver to retrieve the +address records for the specified nameserver. - -NSAS_INVRESPTC queried for %1 RR of type/class %2/%3, received response %4/%5 + +NSAS_FOUND_ADDRESS found address %1 for %2 -This message indicates an internal error in the nameserver address store -component (NSAS) of the resolver. The NSAS made a query for the given RR -type and class, but instead received an answer with the given type and class. 
+A debug message issued when the NSAS (nameserver address store - part +of the resolver) has retrieved the given address for the specified +nameserver through an external query. - -NSAS_LOOKUPCANCEL lookup for zone %1 has been cancelled + +NSAS_INVALID_RESPONSE queried for %1 but got invalid response -A debug message, this is output when a NSAS (nameserver address store - -part of the resolver) lookup for a zone has been cancelled. +The NSAS (nameserver address store - part of the resolver) made a query +for an RR for the specified nameserver but received an invalid response. +Either the success function was called without a DNS message or the +message was invalid in some way. (In the latter case, the error should +have been picked up elsewhere in the processing logic, hence the raising +of the error here.) + +This message indicates an internal error in the NSAS. Please raise a +bug report. - -NSAS_LOOKUPZONE searching NSAS for nameservers for zone %1 + +NSAS_LOOKUP_CANCEL lookup for zone %1 has been canceled -A debug message, this is output when a call is made to the nameserver address -store (part of the resolver) to obtain the nameservers for the specified zone. +A debug message issued when an NSAS (nameserver address store - part of +the resolver) lookup for a zone has been canceled. - -NSAS_NSADDR asking resolver to obtain A and AAAA records for %1 + +NSAS_NS_LOOKUP_FAIL failed to lookup any %1 for %2 -A debug message, the NSAS (nameserver address store - part of the resolver) is -making a callback into the resolver to retrieve the address records for the -specified nameserver. +A debug message issued when the NSAS (nameserver address store - part of +the resolver) has been unable to retrieve the specified resource record +for the specified nameserver. This is not necessarily a problem - the +nameserver may be unreachable, in which case the NSAS will try other +nameservers in the zone.
- -NSAS_NSLKUPFAIL failed to lookup any %1 for %2 + +NSAS_SEARCH_ZONE_NS searching NSAS for nameservers for zone %1 -A debug message, the NSAS (nameserver address store - part of the resolver) -has been unable to retrieve the specified resource record for the specified -nameserver. This is not necessarily a problem - the nameserver may be -unreachable, in which case the NSAS will try other nameservers in the zone. +A debug message output when a call is made to the NSAS (nameserver +address store - part of the resolver) to obtain the nameservers for +the specified zone. - -NSAS_NSLKUPSUCC found address %1 for %2 - -A debug message, the NSAS (nameserver address store - part of the resolver) -has retrieved the given address for the specified nameserver through an -external query. - - - - -NSAS_SETRTT reporting RTT for %1 as %2; new value is now %3 + +NSAS_UPDATE_RTT update RTT for %1: was %2 ms, is now %3 ms A NSAS (nameserver address store - part of the resolver) debug message -reporting the round-trip time (RTT) for a query made to the specified -nameserver. The RTT has been updated using the value given and the new RTT is -displayed. (The RTT is subject to a calculation that damps out sudden -changes. As a result, the new RTT is not necessarily equal to the RTT -reported.) +reporting the update of a round-trip time (RTT) for a query made to the +specified nameserver. The RTT has been updated using the value given +and the new RTT is displayed. (The RTT is subject to a calculation that +damps out sudden changes. As a result, the new RTT used by the NSAS in +future decisions of which nameserver to use is not necessarily equal to +the RTT reported.) + + + + +NSAS_WRONG_ANSWER queried for %1 RR of type/class %2/%3, received response %4/%5 + +An NSAS (nameserver address store - part of the resolver) made a query for +a resource record of a particular type and class, but instead received +an answer with a different type and class (also given in the message).
+ +This message indicates an internal error in the NSAS. Please raise a +bug report. @@ -1460,16 +2154,16 @@ type> tuple in the cache; instead, the deepest delegation found is indicated. - -RESLIB_FOLLOWCNAME following CNAME chain to <%1> + +RESLIB_FOLLOW_CNAME following CNAME chain to <%1> A debug message, a CNAME response was received and another query is being issued for the <name, class, type> tuple. - -RESLIB_LONGCHAIN CNAME received in response to query for <%1>: CNAME chain length exceeded + +RESLIB_LONG_CHAIN CNAME received in response to query for <%1>: CNAME chain length exceeded A debug message recording that a CNAME response has been received to an upstream query for the specified question (Previous debug messages will have indicated @@ -1479,26 +2173,26 @@ is where on CNAME points to another) and so an error is being returned. - -RESLIB_NONSRRSET no NS RRSet in referral response received to query for <%1> + +RESLIB_NO_NS_RRSET no NS RRSet in referral response received to query for <%1> A debug message, this indicates that a response was received for the specified -query and was categorised as a referral. However, the received message did +query and was categorized as a referral. However, the received message did not contain any NS RRsets. This may indicate a programming error in the response classification code. - -RESLIB_NSASLOOK looking up nameserver for zone %1 in the NSAS + +RESLIB_NSAS_LOOKUP looking up nameserver for zone %1 in the NSAS A debug message, the RunningQuery object is querying the NSAS for the nameservers for the specified zone. - -RESLIB_NXDOMRR NXDOMAIN/NXRRSET received in response to query for <%1> + +RESLIB_NXDOM_NXRR NXDOMAIN/NXRRSET received in response to query for <%1> A debug message recording that either a NXDOMAIN or an NXRRSET response has been received to an upstream query for the specified question. Previous debug @@ -1514,8 +2208,8 @@ are no retries left, an error will be reported. 
- -RESLIB_PROTOCOLRTRY protocol error in answer for %1: %2 (retries left: %3) + +RESLIB_PROTOCOL_RETRY protocol error in answer for %1: %2 (retries left: %3) A debug message indicating that a protocol error was received and that the resolver is repeating the query to the same nameserver. After this @@ -1523,14 +2217,35 @@ repeated query, there will be the indicated number of retries left. - -RESLIB_RCODERR RCODE indicates error in response to query for <%1> + +RESLIB_RCODE_ERR RCODE indicates error in response to query for <%1> A debug message, the response to the specified query indicated an error that is not covered by a specific code path. A SERVFAIL will be returned. + +RESLIB_RECQ_CACHE_FIND found <%1> in the cache (resolve() instance %2) + +This is a debug message and indicates that a RecursiveQuery object found +the specified <name, class, type> tuple in the cache. The instance number +at the end of the message indicates which of the two resolve() methods has +been called. + + + + +RESLIB_RECQ_CACHE_NO_FIND did not find <%1> in the cache, starting RunningQuery (resolve() instance %2) + +This is a debug message and indicates that the lookup in the cache made by the +RecursiveQuery::resolve() method did not find an answer, so a new RunningQuery +object has been created to resolve the question. The instance number at +the end of the message indicates which of the two resolve() methods has +been called. + + + RESLIB_REFERRAL referral received in response to query for <%1> @@ -1540,35 +2255,14 @@ have indicated the server to which the question was sent. - -RESLIB_REFERZONE referred to zone %1 + +RESLIB_REFER_ZONE referred to zone %1 A debug message indicating that the last referral message was to the specified zone. - -RESLIB_RESCAFND found <%1> in the cache (resolve() instance %2) - -This is a debug message and indicates that a RecursiveQuery object found the -the specified <name, class, type> tuple in the cache.
The instance number -at the end of the message indicates which of the two resolve() methods has -been called. - - - - -RESLIB_RESCANOTFND did not find <%1> in the cache, starting RunningQuery (resolve() instance %2) - -This is a debug message and indicates that the look in the cache made by the -RecursiveQuery::resolve() method did not find an answer, so a new RunningQuery -object has been created to resolve the question. The instance number at -the end of the message indicates which of the two resolve() methods has -been called. - - - RESLIB_RESOLVE asked to resolve <%1> (resolve() instance %2) @@ -1579,8 +2273,8 @@ message indicates which of the two resolve() methods has been called. - -RESLIB_RRSETFND found single RRset in the cache when querying for <%1> (resolve() instance %2) + +RESLIB_RRSET_FOUND found single RRset in the cache when querying for <%1> (resolve() instance %2) A debug message, indicating that when RecursiveQuery::resolve queried the cache, a single RRset was found which was put in the answer. The instance @@ -1596,16 +2290,16 @@ A debug message giving the round-trip time of the last query and response. - -RESLIB_RUNCAFND found <%1> in the cache + +RESLIB_RUNQ_CACHE_FIND found <%1> in the cache This is a debug message and indicates that a RunningQuery object found the specified <name, class, type> tuple in the cache. - -RESLIB_RUNCALOOK looking up up <%1> in the cache + +RESLIB_RUNQ_CACHE_LOOKUP looking up <%1> in the cache This is a debug message and indicates that a RunningQuery object has made a call to its doLookup() method to look up the specified <name, class, type> @@ -1613,16 +2307,16 @@ tuple, the first action of which will be to examine the cache. - -RESLIB_RUNQUFAIL failure callback - nameservers are unreachable + +RESLIB_RUNQ_FAIL failure callback - nameservers are unreachable A debug message indicating that a RunningQuery's failure callback has been called because all nameservers for the zone in question are unreachable. 
- -RESLIB_RUNQUSUCC success callback - sending query to %1 + +RESLIB_RUNQ_SUCCESS success callback - sending query to %1 A debug message indicating that a RunningQuery's success callback has been called because a nameserver has been found, and that a query is being sent @@ -1630,8 +2324,8 @@ to the specified nameserver. - -RESLIB_TESTSERV setting test server to %1(%2) + +RESLIB_TEST_SERVER setting test server to %1(%2) This is an internal debugging message and is only generated in unit tests. It indicates that all upstream queries from the resolver are being routed to @@ -1641,8 +2335,8 @@ operation, it is a warning message instead of a debug message. - -RESLIB_TESTUPSTR sending upstream query for <%1> to test server at %2 + +RESLIB_TEST_UPSTREAM sending upstream query for <%1> to test server at %2 This is a debug message and should only be seen in unit tests. A query for the specified <name, class, type> tuple is being sent to a test nameserver @@ -1658,8 +2352,8 @@ there are no retries left, an error will be reported. - -RESLIB_TIMEOUTRTRY query <%1> to %2 timed out, re-trying (retries left: %3) + +RESLIB_TIMEOUT_RETRY query <%1> to %2 timed out, re-trying (retries left: %3) A debug message indicating that the specified query has timed out and that the resolver is repeating the query to the same nameserver. After this @@ -1685,8 +2379,8 @@ tuple is being sent to a nameserver whose address is given in the message. - -RESOLVER_AXFRTCP AXFR request received over TCP + +RESOLVER_AXFR_TCP AXFR request received over TCP A debug message, the resolver received an AXFR request over TCP. The server cannot process it and will return an error message to the sender with the @@ -1694,8 +2388,8 @@ RCODE set to NOTIMP. - -RESOLVER_AXFRUDP AXFR request received over UDP + +RESOLVER_AXFR_UDP AXFR request received over UDP A debug message, the resolver received an AXFR request over UDP. 
The server cannot process it (and in any case, an AXFR request should be sent over TCP) @@ -1703,24 +2397,24 @@ and will return an error message to the sender with the RCODE set to FORMERR. - -RESOLVER_CLTMOSMALL client timeout of %1 is too small + +RESOLVER_CLIENT_TIME_SMALL client timeout of %1 is too small An error indicating that the configuration value specified for the client timeout is too small. - -RESOLVER_CONFIGCHAN configuration channel created + +RESOLVER_CONFIG_CHANNEL configuration channel created A debug message, output when the resolver has successfully established a connection to the configuration channel. - -RESOLVER_CONFIGERR error in configuration: %1 + +RESOLVER_CONFIG_ERROR error in configuration: %1 An error was detected in a configuration update received by the resolver. This may be in the format of the configuration message (in which case this is a @@ -1730,16 +2424,16 @@ will give more details. - -RESOLVER_CONFIGLOAD configuration loaded + +RESOLVER_CONFIG_LOADED configuration loaded A debug message, output when the resolver configuration has been successfully loaded. - -RESOLVER_CONFIGUPD configuration updated: %1 + +RESOLVER_CONFIG_UPDATED configuration updated: %1 A debug message, the configuration has been updated with the specified information. @@ -1753,16 +2447,16 @@ A debug message, output when the Resolver() object has been created. - -RESOLVER_DNSMSGRCVD DNS message received: %1 + +RESOLVER_DNS_MESSAGE_RECEIVED DNS message received: %1 A debug message, this always precedes some other logging message and is the formatted contents of the DNS packet that the other message refers to. - -RESOLVER_DNSMSGSENT DNS message of %1 bytes sent: %2 + +RESOLVER_DNS_MESSAGE_SENT DNS message of %1 bytes sent: %2 A debug message, this contains details of the response sent back to the querying system. @@ -1777,24 +2471,24 @@ resolver. All it can do is to shut down. 
- -RESOLVER_FWDADDR setting forward address %1(%2) + +RESOLVER_FORWARD_ADDRESS setting forward address %1(%2) This message may appear multiple times during startup, and it lists the forward addresses used by the resolver when running in forwarding mode. - -RESOLVER_FWDQUERY processing forward query + +RESOLVER_FORWARD_QUERY processing forward query The received query has passed all checks and is being forwarded to upstream servers. - -RESOLVER_HDRERR message received, exception when processing header: %1 + +RESOLVER_HEADER_ERROR message received, exception when processing header: %1 A debug message noting that an exception occurred during the processing of a received packet. The packet has been dropped. @@ -1809,40 +2503,34 @@ and will return an error message to the sender with the RCODE set to NOTIMP. - -RESOLVER_LKTMOSMALL lookup timeout of %1 is too small + +RESOLVER_LOOKUP_TIME_SMALL lookup timeout of %1 is too small An error indicating that the configuration value specified for the lookup timeout is too small. - -RESOLVER_NFYNOTAUTH NOTIFY arrived but server is not authoritative + +RESOLVER_MESSAGE_ERROR error parsing received message: %1 - returning %2 -The resolver received a NOTIFY message. As the server is not authoritative it -cannot process it, so it returns an error message to the sender with the RCODE -set to NOTAUTH. +A debug message noting that the resolver received a message and the +parsing of the body of the message failed due to some error (although +the parsing of the header succeeded). The message parameters give a +textual description of the problem and the RCODE returned. - -RESOLVER_NORMQUERY processing normal query + +RESOLVER_NEGATIVE_RETRIES negative number of retries (%1) specified in the configuration -The received query has passed all checks and is being processed by the resolver. +An error message indicating that the resolver configuration has specified a +negative retry count. Only zero or positive values are valid. 
- -RESOLVER_NOROOTADDR no root addresses available - -A warning message during startup, indicates that no root addresses have been -set. This may be because the resolver will get them from a priming query. - - - - -RESOLVER_NOTIN non-IN class request received, returning REFUSED message + +RESOLVER_NON_IN_PACKET non-IN class request received, returning REFUSED message A debug message, the resolver has received a DNS packet that was not IN class. The resolver cannot handle such packets, so is returning a REFUSED response to @@ -1850,8 +2538,24 @@ the sender. - -RESOLVER_NOTONEQUES query contained %1 questions, exactly one question was expected + +RESOLVER_NORMAL_QUERY processing normal query + +The received query has passed all checks and is being processed by the resolver. + + + + +RESOLVER_NOTIFY_RECEIVED NOTIFY arrived but server is not authoritative + +The resolver received a NOTIFY message. As the server is not authoritative it +cannot process it, so it returns an error message to the sender with the RCODE +set to NOTAUTH. + + + + +RESOLVER_NOT_ONE_QUESTION query contained %1 questions, exactly one question was expected A debug message, the resolver received a query that contained the number of entries in the question section detailed in the message. This is a malformed @@ -1860,17 +2564,16 @@ return a message to the sender with the RCODE set to FORMERR. - -RESOLVER_OPCODEUNS opcode %1 not supported by the resolver + +RESOLVER_NO_ROOT_ADDRESS no root addresses available -A debug message, the resolver received a message with an unsupported opcode -(it can only process QUERY opcodes). It will return a message to the sender -with the RCODE set to NOTIMP. +A warning message during startup, indicates that no root addresses have been +set. This may be because the resolver will get them from a priming query. 
- -RESOLVER_PARSEERR error parsing received message: %1 - returning %2 + +RESOLVER_PARSE_ERROR error parsing received message: %1 - returning %2 A debug message noting that the resolver received a message and the parsing of the body of the message failed due to some non-protocol related reason @@ -1879,16 +2582,16 @@ a textual description of the problem and the RCODE returned. - -RESOLVER_PRINTMSG print message command, aeguments are: %1 + +RESOLVER_PRINT_COMMAND print message command, arguments are: %1 This message is logged when a "print_message" command is received over the command channel. - -RESOLVER_PROTERR protocol error parsing received message: %1 - returning %2 + +RESOLVER_PROTOCOL_ERROR protocol error parsing received message: %1 - returning %2 A debug message noting that the resolver received a message and the parsing of the body of the message failed due to some protocol error (although the @@ -1897,28 +2600,70 @@ description of the problem and the RCODE returned. - -RESOLVER_QUSETUP query setup + +RESOLVER_QUERY_ACCEPTED query accepted: '%1/%2/%3' from %4 + +A debug message that indicates an incoming query is accepted in terms of +the query ACL. The log message shows the query in the form of +<query name>/<query type>/<query class>, and the client that sends the +query in the form of <Source IP address>#<source port>. + + + + +RESOLVER_QUERY_DROPPED query dropped: '%1/%2/%3' from %4 + +An informational message that indicates an incoming query is dropped +in terms of the query ACL. Unlike the RESOLVER_QUERY_REJECTED +case, the server does not return any response. The log message +shows the query in the form of <query name>/<query type>/<query +class>, and the client that sends the query in the form of <Source +IP address>#<source port>. + + + + +RESOLVER_QUERY_REJECTED query rejected: '%1/%2/%3' from %4 + +An informational message that indicates an incoming query is rejected +in terms of the query ACL. 
This results in a response with an RCODE of +REFUSED. The log message shows the query in the form of <query +name>/<query type>/<query class>, and the client that sends the +query in the form of <Source IP address>#<source port>. + + + + +RESOLVER_QUERY_SETUP query setup A debug message noting that the resolver is creating a RecursiveQuery object. - -RESOLVER_QUSHUT query shutdown + +RESOLVER_QUERY_SHUTDOWN query shutdown A debug message noting that the resolver is destroying a RecursiveQuery object. - -RESOLVER_QUTMOSMALL query timeout of %1 is too small + +RESOLVER_QUERY_TIME_SMALL query timeout of %1 is too small An error indicating that the configuration value specified for the query timeout is too small. + +RESOLVER_RECEIVED_MESSAGE resolver has received a DNS message + +A debug message indicating that the resolver has received a message. Depending +on the debug settings, subsequent log output will indicate the nature of the +message. + + + RESOLVER_RECURSIVE running in recursive mode @@ -1927,43 +2672,18 @@ resolver is running in recursive mode. - -RESOLVER_RECVMSG resolver has received a DNS message - -A debug message indicating that the resolver has received a message. Depending -on the debug settings, subsequent log output will indicate the nature of the -message. - - - - -RESOLVER_RETRYNEG negative number of retries (%1) specified in the configuration - -An error message indicating that the resolver configuration has specified a -negative retry count. Only zero or positive values are valid. - - - - -RESOLVER_ROOTADDR setting root address %1(%2) - -This message may appear multiple times during startup; it lists the root -addresses used by the resolver. - - - - -RESOLVER_SERVICE service object created + +RESOLVER_SERVICE_CREATED service object created A debug message, output when the main service object (which handles the received queries) is created. 
- -RESOLVER_SETPARAM query timeout: %1, client timeout: %2, lookup timeout: %3, retry count: %4 + +RESOLVER_SET_PARAMS query timeout: %1, client timeout: %2, lookup timeout: %3, retry count: %4 -A debug message, lists the parameters associated with the message. These are: +A debug message, lists the parameters being set for the resolver. These are: query timeout: the timeout (in ms) used for queries originated by the resolver to upstream servers. Client timeout: the interval to resolve a query for a client: after this time, the resolver sends back a SERVFAIL to the client @@ -1972,17 +2692,33 @@ resolver gives up trying to resolve a query. Retry count: the number of times the resolver will retry a query to an upstream server if it gets a timeout. The client and lookup timeouts require a bit more explanation. The -resolution of the clent query might require a large number of queries to +resolution of the client query might require a large number of queries to upstream nameservers. Even if none of these queries times out, the total time taken to perform all the queries may exceed the client timeout. When this happens, a SERVFAIL is returned to the client, but the resolver continues with the resolution process. Data received is added to the cache. However, -there comes a time - the lookup timeout - when even the resolve gives up. +there comes a time - the lookup timeout - when even the resolver gives up. At this point it will wait for pending upstream queries to complete or timeout and drop the query. + +RESOLVER_SET_QUERY_ACL query ACL is configured + +A debug message that appears when a new query ACL is configured for the +resolver. + + + + +RESOLVER_SET_ROOT_ADDRESS setting root address %1(%2) + +This message may appear multiple times during startup; it lists the root +addresses used by the resolver. + + + RESOLVER_SHUTDOWN resolver shutdown complete @@ -2005,12 +2741,385 @@ An informational message, this is output when the resolver starts up. 
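The client/lookup timeout interaction described above can be sketched in a few lines. This is an illustrative Python sketch only, not BIND 10 code; the millisecond values are invented, not the actual defaults:

```python
# Illustrative sketch (not BIND 10 code) of how the client and lookup
# timeouts interact for a single client query. The millisecond values
# are invented, not the actual defaults.

def resolver_state(elapsed_ms, client_timeout_ms=4000, lookup_timeout_ms=30000):
    """What the resolver has done for the client after elapsed_ms."""
    if elapsed_ms < client_timeout_ms:
        return "waiting"  # the client has received nothing yet
    if elapsed_ms < lookup_timeout_ms:
        # The client was already sent SERVFAIL, but resolution continues
        # and answers received so far still go into the cache.
        return "SERVFAIL sent, still resolving"
    return "lookup abandoned"  # the resolver gives up entirely

print(resolver_state(1000))
print(resolver_state(10000))
print(resolver_state(60000))
```

The point of the hierarchy is that the client timeout bounds how long a client waits for any answer, while the lookup timeout bounds how long the resolver keeps working to fill its cache.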
- -RESOLVER_UNEXRESP received unexpected response, ignoring + +RESOLVER_UNEXPECTED_RESPONSE received unexpected response, ignoring A debug message noting that the server has received a response instead of a query and is ignoring it. + + + +RESOLVER_UNSUPPORTED_OPCODE opcode %1 not supported by the resolver + +A debug message, the resolver received a message with an unsupported opcode +(it can only process QUERY opcodes). It will return a message to the sender +with the RCODE set to NOTIMP. + + + + +XFRIN_AXFR_DATABASE_FAILURE AXFR transfer of zone %1 failed: %2 + +The AXFR transfer for the given zone has failed due to a database problem. +The error is shown in the log message. + + + + +XFRIN_AXFR_INTERNAL_FAILURE AXFR transfer of zone %1 failed: %2 + +The AXFR transfer for the given zone has failed due to an internal +problem in the bind10 python wrapper library. +The error is shown in the log message. + + + + +XFRIN_AXFR_TRANSFER_FAILURE AXFR transfer of zone %1 failed: %2 + +The AXFR transfer for the given zone has failed due to a protocol error. +The error is shown in the log message. + + + + +XFRIN_AXFR_TRANSFER_STARTED AXFR transfer of zone %1 started + +A connection to the master server has been made, the serial value in +the SOA record has been checked, and a zone transfer has been started. + + + + +XFRIN_AXFR_TRANSFER_SUCCESS AXFR transfer of zone %1 succeeded + +The AXFR transfer of the given zone was successfully completed. + + + + +XFRIN_BAD_MASTER_ADDR_FORMAT bad format for master address: %1 + +The given master address is not a valid IP address. + + + + +XFRIN_BAD_MASTER_PORT_FORMAT bad format for master port: %1 + +The master port as read from the configuration is not a valid port number. + + + + +XFRIN_BAD_TSIG_KEY_STRING bad TSIG key string: %1 + +The TSIG key string as read from the configuration does not represent +a valid TSIG key. 
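XFRIN_AXFR_TRANSFER_STARTED above notes that the serial value in the SOA record is checked before a transfer begins. The standard way to compare zone serials is RFC 1982 serial-number arithmetic; the sketch below illustrates that comparison and is not code from b10-xfrin:

```python
# Sketch of RFC 1982 serial-number arithmetic, the standard comparison
# behind an SOA serial check. Illustration only; not the xfrin code.
SERIAL_BITS = 32
HALF = 2 ** (SERIAL_BITS - 1)  # 2^31

def serial_gt(s1, s2):
    """True if serial s1 is 'greater than' s2 under RFC 1982 arithmetic."""
    return (s1 > s2 and s1 - s2 < HALF) or (s1 < s2 and s2 - s1 > HALF)

def transfer_needed(master_serial, local_serial):
    # A zone transfer is needed when the master's SOA serial is newer.
    return serial_gt(master_serial, local_serial)

print(transfer_needed(2011081101, 2011081100))  # master is one ahead
print(transfer_needed(5, 4294967290))           # serial wrapped past 2^32 - 1
```

The wrap-around case is why a plain integer comparison is not sufficient: serial 5 is "newer" than 4294967290 once the 32-bit counter wraps.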
+ + + + +XFRIN_BAD_ZONE_CLASS Invalid zone class: %1 + +The zone class as read from the configuration is not a valid DNS class. + + + + +XFRIN_CC_SESSION_ERROR error reading from cc channel: %1 + +There was a problem reading from the command and control channel. The +most likely cause is that the msgq daemon is not running. + + + + +XFRIN_COMMAND_ERROR error while executing command '%1': %2 + +There was an error while the given command was being processed. The +error is given in the log message. + + + + +XFRIN_CONNECT_MASTER error connecting to master at %1: %2 + +There was an error opening a connection to the master. The error is +shown in the log message. + + + + +XFRIN_IMPORT_DNS error importing python DNS module: %1 + +There was an error importing the python DNS module pydnspp. The most +likely cause is a PYTHONPATH problem. + + + + +XFRIN_MSGQ_SEND_ERROR error while contacting %1 and %2 + +There was a problem sending a message to the xfrout module or the +zone manager. This most likely means that the msgq daemon has quit or +was killed. + + + + +XFRIN_MSGQ_SEND_ERROR_ZONE_MANAGER error while contacting %1 + +There was a problem sending a message to the zone manager. This most +likely means that the msgq daemon has quit or was killed. + + + + +XFRIN_RETRANSFER_UNKNOWN_ZONE got notification to retransfer unknown zone %1 + +There was an internal command to retransfer the given zone, but the +zone is not known to the system. This may indicate that the configuration +for xfrin is incomplete, or there was a typographical error in the +zone name in the configuration. + + + + +XFRIN_STARTING starting xfrin with command line '%1' + +An informational message, this is output when the xfrin daemon starts up. + + + + +XFRIN_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down + +There was a keyboard interrupt signal to stop the xfrin daemon. The +daemon will now shut down. 
+ + + + +XFRIN_UNKNOWN_ERROR unknown error: %1 + +An uncaught exception was raised while running the xfrin daemon. The +exception message is printed in the log message. + + + + +XFROUT_AXFR_TRANSFER_DONE transfer of %1/%2 complete + +The transfer of the given zone has been completed successfully, or was +aborted due to a shutdown event. + + + + +XFROUT_AXFR_TRANSFER_ERROR error transferring zone %1/%2: %3 + +An uncaught exception was encountered while sending the response to +an AXFR query. The error message of the exception is included in the +log message, but this error most likely points to incomplete exception +handling in the code. + + + + +XFROUT_AXFR_TRANSFER_FAILED transfer of %1/%2 failed, rcode: %3 + +A transfer out for the given zone failed. An error response is sent +to the client. The given rcode is the rcode that is set in the error +response. This is either NOTAUTH (we are not authoritative for the +zone), SERVFAIL (our internal database is missing the SOA record for +the zone), or REFUSED (the limit of simultaneous outgoing AXFR +transfers, as specified by the configuration value +Xfrout/max_transfers_out, has been reached). + + + + +XFROUT_AXFR_TRANSFER_STARTED transfer of zone %1/%2 has started + +A transfer out of the given zone has started. + + + + +XFROUT_BAD_TSIG_KEY_STRING bad TSIG key string: %1 + +The TSIG key string as read from the configuration does not represent +a valid TSIG key. + + + + +XFROUT_CC_SESSION_ERROR error reading from cc channel: %1 + +There was a problem reading from the command and control channel. The +most likely cause is that the msgq daemon is not running. + + + + +XFROUT_CC_SESSION_TIMEOUT_ERROR timeout waiting for cc response + +There was a problem reading a response from another module over the +command and control channel. The most likely cause is that the +configuration manager b10-cfgmgr is not running. 
+ + + + +XFROUT_FETCH_REQUEST_ERROR socket error while fetching a request from the auth daemon + +There was a socket error while contacting the b10-auth daemon to +fetch a transfer request. The auth daemon may have shut down. + + + + +XFROUT_HANDLE_QUERY_ERROR error while handling query: %1 + +There was a general error handling an xfrout query. The error is shown +in the message. In principle this error should not appear, and points +to a failure to catch exceptions in the right place. However, to +ensure the daemon keeps running, this error is caught and reported. + + + + +XFROUT_IMPORT error importing python module: %1 + +There was an error importing a python module. One of the modules needed +by xfrout could not be found. This suggests that either some libraries +are missing on the system, or the PYTHONPATH variable is not correct. +The specific place where this library needs to be depends on your +system and your specific installation. + + + + +XFROUT_NEW_CONFIG Update xfrout configuration + +New configuration settings have been sent from the configuration +manager. The xfrout daemon will now apply them. + + + + +XFROUT_NEW_CONFIG_DONE Update xfrout configuration done + +The xfrout daemon is now done reading the new configuration settings +received from the configuration manager. + + + + +XFROUT_NOTIFY_COMMAND received command to send notifies for %1/%2 + +The xfrout daemon received a command on the command channel that +NOTIFY packets should be sent for the given zone. + + + + +XFROUT_PARSE_QUERY_ERROR error parsing query: %1 + +There was a parse error while reading an incoming query. The parse +error is shown in the log message. A remote client sent a packet we +do not understand or support. The xfrout request will be ignored. +In general, this should only occur for unexpected problems like +memory allocation failures, as the query should already have been +parsed by the b10-auth daemon, before it was passed here. 
+ + + + +XFROUT_PROCESS_REQUEST_ERROR error processing transfer request: %2 + +There was an error processing a transfer request. The error is included +in the log message, but at this point no specific information other +than that could be given. This points to incomplete exception handling +in the code. + + + + +XFROUT_RECEIVED_SHUTDOWN_COMMAND shutdown command received + +The xfrout daemon received a shutdown command from the command channel +and will now shut down. + + + + +XFROUT_RECEIVE_FILE_DESCRIPTOR_ERROR error receiving the file descriptor for an XFR connection + +There was an error receiving the file descriptor for the transfer +request. Normally, the request is received by b10-auth, and passed on +to the xfrout daemon, so it can answer directly. However, there was a +problem receiving this file descriptor. The request will be ignored. + + + + +XFROUT_REMOVE_OLD_UNIX_SOCKET_FILE_ERROR error removing unix socket file %1: %2 + +The unix socket file xfrout needs for contact with the auth daemon +already exists, and needs to be removed first, but there is a problem +removing it. It is likely that we do not have permission to remove +this file. The specific error is shown in the log message. The xfrout +daemon will shut down. + + + + +XFROUT_REMOVE_UNIX_SOCKET_FILE_ERROR error clearing unix socket file %1: %2 + +When shutting down, the xfrout daemon tried to clear the unix socket +file used for communication with the auth daemon. It failed to remove +the file. The reason for the failure is given in the error message. + + + + +XFROUT_SOCKET_SELECT_ERROR error while calling select() on request socket: %1 + +There was an error while calling select() on the socket that informs +the xfrout daemon that a new xfrout request has arrived. This should +be a result of a rare local error such as memory allocation failure and +shouldn't happen under normal conditions. The error is included in the +log message. 
+ + + + +XFROUT_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down + +There was a keyboard interrupt signal to stop the xfrout daemon. The +daemon will now shut down. + + + + +XFROUT_STOPPING the xfrout daemon is shutting down + +The current transfer is aborted, as the xfrout daemon is shutting down. + + + + +XFROUT_UNIX_SOCKET_FILE_IN_USE another xfrout process seems to be using the unix socket file %1 + +While starting up, the xfrout daemon tried to clear the unix domain +socket needed for contacting the b10-auth daemon to pass requests +on, but the file is in use. The most likely cause is that another +xfrout daemon process is still running. This xfrout daemon (the one +printing this message) will not start. + diff --git a/src/bin/auth/auth.spec.pre.in b/src/bin/auth/auth.spec.pre.in index d88ffb5e3e..2ce044e440 100644 --- a/src/bin/auth/auth.spec.pre.in +++ b/src/bin/auth/auth.spec.pre.in @@ -122,6 +122,24 @@ } ] } + ], + "statistics": [ + { + "item_name": "queries.tcp", + "item_type": "integer", + "item_optional": false, + "item_default": 0, + "item_title": "Queries TCP ", + "item_description": "A number of total query counts which all auth servers receive over TCP since they started initially" + }, + { + "item_name": "queries.udp", + "item_type": "integer", + "item_optional": false, + "item_default": 0, + "item_title": "Queries UDP", + "item_description": "A number of total query counts which all auth servers receive over UDP since they started initially" + } ] } } diff --git a/src/bin/auth/b10-auth.8 b/src/bin/auth/b10-auth.8 index 0356683b11..aedadeefb0 100644 --- a/src/bin/auth/b10-auth.8 +++ b/src/bin/auth/b10-auth.8 @@ -2,12 +2,12 @@ .\" Title: b10-auth .\" Author: [FIXME: author] [see http://docbook.sf.net/el/author] .\" Generator: DocBook XSL Stylesheets v1.75.2 -.\" Date: March 8, 2011 +.\" Date: August 11, 2011 .\" Manual: BIND10 .\" Source: BIND10 .\" Language: English .\" -.TH "B10\-AUTH" "8" "March 8, 2011" "BIND10" "BIND10" +.TH "B10\-AUTH" 
"8" "August 11, 2011" "BIND10" "BIND10" .\" ----------------------------------------------------------------- .\" * set default formatting .\" ----------------------------------------------------------------- @@ -70,18 +70,6 @@ defines the path to the SQLite3 zone file when using the sqlite datasource\&. Th /usr/local/var/bind10\-devel/zone\&.sqlite3\&. .PP -\fIlisten_on\fR -is a list of addresses and ports for -\fBb10\-auth\fR -to listen on\&. The list items are the -\fIaddress\fR -string and -\fIport\fR -number\&. By default, -\fBb10\-auth\fR -listens on port 53 on the IPv6 (::) and IPv4 (0\&.0\&.0\&.0) wildcard addresses\&. -.PP - \fIdatasources\fR configures data sources\&. The list items include: \fItype\fR @@ -114,6 +102,18 @@ In this development version, currently this is only used for the memory data sou .RE .PP +\fIlisten_on\fR +is a list of addresses and ports for +\fBb10\-auth\fR +to listen on\&. The list items are the +\fIaddress\fR +string and +\fIport\fR +number\&. By default, +\fBb10\-auth\fR +listens on port 53 on the IPv6 (::) and IPv4 (0\&.0\&.0\&.0) wildcard addresses\&. +.PP + \fIstatistics\-interval\fR is the timer interval in seconds for \fBb10\-auth\fR @@ -164,6 +164,25 @@ immediately\&. \fBshutdown\fR exits \fBb10\-auth\fR\&. (Note that the BIND 10 boss process will restart this service\&.) +.SH "STATISTICS DATA" +.PP +The statistics data collected by the +\fBb10\-stats\fR +daemon include: +.PP +auth\&.queries\&.tcp +.RS 4 +Total count of queries received by the +\fBb10\-auth\fR +server over TCP since startup\&. +.RE +.PP +auth\&.queries\&.udp +.RS 4 +Total count of queries received by the +\fBb10\-auth\fR +server over UDP since startup\&. +.RE .SH "FILES" .PP diff --git a/src/bin/auth/b10-auth.xml b/src/bin/auth/b10-auth.xml index 2b533947d1..636f437993 100644 --- a/src/bin/auth/b10-auth.xml +++ b/src/bin/auth/b10-auth.xml @@ -20,7 +20,7 @@ - March 8, 2011 + August 11, 2011 @@ -131,15 +131,6 @@ /usr/local/var/bind10-devel/zone.sqlite3. 
- - listen_on is a list of addresses and ports for - b10-auth to listen on. - The list items are the address string - and port number. - By default, b10-auth listens on port 53 - on the IPv6 (::) and IPv4 (0.0.0.0) wildcard addresses. - - datasources configures data sources. The list items include: @@ -164,6 +155,15 @@ + + listen_on is a list of addresses and ports for + b10-auth to listen on. + The list items are the address string + and port number. + By default, b10-auth listens on port 53 + on the IPv6 (::) and IPv4 (0.0.0.0) wildcard addresses. + + statistics-interval is the timer interval in seconds for b10-auth to share its @@ -208,6 +208,34 @@ + + STATISTICS DATA + + + The statistics data collected by the b10-stats + daemon include: + + + + + + auth.queries.tcp + Total count of queries received by the + b10-auth server over TCP since startup. + + + + + auth.queries.udp + Total count of queries received by the + b10-auth server over UDP since startup. + + + + + + + FILES diff --git a/src/bin/auth/query.cc b/src/bin/auth/query.cc index 05bcd894c8..3fe03c802a 100644 --- a/src/bin/auth/query.cc +++ b/src/bin/auth/query.cc @@ -31,7 +31,7 @@ namespace isc { namespace auth { void -Query::getAdditional(const ZoneFinder& zone, const RRset& rrset) const { +Query::getAdditional(ZoneFinder& zone, const RRset& rrset) const { RdataIteratorPtr rdata_iterator(rrset.getRdataIterator()); for (; !rdata_iterator->isLast(); rdata_iterator->next()) { const Rdata& rdata(rdata_iterator->getCurrent()); @@ -47,7 +47,7 @@ Query::getAdditional(const ZoneFinder& zone, const RRset& rrset) const { } void -Query::findAddrs(const ZoneFinder& zone, const Name& qname, +Query::findAddrs(ZoneFinder& zone, const Name& qname, const ZoneFinder::FindOptions options) const { // Out of zone name @@ -86,7 +86,7 @@ Query::findAddrs(const ZoneFinder& zone, const Name& qname, } void -Query::putSOA(const ZoneFinder& zone) const { +Query::putSOA(ZoneFinder& zone) const { ZoneFinder::FindResult 
soa_result(zone.find(zone.getOrigin(), RRType::SOA())); if (soa_result.code != ZoneFinder::SUCCESS) { @@ -104,7 +104,7 @@ Query::putSOA(const ZoneFinder& zone) const { } void -Query::getAuthAdditional(const ZoneFinder& zone) const { +Query::getAuthAdditional(ZoneFinder& zone) const { // Fill in authority and additional sections. ZoneFinder::FindResult ns_result = zone.find(zone.getOrigin(), RRType::NS()); diff --git a/src/bin/auth/query.h b/src/bin/auth/query.h index fa023fe6f6..13523e8b58 100644 --- a/src/bin/auth/query.h +++ b/src/bin/auth/query.h @@ -69,7 +69,7 @@ private: /// Adds a SOA of the zone into the authority section of response_. /// Can throw NoSOA. /// - void putSOA(const isc::datasrc::ZoneFinder& zone) const; + void putSOA(isc::datasrc::ZoneFinder& zone) const; /// \brief Look up additional data (i.e., address records for the names /// included in NS or MX records). @@ -85,7 +85,7 @@ private: /// query is to be found. /// \param rrset The RRset (i.e., NS or MX rrset) which requires additional /// processing. - void getAdditional(const isc::datasrc::ZoneFinder& zone, + void getAdditional(isc::datasrc::ZoneFinder& zone, const isc::dns::RRset& rrset) const; /// \brief Find address records for a specified name. @@ -104,7 +104,7 @@ private: /// be found. /// \param qname The name in rrset RDATA. /// \param options The search options. - void findAddrs(const isc::datasrc::ZoneFinder& zone, + void findAddrs(isc::datasrc::ZoneFinder& zone, const isc::dns::Name& qname, const isc::datasrc::ZoneFinder::FindOptions options = isc::datasrc::ZoneFinder::FIND_DEFAULT) const; @@ -127,7 +127,7 @@ private: /// /// \param zone The \c ZoneFinder through which the NS and additional data /// for the query are to be found. - void getAuthAdditional(const isc::datasrc::ZoneFinder& zone) const; + void getAuthAdditional(isc::datasrc::ZoneFinder& zone) const; public: /// Constructor from query parameters. 
diff --git a/src/bin/auth/tests/query_unittest.cc b/src/bin/auth/tests/query_unittest.cc index 6a75856eee..68f0a1d57a 100644 --- a/src/bin/auth/tests/query_unittest.cc +++ b/src/bin/auth/tests/query_unittest.cc @@ -122,12 +122,12 @@ public: masterLoad(zone_stream, origin_, rrclass_, boost::bind(&MockZoneFinder::loadRRset, this, _1)); } - virtual const isc::dns::Name& getOrigin() const { return (origin_); } - virtual const isc::dns::RRClass& getClass() const { return (rrclass_); } + virtual isc::dns::Name getOrigin() const { return (origin_); } + virtual isc::dns::RRClass getClass() const { return (rrclass_); } virtual FindResult find(const isc::dns::Name& name, const isc::dns::RRType& type, RRsetList* target = NULL, - const FindOptions options = FIND_DEFAULT) const; + const FindOptions options = FIND_DEFAULT); // If false is passed, it makes the zone broken as if it didn't have the // SOA. @@ -165,7 +165,7 @@ private: ZoneFinder::FindResult MockZoneFinder::find(const Name& name, const RRType& type, - RRsetList* target, const FindOptions options) const + RRsetList* target, const FindOptions options) { // Emulating a broken zone: mandatory apex RRs are missing if specifically // configured so (which are rare cases). diff --git a/src/bin/bind10/bind10.xml b/src/bin/bind10/bind10.xml index 1128264ece..b101ba8227 100644 --- a/src/bin/bind10/bind10.xml +++ b/src/bin/bind10/bind10.xml @@ -2,7 +2,7 @@ "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd" []> + + + STATISTICS DATA + + + The statistics data collected by the b10-stats + daemon include: + + + + + + bind10.boot_time + + The date and time that the bind10 + process started. + This is represented in ISO 8601 format. + + + + + + + + - Enabled verbose mode. This enables diagnostic messages to - STDERR. + Enable verbose mode. + This sets logging to the maximum debugging level. 
@@ -146,6 +149,22 @@ once that is merged you can for instance do 'config add Resolver/forward_address + + + + + + + query_acl is a list of query access control + rules. The list items are the action string + and the from or key strings. + The possible actions are ACCEPT, REJECT and DROP. + The from is a remote (source) IPv4 or IPv6 + address or special keyword. + The key is a TSIG key name. + The default configuration accepts queries from 127.0.0.1 and ::1. + + retries is the number of times to retry (resend query) after a query timeout @@ -234,7 +253,8 @@ once that is merged you can for instance do 'config add Resolver/forward_address The b10-resolver daemon was first coded in September 2010. The initial implementation only provided forwarding. Iteration was introduced in January 2011. - + Caching was implemented in February 2011. + Access control was introduced in June 2011. diff --git a/src/bin/stats/b10-stats.xml b/src/bin/stats/b10-stats.xml index f0c472dd29..1164711a8e 100644 --- a/src/bin/stats/b10-stats.xml +++ b/src/bin/stats/b10-stats.xml @@ -20,7 +20,7 @@ - Oct 15, 2010 + August 11, 2011 @@ -67,6 +67,7 @@ it. b10-stats invokes "sendstats" command for bind10 after its initial starting because it's sure to collect statistics data from bind10. + @@ -86,6 +87,123 @@ + + CONFIGURATION AND COMMANDS + + + The b10-stats command does not have any + configurable settings. + + + + + The configuration commands are: + + + + + remove removes the named statistics item and its data. + + + + + reset will reset all statistics data to + default values except for constant names. + This may re-add previously removed statistics names. + + + + set + + + + + show will send the statistics data + in JSON format. + By default, it outputs all the statistics data it has collected. + An optional item name may be specified to receive individual output. + + + + + + shutdown will shut down the + b10-stats process. + (Note that the bind10 parent may restart it.)
+ + + + status simply indicates that the daemon is + running. + + + + + + STATISTICS DATA + + + The b10-stats daemon contains these statistics: + + + + + + report_time + + The latest report date and time in + ISO 8601 format. + + + + stats.boot_time + The date and time when this daemon was + started in ISO 8601 format. + This is a constant which can't be reset except by restarting + b10-stats. + + + + + stats.last_update_time + The date and time (in ISO 8601 format) + when this daemon last received data from another component. + + + + + stats.lname + This is the name used for the + b10-msgq command-control channel. + (This is a constant which can't be reset except by restarting + b10-stats.) + + + + + stats.start_time + This is the date and time (in ISO 8601 format) + when this daemon started collecting data. + + + + + stats.timestamp + The current date and time represented in + seconds since UNIX epoch (1970-01-01T00:00:00Z) with + precision (delimited with a period) up to + one hundred-thousandth of a second. + + + + + + See the other manual pages for explanations of the statistics + that are tracked by b10-stats. + + + + FILES /usr/local/share/bind10-devel/stats.spec @@ -126,7 +244,7 @@ HISTORY The b10-stats daemon was initially designed - and implemented by Naoki Kambe of JPRS in Oct 2010. + and implemented by Naoki Kambe of JPRS in October 2010. diff --git a/src/lib/cache/cache_messages.mes b/src/lib/cache/cache_messages.mes index 2a68cc23bf..7f593ec6e6 100644 --- a/src/lib/cache/cache_messages.mes +++ b/src/lib/cache/cache_messages.mes @@ -124,7 +124,7 @@ the message will not be cached. Debug message. The requested data was found in the RRset cache. However, it is expired, so the cache removed it and is going to pretend nothing was found. -% CACHE_RRSET_INIT initializing RRset cache for %2 RRsets of class %1 +% CACHE_RRSET_INIT initializing RRset cache for %1 RRsets of class %2 Debug message.
The RRset cache to hold at most this many RRsets for the given class is being created. diff --git a/src/lib/cc/session.cc b/src/lib/cc/session.cc index 97d5cf14d0..e0e24cf922 100644 --- a/src/lib/cc/session.cc +++ b/src/lib/cc/session.cc @@ -119,7 +119,7 @@ private: void SessionImpl::establish(const char& socket_file) { try { - LOG_DEBUG(logger, DBG_TRACE_BASIC, CC_ESTABLISH).arg(socket_file); + LOG_DEBUG(logger, DBG_TRACE_BASIC, CC_ESTABLISH).arg(&socket_file); socket_.connect(asio::local::stream_protocol::endpoint(&socket_file), error_); LOG_DEBUG(logger, DBG_TRACE_BASIC, CC_ESTABLISHED); diff --git a/src/lib/config/module_spec.cc b/src/lib/config/module_spec.cc index 306c7954f4..bebe695023 100644 --- a/src/lib/config/module_spec.cc +++ b/src/lib/config/module_spec.cc @@ -1,4 +1,4 @@ -// Copyright (C) 2010 Internet Systems Consortium. +// Copyright (C) 2010, 2011 Internet Systems Consortium. // // Permission to use, copy, modify, and distribute this software for any // purpose with or without fee is hereby granted, provided that the above @@ -87,6 +87,61 @@ check_config_item_list(ConstElementPtr spec) { } } +// checks whether the given element is a valid statistics specification +// returns false if the specification is bad +bool +check_format(ConstElementPtr value, ConstElementPtr format_name) { + typedef std::map<std::string, std::string> format_types; + format_types time_formats; + // TODO: add other format types if necessary + time_formats.insert( + format_types::value_type("date-time", "%Y-%m-%dT%H:%M:%SZ") ); + time_formats.insert( + format_types::value_type("date", "%Y-%m-%d") ); + time_formats.insert( + format_types::value_type("time", "%H:%M:%S") ); + BOOST_FOREACH (const format_types::value_type& f, time_formats) { + if (format_name->stringValue() == f.first) { + struct tm tm; + std::vector<char> buf(32); + memset(&tm, 0, sizeof(tm)); + // reverse check + return (strptime(value->stringValue().c_str(), + f.second.c_str(), &tm) != NULL + && strftime(&buf[0], buf.size(),
f.second.c_str(), &tm) != 0 + && strncmp(value->stringValue().c_str(), + &buf[0], buf.size()) == 0); + } + } + return (false); +} + +void check_statistics_item_list(ConstElementPtr spec); + +void +check_statistics_item_list(ConstElementPtr spec) { + if (spec->getType() != Element::list) { + throw ModuleSpecError("statistics is not a list of elements"); + } + BOOST_FOREACH(ConstElementPtr item, spec->listValue()) { + check_config_item(item); + // additional checks for statistics + check_leaf_item(item, "item_title", Element::string, true); + check_leaf_item(item, "item_description", Element::string, true); + check_leaf_item(item, "item_format", Element::string, false); + // checks name of item_format and validation of item_default + if (item->contains("item_format") + && item->contains("item_default")) { + if(!check_format(item->get("item_default"), + item->get("item_format"))) { + throw ModuleSpecError( + "item_default not valid type of item_format"); + } + } + } +} + void check_command(ConstElementPtr spec) { check_leaf_item(spec, "command_name", Element::string, true); @@ -116,6 +171,9 @@ check_data_specification(ConstElementPtr spec) { if (spec->contains("commands")) { check_command_list(spec->get("commands")); } + if (spec->contains("statistics")) { + check_statistics_item_list(spec->get("statistics")); + } } // checks whether the given element is a valid module specification @@ -165,6 +223,15 @@ ModuleSpec::getConfigSpec() const { } } +ConstElementPtr +ModuleSpec::getStatisticsSpec() const { + if (module_specification->contains("statistics")) { + return (module_specification->get("statistics")); + } else { + return (ElementPtr()); + } +} + const std::string ModuleSpec::getModuleName() const { return (module_specification->get("module_name")->stringValue()); @@ -185,6 +252,12 @@ ModuleSpec::validateConfig(ConstElementPtr data, const bool full) const { return (validateSpecList(spec, data, full, ElementPtr())); } +bool 
+ModuleSpec::validateStatistics(ConstElementPtr data, const bool full) const { + ConstElementPtr spec = module_specification->find("statistics"); + return (validateSpecList(spec, data, full, ElementPtr())); +} + bool ModuleSpec::validateCommand(const std::string& command, ConstElementPtr args, @@ -223,6 +296,14 @@ ModuleSpec::validateConfig(ConstElementPtr data, const bool full, return (validateSpecList(spec, data, full, errors)); } +bool +ModuleSpec::validateStatistics(ConstElementPtr data, const bool full, + ElementPtr errors) const +{ + ConstElementPtr spec = module_specification->find("statistics"); + return (validateSpecList(spec, data, full, errors)); +} + ModuleSpec moduleSpecFromFile(const std::string& file_name, const bool check) throw(JSONError, ModuleSpecError) @@ -343,6 +424,14 @@ ModuleSpec::validateItem(ConstElementPtr spec, ConstElementPtr data, } } } + if (spec->contains("item_format")) { + if (!check_format(data, spec->get("item_format"))) { + if (errors) { + errors->add(Element::create("Format mismatch")); + } + return (false); + } + } return (true); } diff --git a/src/lib/config/module_spec.h b/src/lib/config/module_spec.h index ab6e273edd..ce3762f203 100644 --- a/src/lib/config/module_spec.h +++ b/src/lib/config/module_spec.h @@ -1,4 +1,4 @@ -// Copyright (C) 2010 Internet Systems Consortium. +// Copyright (C) 2010, 2011 Internet Systems Consortium. 
// // Permission to use, copy, modify, and distribute this software for any // purpose with or without fee is hereby granted, provided that the above @@ -71,6 +71,12 @@ namespace isc { namespace config { /// part of the specification isc::data::ConstElementPtr getConfigSpec() const; + /// Returns the statistics part of the specification as an + /// ElementPtr + /// \return ElementPtr Shared pointer to the statistics + /// part of the specification + isc::data::ConstElementPtr getStatisticsSpec() const; + /// Returns the full module specification as an ElementPtr /// \return ElementPtr Shared pointer to the specification isc::data::ConstElementPtr getFullSpec() const { @@ -95,6 +101,17 @@ namespace isc { namespace config { bool validateConfig(isc::data::ConstElementPtr data, const bool full = false) const; + // returns true if the given element conforms to this data + // statistics specification + /// Validates the given statistics data for this specification. + /// \param data The base \c Element of the data to check + /// \param full If true, all non-optional statistics parameters + /// must be specified. + /// \return true if the data conforms to the specification, + /// false otherwise. 
+ bool validateStatistics(isc::data::ConstElementPtr data, + const bool full = false) const; + /// Validates the arguments for the given command /// /// This checks the command and argument against the @@ -142,6 +159,10 @@ namespace isc { namespace config { bool validateConfig(isc::data::ConstElementPtr data, const bool full, isc::data::ElementPtr errors) const; + /// errors must be of type ListElement + bool validateStatistics(isc::data::ConstElementPtr data, const bool full, + isc::data::ElementPtr errors) const; + private: bool validateItem(isc::data::ConstElementPtr spec, isc::data::ConstElementPtr data, diff --git a/src/lib/config/tests/ccsession_unittests.cc b/src/lib/config/tests/ccsession_unittests.cc index 5ea4f32e3e..793fa30457 100644 --- a/src/lib/config/tests/ccsession_unittests.cc +++ b/src/lib/config/tests/ccsession_unittests.cc @@ -184,7 +184,7 @@ TEST_F(CCSessionTest, session2) { ConstElementPtr msg; std::string group, to; msg = session.getFirstMessage(group, to); - EXPECT_EQ("{ \"command\": [ \"module_spec\", { \"commands\": [ { \"command_args\": [ { \"item_default\": \"\", \"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\", \"command_name\": \"print_message\" }, { \"command_args\": [ ], \"command_description\": \"Shut down BIND 10\", \"command_name\": \"shutdown\" } ], \"config_data\": [ { \"item_default\": 1, \"item_name\": \"item1\", \"item_optional\": false, \"item_type\": \"integer\" }, { \"item_default\": 1.1, \"item_name\": \"item2\", \"item_optional\": false, \"item_type\": \"real\" }, { \"item_default\": true, \"item_name\": \"item3\", \"item_optional\": false, \"item_type\": \"boolean\" }, { \"item_default\": \"test\", \"item_name\": \"item4\", \"item_optional\": false, \"item_type\": \"string\" }, { \"item_default\": [ \"a\", \"b\" ], \"item_name\": \"item5\", \"item_optional\": false, \"item_type\": \"list\", \"list_item_spec\": { 
\"item_default\": \"\", \"item_name\": \"list_element\", \"item_optional\": false, \"item_type\": \"string\" } }, { \"item_default\": { }, \"item_name\": \"item6\", \"item_optional\": false, \"item_type\": \"map\", \"map_item_spec\": [ { \"item_default\": \"default\", \"item_name\": \"value1\", \"item_optional\": true, \"item_type\": \"string\" }, { \"item_name\": \"value2\", \"item_optional\": true, \"item_type\": \"integer\" } ] } ], \"module_name\": \"Spec2\" } ] }", msg->str()); + EXPECT_EQ("{ \"command\": [ \"module_spec\", { \"commands\": [ { \"command_args\": [ { \"item_default\": \"\", \"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\", \"command_name\": \"print_message\" }, { \"command_args\": [ ], \"command_description\": \"Shut down BIND 10\", \"command_name\": \"shutdown\" } ], \"config_data\": [ { \"item_default\": 1, \"item_name\": \"item1\", \"item_optional\": false, \"item_type\": \"integer\" }, { \"item_default\": 1.1, \"item_name\": \"item2\", \"item_optional\": false, \"item_type\": \"real\" }, { \"item_default\": true, \"item_name\": \"item3\", \"item_optional\": false, \"item_type\": \"boolean\" }, { \"item_default\": \"test\", \"item_name\": \"item4\", \"item_optional\": false, \"item_type\": \"string\" }, { \"item_default\": [ \"a\", \"b\" ], \"item_name\": \"item5\", \"item_optional\": false, \"item_type\": \"list\", \"list_item_spec\": { \"item_default\": \"\", \"item_name\": \"list_element\", \"item_optional\": false, \"item_type\": \"string\" } }, { \"item_default\": { }, \"item_name\": \"item6\", \"item_optional\": false, \"item_type\": \"map\", \"map_item_spec\": [ { \"item_default\": \"default\", \"item_name\": \"value1\", \"item_optional\": true, \"item_type\": \"string\" }, { \"item_name\": \"value2\", \"item_optional\": true, \"item_type\": \"integer\" } ] } ], \"module_name\": \"Spec2\", \"statistics\": [ { \"item_default\": 
\"1970-01-01T00:00:00Z\", \"item_description\": \"A dummy date time\", \"item_format\": \"date-time\", \"item_name\": \"dummy_time\", \"item_optional\": false, \"item_title\": \"Dummy Time\", \"item_type\": \"string\" } ] } ] }", msg->str()); EXPECT_EQ("ConfigManager", group); EXPECT_EQ("*", to); EXPECT_EQ(0, session.getMsgQueue()->size()); @@ -231,7 +231,7 @@ TEST_F(CCSessionTest, session3) { ConstElementPtr msg; std::string group, to; msg = session.getFirstMessage(group, to); - EXPECT_EQ("{ \"command\": [ \"module_spec\", { \"commands\": [ { \"command_args\": [ { \"item_default\": \"\", \"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\", \"command_name\": \"print_message\" }, { \"command_args\": [ ], \"command_description\": \"Shut down BIND 10\", \"command_name\": \"shutdown\" } ], \"config_data\": [ { \"item_default\": 1, \"item_name\": \"item1\", \"item_optional\": false, \"item_type\": \"integer\" }, { \"item_default\": 1.1, \"item_name\": \"item2\", \"item_optional\": false, \"item_type\": \"real\" }, { \"item_default\": true, \"item_name\": \"item3\", \"item_optional\": false, \"item_type\": \"boolean\" }, { \"item_default\": \"test\", \"item_name\": \"item4\", \"item_optional\": false, \"item_type\": \"string\" }, { \"item_default\": [ \"a\", \"b\" ], \"item_name\": \"item5\", \"item_optional\": false, \"item_type\": \"list\", \"list_item_spec\": { \"item_default\": \"\", \"item_name\": \"list_element\", \"item_optional\": false, \"item_type\": \"string\" } }, { \"item_default\": { }, \"item_name\": \"item6\", \"item_optional\": false, \"item_type\": \"map\", \"map_item_spec\": [ { \"item_default\": \"default\", \"item_name\": \"value1\", \"item_optional\": true, \"item_type\": \"string\" }, { \"item_name\": \"value2\", \"item_optional\": true, \"item_type\": \"integer\" } ] } ], \"module_name\": \"Spec2\" } ] }", msg->str()); + EXPECT_EQ("{ \"command\": [ 
\"module_spec\", { \"commands\": [ { \"command_args\": [ { \"item_default\": \"\", \"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\", \"command_name\": \"print_message\" }, { \"command_args\": [ ], \"command_description\": \"Shut down BIND 10\", \"command_name\": \"shutdown\" } ], \"config_data\": [ { \"item_default\": 1, \"item_name\": \"item1\", \"item_optional\": false, \"item_type\": \"integer\" }, { \"item_default\": 1.1, \"item_name\": \"item2\", \"item_optional\": false, \"item_type\": \"real\" }, { \"item_default\": true, \"item_name\": \"item3\", \"item_optional\": false, \"item_type\": \"boolean\" }, { \"item_default\": \"test\", \"item_name\": \"item4\", \"item_optional\": false, \"item_type\": \"string\" }, { \"item_default\": [ \"a\", \"b\" ], \"item_name\": \"item5\", \"item_optional\": false, \"item_type\": \"list\", \"list_item_spec\": { \"item_default\": \"\", \"item_name\": \"list_element\", \"item_optional\": false, \"item_type\": \"string\" } }, { \"item_default\": { }, \"item_name\": \"item6\", \"item_optional\": false, \"item_type\": \"map\", \"map_item_spec\": [ { \"item_default\": \"default\", \"item_name\": \"value1\", \"item_optional\": true, \"item_type\": \"string\" }, { \"item_name\": \"value2\", \"item_optional\": true, \"item_type\": \"integer\" } ] } ], \"module_name\": \"Spec2\", \"statistics\": [ { \"item_default\": \"1970-01-01T00:00:00Z\", \"item_description\": \"A dummy date time\", \"item_format\": \"date-time\", \"item_name\": \"dummy_time\", \"item_optional\": false, \"item_title\": \"Dummy Time\", \"item_type\": \"string\" } ] } ] }", msg->str()); EXPECT_EQ("ConfigManager", group); EXPECT_EQ("*", to); EXPECT_EQ(1, session.getMsgQueue()->size()); diff --git a/src/lib/config/tests/module_spec_unittests.cc b/src/lib/config/tests/module_spec_unittests.cc index d642af8286..b2ca7b45f4 100644 --- 
a/src/lib/config/tests/module_spec_unittests.cc +++ b/src/lib/config/tests/module_spec_unittests.cc @@ -1,4 +1,4 @@ -// Copyright (C) 2009 Internet Systems Consortium, Inc. ("ISC") +// Copyright (C) 2009, 2011 Internet Systems Consortium, Inc. ("ISC") // // Permission to use, copy, modify, and/or distribute this software for any // purpose with or without fee is hereby granted, provided that the above @@ -18,6 +18,8 @@ #include +#include + #include using namespace isc::data; @@ -57,6 +59,7 @@ TEST(ModuleSpec, ReadingSpecfiles) { dd = moduleSpecFromFile(specfile("spec2.spec")); EXPECT_EQ("[ { \"command_args\": [ { \"item_default\": \"\", \"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\", \"command_name\": \"print_message\" }, { \"command_args\": [ ], \"command_description\": \"Shut down BIND 10\", \"command_name\": \"shutdown\" } ]", dd.getCommandsSpec()->str()); + EXPECT_EQ("[ { \"item_default\": \"1970-01-01T00:00:00Z\", \"item_description\": \"A dummy date time\", \"item_format\": \"date-time\", \"item_name\": \"dummy_time\", \"item_optional\": false, \"item_title\": \"Dummy Time\", \"item_type\": \"string\" } ]", dd.getStatisticsSpec()->str()); EXPECT_EQ("Spec2", dd.getModuleName()); EXPECT_EQ("", dd.getModuleDescription()); @@ -64,6 +67,11 @@ TEST(ModuleSpec, ReadingSpecfiles) { EXPECT_EQ("Spec25", dd.getModuleName()); EXPECT_EQ("Just an empty module", dd.getModuleDescription()); EXPECT_THROW(moduleSpecFromFile(specfile("spec26.spec")), ModuleSpecError); + EXPECT_THROW(moduleSpecFromFile(specfile("spec34.spec")), ModuleSpecError); + EXPECT_THROW(moduleSpecFromFile(specfile("spec35.spec")), ModuleSpecError); + EXPECT_THROW(moduleSpecFromFile(specfile("spec36.spec")), ModuleSpecError); + EXPECT_THROW(moduleSpecFromFile(specfile("spec37.spec")), ModuleSpecError); + EXPECT_THROW(moduleSpecFromFile(specfile("spec38.spec")), ModuleSpecError); std::ifstream file; 
file.open(specfile("spec1.spec").c_str()); @@ -71,6 +79,7 @@ TEST(ModuleSpec, ReadingSpecfiles) { EXPECT_EQ(dd.getFullSpec()->get("module_name") ->stringValue(), "Spec1"); EXPECT_TRUE(isNull(dd.getCommandsSpec())); + EXPECT_TRUE(isNull(dd.getStatisticsSpec())); std::ifstream file2; file2.open(specfile("spec8.spec").c_str()); @@ -114,6 +123,12 @@ TEST(ModuleSpec, SpecfileConfigData) { "commands is not a list of elements"); } +TEST(ModuleSpec, SpecfileStatistics) { + moduleSpecError("spec36.spec", "item_default not valid type of item_format"); + moduleSpecError("spec37.spec", "statistics is not a list of elements"); + moduleSpecError("spec38.spec", "item_default not valid type of item_format"); +} + TEST(ModuleSpec, SpecfileCommands) { moduleSpecError("spec17.spec", "command_name missing in { \"command_args\": [ { \"item_default\": \"\", \"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\" }"); @@ -136,6 +151,17 @@ dataTest(const ModuleSpec& dd, const std::string& data_file_name) { return (dd.validateConfig(data)); } +bool +statisticsTest(const ModuleSpec& dd, const std::string& data_file_name) { + std::ifstream data_file; + + data_file.open(specfile(data_file_name).c_str()); + ConstElementPtr data = Element::fromJSON(data_file, data_file_name); + data_file.close(); + + return (dd.validateStatistics(data)); +} + bool dataTestWithErrors(const ModuleSpec& dd, const std::string& data_file_name, ElementPtr errors) @@ -149,6 +175,19 @@ dataTestWithErrors(const ModuleSpec& dd, const std::string& data_file_name, return (dd.validateConfig(data, true, errors)); } +bool +statisticsTestWithErrors(const ModuleSpec& dd, const std::string& data_file_name, + ElementPtr errors) +{ + std::ifstream data_file; + + data_file.open(specfile(data_file_name).c_str()); + ConstElementPtr data = Element::fromJSON(data_file, data_file_name); + data_file.close(); + + return (dd.validateStatistics(data, 
true, errors)); +} + TEST(ModuleSpec, DataValidation) { ModuleSpec dd = moduleSpecFromFile(specfile("spec22.spec")); @@ -175,6 +214,17 @@ TEST(ModuleSpec, DataValidation) { EXPECT_EQ("[ \"Unknown item value_does_not_exist\" ]", errors->str()); } +TEST(ModuleSpec, StatisticsValidation) { + ModuleSpec dd = moduleSpecFromFile(specfile("spec33.spec")); + + EXPECT_TRUE(statisticsTest(dd, "data33_1.data")); + EXPECT_FALSE(statisticsTest(dd, "data33_2.data")); + + ElementPtr errors = Element::createList(); + EXPECT_FALSE(statisticsTestWithErrors(dd, "data33_2.data", errors)); + EXPECT_EQ("[ \"Format mismatch\", \"Format mismatch\", \"Format mismatch\" ]", errors->str()); +} + TEST(ModuleSpec, CommandValidation) { ModuleSpec dd = moduleSpecFromFile(specfile("spec2.spec")); ConstElementPtr arg = Element::fromJSON("{}"); @@ -220,3 +270,109 @@ TEST(ModuleSpec, NamedSetValidation) { EXPECT_FALSE(dataTest(dd, "data32_2.data")); EXPECT_FALSE(dataTest(dd, "data32_3.data")); } + +TEST(ModuleSpec, CheckFormat) { + + const std::string json_begin = "{ \"module_spec\": { \"module_name\": \"Foo\", \"statistics\": [ { \"item_name\": \"dummy_time\", \"item_type\": \"string\", \"item_optional\": true, \"item_title\": \"Dummy Time\", \"item_description\": \"A dummy date time\""; + const std::string json_end = " } ] } }"; + std::string item_default; + std::string item_format; + std::vector<std::string> specs; + ConstElementPtr el; + + specs.clear(); + item_default = "\"item_default\": \"2011-05-27T19:42:57Z\","; + item_format = "\"item_format\": \"date-time\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"2011-05-27\","; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"19:42:57\","; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + item_default + item_format); + + item_format = "\"item_format\": \"date-time\""; + specs.push_back("," + item_format); +
item_default = ""; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_format); + item_default = ""; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + item_format); + + item_default = "\"item_default\": \"a\""; + specs.push_back("," + item_default); + item_default = "\"item_default\": \"b\""; + specs.push_back("," + item_default); + item_default = "\"item_default\": \"c\""; + specs.push_back("," + item_default); + + item_format = "\"item_format\": \"dummy\""; + specs.push_back("," + item_format); + + specs.push_back(""); + + BOOST_FOREACH(std::string s, specs) { + el = Element::fromJSON(json_begin + s + json_end)->get("module_spec"); + EXPECT_NO_THROW(ModuleSpec(el, true)); + } + + specs.clear(); + item_default = "\"item_default\": \"2011-05-27T19:42:57Z\","; + item_format = "\"item_format\": \"dummy\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"2011-05-27\","; + item_format = "\"item_format\": \"dummy\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"19:42:57Z\","; + item_format = "\"item_format\": \"dummy\""; + specs.push_back("," + item_default + item_format); + + item_default = "\"item_default\": \"2011-13-99T99:99:99Z\","; + item_format = "\"item_format\": \"date-time\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"2011-13-99\","; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"99:99:99Z\","; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + item_default + item_format); + + item_default = "\"item_default\": \"1\","; + item_format = "\"item_format\": \"date-time\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"1\","; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_default + item_format); + item_default = 
"\"item_default\": \"1\","; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + item_default + item_format); + + item_default = "\"item_default\": \"\","; + item_format = "\"item_format\": \"date-time\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"\","; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"\","; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + item_default + item_format); + + // wrong date-time-type format not ending with "Z" + item_default = "\"item_default\": \"2011-05-27T19:42:57\","; + item_format = "\"item_format\": \"date-time\""; + specs.push_back("," + item_default + item_format); + // wrong date-type format ending with "T" + item_default = "\"item_default\": \"2011-05-27T\","; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_default + item_format); + // wrong time-type format ending with "Z" + item_default = "\"item_default\": \"19:42:57Z\","; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + item_default + item_format); + + BOOST_FOREACH(std::string s, specs) { + el = Element::fromJSON(json_begin + s + json_end)->get("module_spec"); + EXPECT_THROW(ModuleSpec(el, true), ModuleSpecError); + } +} diff --git a/src/lib/config/tests/testdata/Makefile.am b/src/lib/config/tests/testdata/Makefile.am index 91d7f04540..0d8b92ecb5 100644 --- a/src/lib/config/tests/testdata/Makefile.am +++ b/src/lib/config/tests/testdata/Makefile.am @@ -25,6 +25,8 @@ EXTRA_DIST += data22_10.data EXTRA_DIST += data32_1.data EXTRA_DIST += data32_2.data EXTRA_DIST += data32_3.data +EXTRA_DIST += data33_1.data +EXTRA_DIST += data33_2.data EXTRA_DIST += spec1.spec EXTRA_DIST += spec2.spec EXTRA_DIST += spec3.spec @@ -57,3 +59,9 @@ EXTRA_DIST += spec29.spec EXTRA_DIST += spec30.spec EXTRA_DIST += spec31.spec EXTRA_DIST += spec32.spec +EXTRA_DIST += spec33.spec 
+EXTRA_DIST += spec34.spec +EXTRA_DIST += spec35.spec +EXTRA_DIST += spec36.spec +EXTRA_DIST += spec37.spec +EXTRA_DIST += spec38.spec diff --git a/src/lib/config/tests/testdata/data33_1.data b/src/lib/config/tests/testdata/data33_1.data new file mode 100644 index 0000000000..429852c974 --- /dev/null +++ b/src/lib/config/tests/testdata/data33_1.data @@ -0,0 +1,7 @@ +{ + "dummy_str": "Dummy String", + "dummy_int": 118, + "dummy_datetime": "2011-05-27T19:42:57Z", + "dummy_date": "2011-05-27", + "dummy_time": "19:42:57" +} diff --git a/src/lib/config/tests/testdata/data33_2.data b/src/lib/config/tests/testdata/data33_2.data new file mode 100644 index 0000000000..eb0615c1c9 --- /dev/null +++ b/src/lib/config/tests/testdata/data33_2.data @@ -0,0 +1,7 @@ +{ + "dummy_str": "Dummy String", + "dummy_int": 118, + "dummy_datetime": "xxxx", + "dummy_date": "xxxx", + "dummy_time": "xxxx" +} diff --git a/src/lib/config/tests/testdata/spec2.spec b/src/lib/config/tests/testdata/spec2.spec index 59b8ebcbbb..43524224a2 100644 --- a/src/lib/config/tests/testdata/spec2.spec +++ b/src/lib/config/tests/testdata/spec2.spec @@ -66,6 +66,17 @@ "command_description": "Shut down BIND 10", "command_args": [] } + ], + "statistics": [ + { + "item_name": "dummy_time", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01T00:00:00Z", + "item_title": "Dummy Time", + "item_description": "A dummy date time", + "item_format": "date-time" + } ] } } diff --git a/src/lib/config/tests/testdata/spec33.spec b/src/lib/config/tests/testdata/spec33.spec new file mode 100644 index 0000000000..3002488b72 --- /dev/null +++ b/src/lib/config/tests/testdata/spec33.spec @@ -0,0 +1,50 @@ +{ + "module_spec": { + "module_name": "Spec33", + "statistics": [ + { + "item_name": "dummy_str", + "item_type": "string", + "item_optional": false, + "item_default": "Dummy", + "item_title": "Dummy String", + "item_description": "A dummy string" + }, + { + "item_name": "dummy_int", + "item_type": 
"integer", + "item_optional": false, + "item_default": 0, + "item_title": "Dummy Integer", + "item_description": "A dummy integer" + }, + { + "item_name": "dummy_datetime", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01T00:00:00Z", + "item_title": "Dummy DateTime", + "item_description": "A dummy datetime", + "item_format": "date-time" + }, + { + "item_name": "dummy_date", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01", + "item_title": "Dummy Date", + "item_description": "A dummy date", + "item_format": "date" + }, + { + "item_name": "dummy_time", + "item_type": "string", + "item_optional": false, + "item_default": "00:00:00", + "item_title": "Dummy Time", + "item_description": "A dummy time", + "item_format": "time" + } + ] + } +} diff --git a/src/lib/config/tests/testdata/spec34.spec b/src/lib/config/tests/testdata/spec34.spec new file mode 100644 index 0000000000..dd1f3ca952 --- /dev/null +++ b/src/lib/config/tests/testdata/spec34.spec @@ -0,0 +1,14 @@ +{ + "module_spec": { + "module_name": "Spec34", + "statistics": [ + { + "item_name": "dummy_str", + "item_type": "string", + "item_optional": false, + "item_default": "Dummy", + "item_description": "A dummy string" + } + ] + } +} diff --git a/src/lib/config/tests/testdata/spec35.spec b/src/lib/config/tests/testdata/spec35.spec new file mode 100644 index 0000000000..86aaf145a0 --- /dev/null +++ b/src/lib/config/tests/testdata/spec35.spec @@ -0,0 +1,15 @@ +{ + "module_spec": { + "module_name": "Spec35", + "statistics": [ + { + "item_name": "dummy_str", + "item_type": "string", + "item_optional": false, + "item_default": "Dummy", + "item_title": "Dummy String" + } + ] + } +} + diff --git a/src/lib/config/tests/testdata/spec36.spec b/src/lib/config/tests/testdata/spec36.spec new file mode 100644 index 0000000000..fb9ce26084 --- /dev/null +++ b/src/lib/config/tests/testdata/spec36.spec @@ -0,0 +1,17 @@ +{ + "module_spec": { + "module_name": "Spec36", 
+ "statistics": [ + { + "item_name": "dummy_str", + "item_type": "string", + "item_optional": false, + "item_default": "Dummy", + "item_title": "Dummy String", + "item_description": "A dummy string", + "item_format": "dummy" + } + ] + } +} + diff --git a/src/lib/config/tests/testdata/spec37.spec b/src/lib/config/tests/testdata/spec37.spec new file mode 100644 index 0000000000..bc444d107c --- /dev/null +++ b/src/lib/config/tests/testdata/spec37.spec @@ -0,0 +1,7 @@ +{ + "module_spec": { + "module_name": "Spec37", + "statistics": 8 + } +} + diff --git a/src/lib/config/tests/testdata/spec38.spec b/src/lib/config/tests/testdata/spec38.spec new file mode 100644 index 0000000000..1892e887fb --- /dev/null +++ b/src/lib/config/tests/testdata/spec38.spec @@ -0,0 +1,17 @@ +{ + "module_spec": { + "module_name": "Spec38", + "statistics": [ + { + "item_name": "dummy_datetime", + "item_type": "string", + "item_optional": false, + "item_default": "11", + "item_title": "Dummy DateTime", + "item_description": "A dummy datetime", + "item_format": "date-time" + } + ] + } +} + diff --git a/src/lib/datasrc/Makefile.am b/src/lib/datasrc/Makefile.am index 261baaeb0b..db67781917 100644 --- a/src/lib/datasrc/Makefile.am +++ b/src/lib/datasrc/Makefile.am @@ -22,6 +22,8 @@ libdatasrc_la_SOURCES += zone.h libdatasrc_la_SOURCES += result.h libdatasrc_la_SOURCES += logger.h logger.cc libdatasrc_la_SOURCES += client.h +libdatasrc_la_SOURCES += database.h database.cc +libdatasrc_la_SOURCES += sqlite3_accessor.h sqlite3_accessor.cc nodist_libdatasrc_la_SOURCES = datasrc_messages.h datasrc_messages.cc libdatasrc_la_LIBADD = $(top_builddir)/src/lib/exceptions/libexceptions.la diff --git a/src/lib/datasrc/client.h b/src/lib/datasrc/client.h index a830f00c21..9fe6519532 100644 --- a/src/lib/datasrc/client.h +++ b/src/lib/datasrc/client.h @@ -15,6 +15,8 @@ #ifndef __DATA_SOURCE_CLIENT_H #define __DATA_SOURCE_CLIENT_H 1 +#include + #include namespace isc { diff --git a/src/lib/datasrc/database.cc 
b/src/lib/datasrc/database.cc new file mode 100644 index 0000000000..04fb44c483 --- /dev/null +++ b/src/lib/datasrc/database.cc @@ -0,0 +1,405 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include + +#include + +#include +#include +#include +#include +#include + +#include +#include + +#include + +using isc::dns::Name; + +namespace isc { +namespace datasrc { + +DatabaseClient::DatabaseClient(boost::shared_ptr + database) : + database_(database) +{ + if (database_.get() == NULL) { + isc_throw(isc::InvalidParameter, + "No database provided to DatabaseClient"); + } +} + +DataSourceClient::FindResult +DatabaseClient::findZone(const Name& name) const { + std::pair zone(database_->getZone(name)); + // Try exact first + if (zone.first) { + return (FindResult(result::SUCCESS, + ZoneFinderPtr(new Finder(database_, + zone.second, name)))); + } + // Then super domains + // Start from 1, as 0 is covered above + for (size_t i(1); i < name.getLabelCount(); ++i) { + isc::dns::Name superdomain(name.split(i)); + zone = database_->getZone(superdomain); + if (zone.first) { + return (FindResult(result::PARTIALMATCH, + ZoneFinderPtr(new Finder(database_, + zone.second, + superdomain)))); + } + } + // No, really nothing + return 
(FindResult(result::NOTFOUND, ZoneFinderPtr())); +} + +DatabaseClient::Finder::Finder(boost::shared_ptr + database, int zone_id, + const isc::dns::Name& origin) : + database_(database), + zone_id_(zone_id), + origin_(origin) +{ } + +namespace { +// Adds the given Rdata to the given RRset +// If the rrset is an empty pointer, a new one is +// created with the given name, class, type and ttl +// The type is checked if the rrset exists, but the +// name is not. +// +// Then adds the given rdata to the set +// +// Raises a DataSourceError if the type does not +// match, or if the given rdata string does not +// parse correctly for the given type and class +// +// The DatabaseAccessor is passed to print the +// database name in the log message if the TTL is +// modified +void addOrCreate(isc::dns::RRsetPtr& rrset, + const isc::dns::Name& name, + const isc::dns::RRClass& cls, + const isc::dns::RRType& type, + const isc::dns::RRTTL& ttl, + const std::string& rdata_str, + const DatabaseAccessor& db + ) +{ + if (!rrset) { + rrset.reset(new isc::dns::RRset(name, cls, type, ttl)); + } else { + // This is a check to make sure find() is not messing things up + assert(type == rrset->getType()); + if (ttl != rrset->getTTL()) { + if (ttl < rrset->getTTL()) { + rrset->setTTL(ttl); + } + logger.info(DATASRC_DATABASE_FIND_TTL_MISMATCH) + .arg(db.getDBName()).arg(name).arg(cls) + .arg(type).arg(rrset->getTTL()); + } + } + try { + rrset->addRdata(isc::dns::rdata::createRdata(type, cls, rdata_str)); + } catch (const isc::dns::rdata::InvalidRdataText& ivrt) { + // at this point, rrset may have been initialised for no reason, + // and won't be used. But the caller would drop the shared_ptr + // on such an error anyway, so we don't care. + isc_throw(DataSourceError, + "bad rdata in database for " << name << " " + << type << ": " << ivrt.what()); + } +} + +// This class keeps a short-lived store of RRSIG records encountered +// during a call to find(). 
If the backend happens to return signatures +before the actual data, we might not know which signatures we will need. +So if they may be relevant, we store them in this class. +// +// (If this class seems useful in other places, we might want to move +// it to util. That would also provide an opportunity to add unit tests) +class RRsigStore { +public: + // Adds the given signature Rdata to the store + // The signature rdata MUST be of the RRSIG rdata type + // (the caller must make sure of this). + // NOTE: if we move this class to a public namespace, + // we should add a type_covered argument, so as not + // to have to do this cast here. + void addSig(isc::dns::rdata::RdataPtr sig_rdata) { + const isc::dns::RRType& type_covered = + static_cast( + sig_rdata.get())->typeCovered(); + sigs[type_covered].push_back(sig_rdata); + } + + // If the store contains signatures for the type of the given + // rrset, they are appended to it. + void appendSignatures(isc::dns::RRsetPtr& rrset) const { + std::map >::const_iterator + found = sigs.find(rrset->getType()); + if (found != sigs.end()) { + BOOST_FOREACH(isc::dns::rdata::RdataPtr sig, found->second) { + rrset->addRRsig(sig); + } + } + } + +private: + std::map > sigs; +}; +} + +std::pair +DatabaseClient::Finder::getRRset(const isc::dns::Name& name, + const isc::dns::RRType* type, + bool want_cname, bool want_dname, + bool want_ns) +{ + RRsigStore sig_store; + database_->searchForRecords(zone_id_, name.toText()); + bool records_found = false; + isc::dns::RRsetPtr result_rrset; + + std::string columns[DatabaseAccessor::COLUMN_COUNT]; + while (database_->getNextRecord(columns, DatabaseAccessor::COLUMN_COUNT)) { + if (!records_found) { + records_found = true; + } + + try { + const isc::dns::RRType cur_type(columns[DatabaseAccessor:: + TYPE_COLUMN]); + const isc::dns::RRTTL cur_ttl(columns[DatabaseAccessor:: + TTL_COLUMN]); + // The sigtype column was an optimization for finding the + // relevant RRSIG RRs for a lookup. 
Currently this column is + // not used in this revised datasource implementation. We + // should either start using it again, or remove it from use + // completely (i.e. also remove it from the schema and the + // backend implementation). + // Note that because we don't use it now, we also won't notice + // it if the value is wrong (i.e. if the sigtype column + // contains an rrtype that is different from the actual value + // of the 'type covered' field in the RRSIG Rdata). + //cur_sigtype(columns[SIGTYPE_COLUMN]); + + // Check for delegations before checking for the right type. + // This is needed to properly delegate request for the NS + // record itself. + // + // This happens with NS only, CNAME must be alone and DNAME + // is not checked in the exact queried domain. + if (want_ns && cur_type == isc::dns::RRType::NS()) { + if (result_rrset && + result_rrset->getType() != isc::dns::RRType::NS()) { + isc_throw(DataSourceError, "NS found together with data" + " in non-apex domain " + name.toText()); + } + addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, + columns[DatabaseAccessor::RDATA_COLUMN], + *database_); + } else if (type != NULL && cur_type == *type) { + if (result_rrset && + result_rrset->getType() == isc::dns::RRType::CNAME()) { + isc_throw(DataSourceError, "CNAME found but it is not " + "the only record for " + name.toText()); + } else if (result_rrset && want_ns && + result_rrset->getType() == isc::dns::RRType::NS()) { + isc_throw(DataSourceError, "NS found together with data" + " in non-apex domain " + name.toText()); + } + addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, + columns[DatabaseAccessor::RDATA_COLUMN], + *database_); + } else if (want_cname && cur_type == isc::dns::RRType::CNAME()) { + // There should be no other data, so result_rrset should + // be empty. 
+ if (result_rrset) { + isc_throw(DataSourceError, "CNAME found but it is not " + "the only record for " + name.toText()); + } + addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, + columns[DatabaseAccessor::RDATA_COLUMN], + *database_); + } else if (want_dname && cur_type == isc::dns::RRType::DNAME()) { + // There should be max one RR of DNAME present + if (result_rrset && + result_rrset->getType() == isc::dns::RRType::DNAME()) { + isc_throw(DataSourceError, "DNAME with multiple RRs in " + + name.toText()); + } + addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, + columns[DatabaseAccessor::RDATA_COLUMN], + *database_); + } else if (cur_type == isc::dns::RRType::RRSIG()) { + // If we get signatures before we get the actual data, we + // can't know which ones to keep and which to drop... + // So we keep a separate store of any signature that may be + // relevant and add them to the final RRset when we are + // done. + // A possible optimization here is to not store them for + // types we are certain we don't need + sig_store.addSig(isc::dns::rdata::createRdata(cur_type, + getClass(), columns[DatabaseAccessor::RDATA_COLUMN])); + } + } catch (const isc::dns::InvalidRRType& irt) { + isc_throw(DataSourceError, "Invalid RRType in database for " << + name << ": " << columns[DatabaseAccessor:: + TYPE_COLUMN]); + } catch (const isc::dns::InvalidRRTTL& irttl) { + isc_throw(DataSourceError, "Invalid TTL in database for " << + name << ": " << columns[DatabaseAccessor:: + TTL_COLUMN]); + } catch (const isc::dns::rdata::InvalidRdataText& ird) { + isc_throw(DataSourceError, "Invalid rdata in database for " << + name << ": " << columns[DatabaseAccessor:: + RDATA_COLUMN]); + } + } + if (result_rrset) { + sig_store.appendSignatures(result_rrset); + } + return (std::pair(records_found, result_rrset)); +} + + +ZoneFinder::FindResult +DatabaseClient::Finder::find(const isc::dns::Name& name, + const isc::dns::RRType& type, + isc::dns::RRsetList*, + const 
FindOptions options) +{ + // This variable is used to determine the difference between + // NXDOMAIN and NXRRSET + bool records_found = false; + bool glue_ok(options & FIND_GLUE_OK); + isc::dns::RRsetPtr result_rrset; + ZoneFinder::Result result_status = SUCCESS; + std::pair found; + logger.debug(DBG_TRACE_DETAILED, DATASRC_DATABASE_FIND_RECORDS) + .arg(database_->getDBName()).arg(name).arg(type); + + try { + // First, do we have any kind of delegation (NS/DNAME) here? + Name origin(getOrigin()); + size_t origin_label_count(origin.getLabelCount()); + size_t current_label_count(name.getLabelCount()); + // This is how many labels we remove to get origin + size_t remove_labels(current_label_count - origin_label_count); + + // Now go through all superdomains from origin down + for (int i(remove_labels); i > 0; --i) { + Name superdomain(name.split(i)); + // Look if there's NS or DNAME (but ignore the NS in origin) + found = getRRset(superdomain, NULL, false, true, + i != remove_labels && !glue_ok); + if (found.second) { + // We found something redirecting somewhere else + // (it can be only NS or DNAME here) + result_rrset = found.second; + if (result_rrset->getType() == isc::dns::RRType::NS()) { + LOG_DEBUG(logger, DBG_TRACE_DETAILED, + DATASRC_DATABASE_FOUND_DELEGATION). + arg(database_->getDBName()).arg(superdomain); + result_status = DELEGATION; + } else { + LOG_DEBUG(logger, DBG_TRACE_DETAILED, + DATASRC_DATABASE_FOUND_DNAME). 
+ arg(database_->getDBName()).arg(superdomain); + result_status = DNAME; + } + // Don't search more + break; + } + } + + if (!result_rrset) { // Only if we didn't find a redirect already + // Try getting the final result and extract it + // It is special if there's a CNAME or NS, DNAME is ignored here + // And we don't consider the NS in origin + found = getRRset(name, &type, true, false, + name != origin && !glue_ok); + records_found = found.first; + result_rrset = found.second; + if (result_rrset && name != origin && !glue_ok && + result_rrset->getType() == isc::dns::RRType::NS()) { + LOG_DEBUG(logger, DBG_TRACE_DETAILED, + DATASRC_DATABASE_FOUND_DELEGATION_EXACT). + arg(database_->getDBName()).arg(name); + result_status = DELEGATION; + } else if (result_rrset && type != isc::dns::RRType::CNAME() && + result_rrset->getType() == isc::dns::RRType::CNAME()) { + result_status = CNAME; + } + } + } catch (const DataSourceError& dse) { + logger.error(DATASRC_DATABASE_FIND_ERROR) + .arg(database_->getDBName()).arg(dse.what()); + // call cleanup and rethrow + database_->resetSearch(); + throw; + } catch (const isc::Exception& isce) { + logger.error(DATASRC_DATABASE_FIND_UNCAUGHT_ISC_ERROR) + .arg(database_->getDBName()).arg(isce.what()); + // cleanup, change it to a DataSourceError and rethrow + database_->resetSearch(); + isc_throw(DataSourceError, isce.what()); + } catch (const std::exception& ex) { + logger.error(DATASRC_DATABASE_FIND_UNCAUGHT_ERROR) + .arg(database_->getDBName()).arg(ex.what()); + database_->resetSearch(); + throw; + } + + if (!result_rrset) { + if (records_found) { + logger.debug(DBG_TRACE_DETAILED, + DATASRC_DATABASE_FOUND_NXRRSET) + .arg(database_->getDBName()).arg(name) + .arg(getClass()).arg(type); + result_status = NXRRSET; + } else { + logger.debug(DBG_TRACE_DETAILED, + DATASRC_DATABASE_FOUND_NXDOMAIN) + .arg(database_->getDBName()).arg(name) + .arg(getClass()).arg(type); + result_status = NXDOMAIN; + } + } else { + 
logger.debug(DBG_TRACE_DETAILED, + DATASRC_DATABASE_FOUND_RRSET) + .arg(database_->getDBName()).arg(*result_rrset); + } + return (FindResult(result_status, result_rrset)); +} + +Name +DatabaseClient::Finder::getOrigin() const { + return (origin_); +} + +isc::dns::RRClass +DatabaseClient::Finder::getClass() const { + // TODO Implement + return isc::dns::RRClass::IN(); +} + +} +} diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h new file mode 100644 index 0000000000..95782ef3bb --- /dev/null +++ b/src/lib/datasrc/database.h @@ -0,0 +1,367 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __DATABASE_DATASRC_H +#define __DATABASE_DATASRC_H + +#include + +#include + +namespace isc { +namespace datasrc { + +/** + * \brief Abstraction of a low-level database with DNS data + * + * This class defines the interface to databases. Each supported database + * will provide methods for accessing the data stored there in a generic + * manner. The methods are meant to be low-level, without much or any knowledge + * about DNS, and it should be possible to translate them directly to queries. 
+ * + * On the other hand, how the communication with the database is done and in what + * schema (in case of a relational/SQL database) is up to the concrete classes. + * + * This class is non-copyable, as copying connections to a database makes little + * sense and will not be needed. + * + * \todo Is it true this does not need to be copied? For example the zone + * iterator might need its own copy. But a virtual clone() method might + * be better for that than a copy constructor. + * + * \note The same application may create multiple connections to the same + * database, having multiple instances of this class. If the database + * allows having multiple open queries at one connection, the connection + * class may share it. + */ +class DatabaseAccessor : boost::noncopyable { +public: + /** + * \brief Destructor + * + * It is empty, but needs to be virtual, since we will use the derived + * classes in a polymorphic way. + */ + virtual ~DatabaseAccessor() { } + /** + * \brief Retrieve a zone identifier + * + * This method looks up a zone for the given name in the database. It + * should match only the exact zone name (e.g. name is equal to the zone's + * apex), as the DatabaseClient will loop through the labels itself and + * find the most suitable zone. + * + * It is not specified if and what an implementation of this method may throw, + * so code should expect anything. + * + * \param name The name of the zone's apex to be looked up. + * \return The first part of the result indicates if a matching zone + * was found. In case it was, the second part is the internal zone ID. + * This one will be passed to methods finding data in the zone. + * It is not required to keep them, in which case whatever might + * be returned - the ID is only passed back to the database as + * an opaque handle. 
+ */ + virtual std::pair getZone(const isc::dns::Name& name) const = 0; + + /** + * \brief Starts a new search for records of the given name in the given zone + * + * The data searched by this call can be retrieved with subsequent calls to + * getNextRecord(). + * + * \exception DataSourceError if there is a problem connecting to the + * backend database + * + * \param zone_id The zone to search in, as returned by getZone() + * \param name The name of the records to find + */ + virtual void searchForRecords(int zone_id, const std::string& name) = 0; + + /** + * \brief Retrieves the next record from the search started with searchForRecords() + * + * Returns a boolean specifying whether or not there was more data to read. + * In the case of a database error, a DataSourceError is thrown. + * + * The columns argument is an array of std::strings consisting of + * DatabaseAccessor::COLUMN_COUNT elements, the elements of which + * are defined in DatabaseAccessor::RecordColumns, in their basic + * string representation. + * + * If you are implementing a derived database connection class, you + * should have this method check the column_count value, and fill the + * array with strings conforming to their description in RecordColumns. + * + * \exception DataSourceError if there was an error reading from the database + * + * \param columns The elements of this array will be filled with the data + * for one record as defined by RecordColumns. + * If there was no data, the array is untouched. + * \return true if there was a next record, false if there was not + */ + virtual bool getNextRecord(std::string columns[], size_t column_count) = 0; + + /** + * \brief Resets the current search initiated with searchForRecords() + * + * This method will be called when the caller of searchForRecords() and + * getNextRecord() finds bad data, and aborts the current search. 
It should clean up whatever handlers searchForRecords() created, and + * any other state modified or needed by getNextRecord() + * + * Of course, the implementation of getNextRecord() may also use it when + * it is done with a search. If it does, the implementation of this + * method should make sure it can handle being called multiple times. + * + * The implementation of this method should make sure it never throws. + */ + virtual void resetSearch() = 0; + + /** + * Definitions of the fields as they are required to be filled in + * by getNextRecord() + * + * When implementing getNextRecord(), the columns array should + * be filled with the values as described in this enumeration, + * in this order, i.e. TYPE_COLUMN should be the first element + * (index 0) of the array, TTL_COLUMN should be the second element + * (index 1), etc. + */ + enum RecordColumns { + TYPE_COLUMN = 0, ///< The RRType of the record (A/NS/TXT etc.) + TTL_COLUMN = 1, ///< The TTL of the record (a numeric value) + SIGTYPE_COLUMN = 2, ///< For RRSIG records, this contains the RRTYPE + ///< the RRSIG covers. In the current implementation, + ///< this field is ignored. + RDATA_COLUMN = 3 ///< Full text representation of the record's RDATA + }; + + /// The number of fields the columns array passed to getNextRecord should have + static const size_t COLUMN_COUNT = 4; + + /** + * \brief Returns a string identifying this database backend + * + * The returned string is mainly intended to be used for + * debugging/logging purposes. + * + * Any implementation is free to choose the exact string content, + * but it is advisable to make it a name that is distinguishable + * from the others. + * + * \return the name of the database + */ + virtual const std::string& getDBName() const = 0; +}; + +/** + * \brief Concrete data source client oriented at database backends. + * + * This class (together with corresponding versions of ZoneFinder, + * ZoneIterator, etc.) 
translates high-level data source queries to + * low-level calls on the DatabaseAccessor. It makes multiple queries + * if necessary and validates data from the database, allowing the + * DatabaseAccessor to be just a simple translation to SQL/other + * queries to the database. + * + * While it is possible to subclass it for a specific database in case + * of special needs, it is not expected to be needed. This should just + * work as it is with whatever DatabaseAccessor. + */ +class DatabaseClient : public DataSourceClient { +public: + /** + * \brief Constructor + * + * It initializes the client with a database. + * + * \exception isc::InvalidParameter if database is NULL. It might throw + * a standard allocation exception as well, but doesn't throw anything else. + * + * \param database The database to use to get data. As the parameter + * suggests, the client takes ownership of the database and will + * delete it when itself deleted. + */ + DatabaseClient(boost::shared_ptr database); + /** + * \brief Corresponding ZoneFinder implementation + * + * The zone finder implementation for database data sources. Similarly + * to the DatabaseClient, it translates the queries to methods of the + * database. + * + * Applications should not come into direct contact with this class + * (they should handle it through a generic ZoneFinder pointer), therefore + * it could be completely hidden in the .cc file. But it is provided + * to allow testing and for rare cases when a database needs slightly + * different handling, so it can be subclassed. + * + * The methods directly correspond to the ones in ZoneFinder. + */ + class Finder : public ZoneFinder { + public: + /** + * \brief Constructor + * + * \param database The database (shared with DatabaseClient) to + * be used for queries (the one asked for ID before). + * \param zone_id The zone ID which was returned from + * DatabaseAccessor::getZone and which will be passed to further + * calls to the database. 
+ \param origin The name of the origin of this zone. It could query + * it from the database, but as the DatabaseClient just searched for + * the zone using the name, it should have it. + */ + Finder(boost::shared_ptr database, int zone_id, + const isc::dns::Name& origin); + // The following three methods are just implementations of inherited + // ZoneFinder's pure virtual methods. + virtual isc::dns::Name getOrigin() const; + virtual isc::dns::RRClass getClass() const; + + /** + * \brief Find an RRset in the datasource + * + * Searches the datasource for an RRset of the given name and + * type. If there is a CNAME at the given name, the CNAME rrset + * is returned. + * (this implementation is not complete, and currently only + * does full matches, CNAMEs, and the signatures for matches and + * CNAMEs) + * \note target was used in the original design to handle ANY + * queries. This is not implemented yet, and may use + * target again for that, but it might also use something + * different. It is left in for compatibility at the moment. + * \note options are ignored at this moment + * + * \note Perhaps counter-intuitively, this method is not a const member + * function. This is intentional; some of the underlying implementations + * are expected to use a database backend, and would internally contain + * some abstraction of "database connection". In the most strict sense + * any (even read only) operation might change the internal state of + * such a connection, and in that sense the operation cannot be considered + * "const". In order to avoid giving a false sense of safety to the + * caller, we indicate a call to this method may have a surprising + * side effect. That said, this view may be too strict and it may + * make sense to say the internal database connection doesn't affect + * external behavior in terms of the interface of this method. As + * we gain more experience with various kinds of backends we may + * revisit the constness. 
+ * + * \exception DataSourceError when there is a problem reading + * the data from the database backend. + * This can be a connection, code, or + * data (parse) error. + * + * \param name The name to find + * \param type The RRType to find + * \param target Unused at this moment + * \param options Options about how to search. + * See ZoneFinder::FindOptions. + */ + virtual FindResult find(const isc::dns::Name& name, + const isc::dns::RRType& type, + isc::dns::RRsetList* target = NULL, + const FindOptions options = FIND_DEFAULT); + + /** + * \brief The zone ID + * + * This function provides the stored zone ID as passed to the + * constructor. This is meant for testing purposes and normal + * applications shouldn't need it. + */ + int zone_id() const { return (zone_id_); } + /** + * \brief The database. + * + * This function provides the database stored inside as + * passed to the constructor. This is meant for testing purposes and + * normal applications shouldn't need it. + */ + const DatabaseAccessor& database() const { + return (*database_); + } + private: + boost::shared_ptr database_; + const int zone_id_; + const isc::dns::Name origin_; + /** + * \brief Searches the database for an RRset + * + * This method scans the RRs of a single domain specified by name and + * finds an RRset with the given type or any of the redirection RRsets + * that are requested. + * + * This function is used internally by find(), because this part is + * called multiple times with slightly different parameters. + * + * \param name Which domain name should be scanned. + * \param type The RRType which is requested. This can be NULL, in + * which case the method will look for the redirections only. + * \param want_cname If this is true, CNAME redirection may be returned + * instead of the RRset with the given type. If there's a CNAME and + * something else, or the CNAME has multiple RRs, it throws + * DataSourceError. + * \param want_dname If this is true, DNAME redirection may be returned + * instead. 
This is with type = NULL only and is not checked in + * other circumstances. If the DNAME has multiple RRs, it throws + * DataSourceError. + * \param want_ns This allows redirection by NS to be returned. If + * any other data is met as well, DataSourceError is thrown. + * \note It may happen that some of the above error conditions are not + * detected in some circumstances. The goal here is not to validate + * the domain in the DB, but to avoid bad behaviour resulting from + * broken data. + * \return The first part of the result tells if the domain contains any + * RRs. This can be used to decide between NXDOMAIN and NXRRSET. + * The second part is the RRset found (if any) with any relevant + * signatures attached to it. + * \todo This interface doesn't look very elegant. Any better idea + * would be nice. + */ + std::pair getRRset(const isc::dns::Name& + name, + const isc::dns::RRType* + type, + bool want_cname, + bool want_dname, + bool want_ns); + }; + /** + * \brief Find a zone in the database + * + * This queries the database's getZone() to find the best matching zone. + * It will propagate whatever exceptions are thrown from that method + * (which is not restricted in any way). + * + * \param name Name of the zone or data contained there. + * \return FindResult containing the code and an instance of Finder, if + * anything is found. However, applications should not rely on the + * ZoneFinder being an instance of Finder (a possible subclass of this + * class may return something else and it may change in future versions); + * it should be used as a ZoneFinder only. + */ + virtual FindResult findZone(const isc::dns::Name& name) const; + +private: + /// \brief Our database. 
+ const boost::shared_ptr<DatabaseAccessor> database_; +}; + +} +} + +#endif diff --git a/src/lib/datasrc/datasrc_messages.mes b/src/lib/datasrc/datasrc_messages.mes index 3dc69e070d..190adbe3ac 100644 --- a/src/lib/datasrc/datasrc_messages.mes +++ b/src/lib/datasrc/datasrc_messages.mes @@ -63,6 +63,60 @@ The maximum allowed number of items of the hotspot cache is set to the given number. If there are too many, some of them will be dropped. The size of 0 means no limit. +% DATASRC_DATABASE_FIND_ERROR error retrieving data from datasource %1: %2 +This was an internal error while reading data from a datasource. This can either +mean the specific data source implementation is not behaving correctly, or the +data it provides is invalid. The current search is aborted. +The error message contains specific information about the error. + +% DATASRC_DATABASE_FIND_RECORDS looking in datasource %1 for record %2/%3 +Debug information. The database data source is looking up records with the given +name and type in the database. + +% DATASRC_DATABASE_FIND_TTL_MISMATCH TTL values differ in %1 for elements of %2/%3/%4, setting to %5 +The datasource backend provided resource records for the given RRset with +different TTL values. The TTL of the RRset is set to the lowest value, which +is printed in the log message. + +% DATASRC_DATABASE_FIND_UNCAUGHT_ERROR uncaught general error retrieving data from datasource %1: %2 +There was an uncaught general exception while reading data from a datasource. +This most likely points to a logic error in the code, and can be considered a +bug. The current search is aborted. Specific information about the exception is +printed in this error message. + +% DATASRC_DATABASE_FIND_UNCAUGHT_ISC_ERROR uncaught error retrieving data from datasource %1: %2 +There was an uncaught ISC exception while reading data from a datasource. This +most likely points to a logic error in the code, and can be considered a bug. +The current search is aborted.
Specific information about the exception is +printed in this error message. + +% DATASRC_DATABASE_FOUND_DELEGATION Found delegation at %2 in %1 +When searching for a domain, the program met a delegation to a different zone +at the given domain name. It will return that one instead. + +% DATASRC_DATABASE_FOUND_DELEGATION_EXACT Found delegation at %2 (exact match) in %1 +The program found the domain requested, but it is a delegation point to a +different zone, therefore it is not authoritative for this domain name. +It will return the NS record instead. + +% DATASRC_DATABASE_FOUND_DNAME Found DNAME at %2 in %1 +When searching for a domain, the program met a DNAME redirection to a different +place in the domain space at the given domain name. It will return that one +instead. + +% DATASRC_DATABASE_FOUND_NXDOMAIN search in datasource %1 resulted in NXDOMAIN for %2/%3/%4 +The data returned by the database backend did not contain any data for the given +domain name, class and type. + +% DATASRC_DATABASE_FOUND_NXRRSET search in datasource %1 resulted in NXRRSET for %2/%3/%4 +The data returned by the database backend contained data for the given domain +name and class, but not for the given type. + +% DATASRC_DATABASE_FOUND_RRSET search in datasource %1 resulted in RRset %2 +The data returned by the database backend contained data for the given domain +name, and it either matches the type or has a relevant type. The RRset that is +returned is printed. + % DATASRC_DO_QUERY handling query for '%1/%2' A debug message indicating that a query for the given name and RR type is being processed. @@ -400,12 +454,22 @@ enough information for it. The code is 1 for error, 2 for not implemented. % DATASRC_SQLITE_CLOSE closing SQLite database Debug information. The SQLite data source is closing the database file. + +% DATASRC_SQLITE_CONNOPEN Opening sqlite database file '%1' +The database file is being opened so it can start providing data. 
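The DATASRC_DATABASE_FOUND_NXDOMAIN and DATASRC_DATABASE_FOUND_NXRRSET messages above hinge on one distinction: NXDOMAIN means the name has no records at all, while NXRRSET means the name exists but lacks the requested type. A minimal sketch of that decision, using mock types rather than the real datasource classes:

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// Hypothetical result codes mirroring the distinction in the log messages.
enum class FindStatus { SUCCESS, NXDOMAIN, NXRRSET };

// Mock "zone": maps a domain name to the set of RR types stored for it.
using MockZone = std::map<std::string, std::set<std::string>>;

FindStatus classify(const MockZone& zone, const std::string& name,
                    const std::string& rtype) {
    const auto it = zone.find(name);
    if (it == zone.end()) {
        return FindStatus::NXDOMAIN;   // no RRs for this name at all
    }
    if (it->second.count(rtype) == 0) {
        return FindStatus::NXRRSET;    // name exists, but not this type
    }
    return FindStatus::SUCCESS;
}
```

This is the same logic the getRRset() helper's return value is documented to support ("This can be used to decide between NXDOMAIN and NXRRSET").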
+ +% DATASRC_SQLITE_CONNCLOSE Closing sqlite database +The database file is no longer needed and is being closed. + % DATASRC_SQLITE_CREATE SQLite data source created Debug information. An instance of SQLite data source is being created. % DATASRC_SQLITE_DESTROY SQLite data source destroyed Debug information. An instance of SQLite data source is being destroyed. +% DATASRC_SQLITE_DROPCONN SQLite3Database is being deinitialized +The object around a database connection is being destroyed. + % DATASRC_SQLITE_ENCLOSURE looking for zone containing '%1' Debug information. The SQLite data source is trying to identify which zone should hold this domain. @@ -458,6 +522,9 @@ source. The SQLite data source was asked to provide a NSEC3 record for given zone. But it doesn't contain that zone. +% DATASRC_SQLITE_NEWCONN SQLite3Database is being initialized +A wrapper object to hold database connection is being initialized. + % DATASRC_SQLITE_OPEN opening SQLite database '%1' Debug information. The SQLite data source is loading an SQLite database in the provided file. @@ -496,4 +563,3 @@ data source. % DATASRC_UNEXPECTED_QUERY_STATE unexpected query state This indicates a programming error. An internal task of unknown type was generated. 
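The DATASRC_DATABASE_FIND_TTL_MISMATCH message documented above says that when records of one RRset disagree on TTL, the RRset gets the lowest value. The reconciliation rule itself is trivial; a sketch with plain integers (not the real RRset class):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Given the TTLs of the individual records that make up one RRset, pick
// the TTL the whole RRset should carry: the lowest one, as described by
// the DATASRC_DATABASE_FIND_TTL_MISMATCH message.
uint32_t reconcileTTL(const std::vector<uint32_t>& ttls) {
    return *std::min_element(ttls.begin(), ttls.end());
}
```

Choosing the minimum is the conservative option: no cached copy of the RRset can then outlive the shortest-lived record it was built from.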
- diff --git a/src/lib/datasrc/memory_datasrc.cc b/src/lib/datasrc/memory_datasrc.cc index 3d24ce0200..d06cd9ba43 100644 --- a/src/lib/datasrc/memory_datasrc.cc +++ b/src/lib/datasrc/memory_datasrc.cc @@ -606,19 +606,19 @@ InMemoryZoneFinder::~InMemoryZoneFinder() { delete impl_; } -const Name& +Name InMemoryZoneFinder::getOrigin() const { return (impl_->origin_); } -const RRClass& +RRClass InMemoryZoneFinder::getClass() const { return (impl_->zone_class_); } ZoneFinder::FindResult InMemoryZoneFinder::find(const Name& name, const RRType& type, - RRsetList* target, const FindOptions options) const + RRsetList* target, const FindOptions options) { return (impl_->find(name, type, target, options)); } diff --git a/src/lib/datasrc/memory_datasrc.h b/src/lib/datasrc/memory_datasrc.h index 9bed9603c1..0234a916f8 100644 --- a/src/lib/datasrc/memory_datasrc.h +++ b/src/lib/datasrc/memory_datasrc.h @@ -58,10 +58,10 @@ public: //@} /// \brief Returns the origin of the zone. - virtual const isc::dns::Name& getOrigin() const; + virtual isc::dns::Name getOrigin() const; /// \brief Returns the class of the zone. - virtual const isc::dns::RRClass& getClass() const; + virtual isc::dns::RRClass getClass() const; /// \brief Looks up an RRset in the zone. /// @@ -73,7 +73,7 @@ public: virtual FindResult find(const isc::dns::Name& name, const isc::dns::RRType& type, isc::dns::RRsetList* target = NULL, - const FindOptions options = FIND_DEFAULT) const; + const FindOptions options = FIND_DEFAULT); /// \brief Inserts an rrset into the zone. /// diff --git a/src/lib/datasrc/sqlite3_accessor.cc b/src/lib/datasrc/sqlite3_accessor.cc new file mode 100644 index 0000000000..817d53087f --- /dev/null +++ b/src/lib/datasrc/sqlite3_accessor.cc @@ -0,0 +1,412 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. 
("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include + +#include +#include +#include +#include + +namespace isc { +namespace datasrc { + +struct SQLite3Parameters { + SQLite3Parameters() : + db_(NULL), version_(-1), + q_zone_(NULL), q_any_(NULL) + /*q_record_(NULL), q_addrs_(NULL), q_referral_(NULL), + q_count_(NULL), q_previous_(NULL), q_nsec3_(NULL), + q_prevnsec3_(NULL) */ + {} + sqlite3* db_; + int version_; + sqlite3_stmt* q_zone_; + sqlite3_stmt* q_any_; + /* + TODO: Yet unneeded statements + sqlite3_stmt* q_record_; + sqlite3_stmt* q_addrs_; + sqlite3_stmt* q_referral_; + sqlite3_stmt* q_count_; + sqlite3_stmt* q_previous_; + sqlite3_stmt* q_nsec3_; + sqlite3_stmt* q_prevnsec3_; + */ +}; + +SQLite3Database::SQLite3Database(const std::string& filename, + const isc::dns::RRClass& rrclass) : + dbparameters_(new SQLite3Parameters), + class_(rrclass.toText()), + database_name_("sqlite3_" + + isc::util::Filename(filename).nameAndExtension()) +{ + LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_NEWCONN); + + open(filename); +} + +namespace { + +// This is a helper class to initialize a Sqlite3 DB safely. An object of +// this class encapsulates all temporary resources that are necessary for +// the initialization, and release them in the destructor. 
Once everything +// is properly initialized, the move() method moves the allocated resources +// to the main object in an exception free manner. This way, the main code +// for the initialization can be exception safe, and can provide the strong +// exception guarantee. +class Initializer { +public: + ~Initializer() { + if (params_.q_zone_ != NULL) { + sqlite3_finalize(params_.q_zone_); + } + if (params_.q_any_ != NULL) { + sqlite3_finalize(params_.q_any_); + } + /* + if (params_.q_record_ != NULL) { + sqlite3_finalize(params_.q_record_); + } + if (params_.q_addrs_ != NULL) { + sqlite3_finalize(params_.q_addrs_); + } + if (params_.q_referral_ != NULL) { + sqlite3_finalize(params_.q_referral_); + } + if (params_.q_count_ != NULL) { + sqlite3_finalize(params_.q_count_); + } + if (params_.q_previous_ != NULL) { + sqlite3_finalize(params_.q_previous_); + } + if (params_.q_nsec3_ != NULL) { + sqlite3_finalize(params_.q_nsec3_); + } + if (params_.q_prevnsec3_ != NULL) { + sqlite3_finalize(params_.q_prevnsec3_); + } + */ + if (params_.db_ != NULL) { + sqlite3_close(params_.db_); + } + } + void move(SQLite3Parameters* dst) { + *dst = params_; + params_ = SQLite3Parameters(); // clear everything + } + SQLite3Parameters params_; +}; + +const char* const SCHEMA_LIST[] = { + "CREATE TABLE schema_version (version INTEGER NOT NULL)", + "INSERT INTO schema_version VALUES (1)", + "CREATE TABLE zones (id INTEGER PRIMARY KEY, " + "name STRING NOT NULL COLLATE NOCASE, " + "rdclass STRING NOT NULL COLLATE NOCASE DEFAULT 'IN', " + "dnssec BOOLEAN NOT NULL DEFAULT 0)", + "CREATE INDEX zones_byname ON zones (name)", + "CREATE TABLE records (id INTEGER PRIMARY KEY, " + "zone_id INTEGER NOT NULL, name STRING NOT NULL COLLATE NOCASE, " + "rname STRING NOT NULL COLLATE NOCASE, ttl INTEGER NOT NULL, " + "rdtype STRING NOT NULL COLLATE NOCASE, sigtype STRING COLLATE NOCASE, " + "rdata STRING NOT NULL)", + "CREATE INDEX records_byname ON records (name)", + "CREATE INDEX records_byrname ON 
records (rname)", + "CREATE TABLE nsec3 (id INTEGER PRIMARY KEY, zone_id INTEGER NOT NULL, " + "hash STRING NOT NULL COLLATE NOCASE, " + "owner STRING NOT NULL COLLATE NOCASE, " + "ttl INTEGER NOT NULL, rdtype STRING NOT NULL COLLATE NOCASE, " + "rdata STRING NOT NULL)", + "CREATE INDEX nsec3_byhash ON nsec3 (hash)", + NULL +}; + +const char* const q_zone_str = "SELECT id FROM zones WHERE name=?1 AND rdclass = ?2"; + +const char* const q_any_str = "SELECT rdtype, ttl, sigtype, rdata " + "FROM records WHERE zone_id=?1 AND name=?2"; + +/* TODO: Prune the statements, not everything will be needed maybe? +const char* const q_record_str = "SELECT rdtype, ttl, sigtype, rdata " + "FROM records WHERE zone_id=?1 AND name=?2 AND " + "((rdtype=?3 OR sigtype=?3) OR " + "(rdtype='CNAME' OR sigtype='CNAME') OR " + "(rdtype='NS' OR sigtype='NS'))"; + +const char* const q_addrs_str = "SELECT rdtype, ttl, sigtype, rdata " + "FROM records WHERE zone_id=?1 AND name=?2 AND " + "(rdtype='A' OR sigtype='A' OR rdtype='AAAA' OR sigtype='AAAA')"; + +const char* const q_referral_str = "SELECT rdtype, ttl, sigtype, rdata FROM " + "records WHERE zone_id=?1 AND name=?2 AND" + "(rdtype='NS' OR sigtype='NS' OR rdtype='DS' OR sigtype='DS' OR " + "rdtype='DNAME' OR sigtype='DNAME')"; + +const char* const q_count_str = "SELECT COUNT(*) FROM records " + "WHERE zone_id=?1 AND rname LIKE (?2 || '%');"; + +const char* const q_previous_str = "SELECT name FROM records " + "WHERE zone_id=?1 AND rdtype = 'NSEC' AND " + "rname < $2 ORDER BY rname DESC LIMIT 1"; + +const char* const q_nsec3_str = "SELECT rdtype, ttl, rdata FROM nsec3 " + "WHERE zone_id = ?1 AND hash = $2"; + +const char* const q_prevnsec3_str = "SELECT hash FROM nsec3 " + "WHERE zone_id = ?1 AND hash <= $2 ORDER BY hash DESC LIMIT 1"; + */ + +sqlite3_stmt* +prepare(sqlite3* const db, const char* const statement) { + sqlite3_stmt* prepared = NULL; + if (sqlite3_prepare_v2(db, statement, -1, &prepared, NULL) != SQLITE_OK) { + 
isc_throw(SQLite3Error, "Could not prepare SQLite statement: " << + statement); + } + return (prepared); +} + +void +checkAndSetupSchema(Initializer* initializer) { + sqlite3* const db = initializer->params_.db_; + + sqlite3_stmt* prepared = NULL; + if (sqlite3_prepare_v2(db, "SELECT version FROM schema_version", -1, + &prepared, NULL) == SQLITE_OK && + sqlite3_step(prepared) == SQLITE_ROW) { + initializer->params_.version_ = sqlite3_column_int(prepared, 0); + sqlite3_finalize(prepared); + } else { + logger.info(DATASRC_SQLITE_SETUP); + if (prepared != NULL) { + sqlite3_finalize(prepared); + } + for (int i = 0; SCHEMA_LIST[i] != NULL; ++i) { + if (sqlite3_exec(db, SCHEMA_LIST[i], NULL, NULL, NULL) != + SQLITE_OK) { + isc_throw(SQLite3Error, + "Failed to set up schema " << SCHEMA_LIST[i]); + } + } + } + + initializer->params_.q_zone_ = prepare(db, q_zone_str); + initializer->params_.q_any_ = prepare(db, q_any_str); + /* TODO: Yet unneeded statements + initializer->params_.q_record_ = prepare(db, q_record_str); + initializer->params_.q_addrs_ = prepare(db, q_addrs_str); + initializer->params_.q_referral_ = prepare(db, q_referral_str); + initializer->params_.q_count_ = prepare(db, q_count_str); + initializer->params_.q_previous_ = prepare(db, q_previous_str); + initializer->params_.q_nsec3_ = prepare(db, q_nsec3_str); + initializer->params_.q_prevnsec3_ = prepare(db, q_prevnsec3_str); + */ +} + +} + +void +SQLite3Database::open(const std::string& name) { + LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_CONNOPEN).arg(name); + if (dbparameters_->db_ != NULL) { + // There shouldn't be a way to trigger this anyway + isc_throw(DataSourceError, "Duplicate SQLite open with " << name); + } + + Initializer initializer; + + if (sqlite3_open(name.c_str(), &initializer.params_.db_) != 0) { + isc_throw(SQLite3Error, "Cannot open SQLite database file: " << name); + } + + checkAndSetupSchema(&initializer); + initializer.move(dbparameters_); +} +
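The open() path above relies on the Initializer class for its strong exception guarantee: the Initializer's destructor releases whatever was acquired so far, and move() transfers everything to the real object only once setup has fully succeeded. The shape of that pattern, reduced to a mock resource (no sqlite3 involved):

```cpp
#include <cassert>
#include <stdexcept>

// Mock "resource" standing in for the sqlite3* / sqlite3_stmt* handles.
struct Params {
    int* handle = nullptr;
};

// Same shape as the Initializer in sqlite3_accessor.cc: the destructor
// frees whatever was acquired, and move() hands everything to the caller,
// clearing the local copy so the destructor then frees nothing.
class Initializer {
public:
    ~Initializer() {
        delete params_.handle;   // no-op after a successful move()
    }
    void move(Params* dst) {
        *dst = params_;
        params_ = Params();      // clear, so we don't double-free
    }
    Params params_;
};

Params openSafely(bool fail_midway) {
    Params result;
    Initializer init;
    init.params_.handle = new int(42);            // acquire the resource
    if (fail_midway) {
        // Initializer's destructor still frees the handle here.
        throw std::runtime_error("setup failed");
    }
    init.move(&result);                           // commit: transfer ownership
    return result;
}
```

If any step between acquisition and move() throws, the partially built state is cleaned up automatically; move() itself cannot throw, so the commit is atomic.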
+SQLite3Database::~SQLite3Database() { + LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_DROPCONN); + if (dbparameters_->db_ != NULL) { + close(); + } + delete dbparameters_; +} + +void +SQLite3Database::close(void) { + LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_CONNCLOSE); + if (dbparameters_->db_ == NULL) { + isc_throw(DataSourceError, + "SQLite data source is being closed before open"); + } + + // XXX: sqlite3_finalize() could fail. What should we do in that case? + sqlite3_finalize(dbparameters_->q_zone_); + dbparameters_->q_zone_ = NULL; + + sqlite3_finalize(dbparameters_->q_any_); + dbparameters_->q_any_ = NULL; + + /* TODO: Once they are needed or not, uncomment or drop + sqlite3_finalize(dbparameters->q_record_); + dbparameters->q_record_ = NULL; + + sqlite3_finalize(dbparameters->q_addrs_); + dbparameters->q_addrs_ = NULL; + + sqlite3_finalize(dbparameters->q_referral_); + dbparameters->q_referral_ = NULL; + + sqlite3_finalize(dbparameters->q_count_); + dbparameters->q_count_ = NULL; + + sqlite3_finalize(dbparameters->q_previous_); + dbparameters->q_previous_ = NULL; + + sqlite3_finalize(dbparameters->q_prevnsec3_); + dbparameters->q_prevnsec3_ = NULL; + + sqlite3_finalize(dbparameters->q_nsec3_); + dbparameters->q_nsec3_ = NULL; + */ + + sqlite3_close(dbparameters_->db_); + dbparameters_->db_ = NULL; +} + +std::pair +SQLite3Database::getZone(const isc::dns::Name& name) const { + int rc; + + // Take the statement (simple SELECT id FROM zones WHERE...) 
+ // and prepare it (bind the parameters to it) + sqlite3_reset(dbparameters_->q_zone_); + rc = sqlite3_bind_text(dbparameters_->q_zone_, 1, name.toText().c_str(), + -1, SQLITE_TRANSIENT); + if (rc != SQLITE_OK) { + isc_throw(SQLite3Error, "Could not bind " << name << + " to SQL statement (zone)"); + } + rc = sqlite3_bind_text(dbparameters_->q_zone_, 2, class_.c_str(), -1, + SQLITE_STATIC); + if (rc != SQLITE_OK) { + isc_throw(SQLite3Error, "Could not bind " << class_ << + " to SQL statement (zone)"); + } + + // Get the data there and see if it found anything + rc = sqlite3_step(dbparameters_->q_zone_); + std::pair<bool, int> result; + if (rc == SQLITE_ROW) { + result = std::pair<bool, int>(true, + sqlite3_column_int(dbparameters_-> + q_zone_, 0)); + } else { + result = std::pair<bool, int>(false, 0); + } + // Free resources + sqlite3_reset(dbparameters_->q_zone_); + + return (result); +} + +void +SQLite3Database::searchForRecords(int zone_id, const std::string& name) { + resetSearch(); + if (sqlite3_bind_int(dbparameters_->q_any_, 1, zone_id) != SQLITE_OK) { + isc_throw(DataSourceError, + "Error in sqlite3_bind_int() for zone_id " << + zone_id << ": " << sqlite3_errmsg(dbparameters_->db_)); + } + // use transient since name is a ref and may disappear + if (sqlite3_bind_text(dbparameters_->q_any_, 2, name.c_str(), -1, + SQLITE_TRANSIENT) != SQLITE_OK) { + isc_throw(DataSourceError, + "Error in sqlite3_bind_text() for name " << + name << ": " << sqlite3_errmsg(dbparameters_->db_)); + } +} + +namespace { +// This helper function converts from the unsigned char* type (used by +// sqlite3) to char* (wanted by std::string). Technically these types +// might not be directly convertible. +// In case sqlite3_column_text() returns NULL, we just make it an +// empty string.
+// The sqlite3parameters value is only used to check the error code if +// ucp == NULL +const char* +convertToPlainChar(const unsigned char* ucp, + SQLite3Parameters* dbparameters) { + if (ucp == NULL) { + // The field can really be NULL, in which case we return an + // empty string, or sqlite may have run out of memory, in + // which case we raise an error + if (dbparameters != NULL && + sqlite3_errcode(dbparameters->db_) == SQLITE_NOMEM) { + isc_throw(DataSourceError, + "Sqlite3 backend encountered a memory allocation " + "error in sqlite3_column_text()"); + } else { + return (""); + } + } + const void* p = ucp; + return (static_cast(p)); +} +} + +bool +SQLite3Database::getNextRecord(std::string columns[], size_t column_count) { + if (column_count != COLUMN_COUNT) { + isc_throw(DataSourceError, + "Datasource backend caller did not pass a column array " + "of size " << COLUMN_COUNT << " to getNextRecord()"); + } + + sqlite3_stmt* current_stmt = dbparameters_->q_any_; + const int rc = sqlite3_step(current_stmt); + + if (rc == SQLITE_ROW) { + for (int column = 0; column < column_count; ++column) { + try { + columns[column] = convertToPlainChar(sqlite3_column_text( + current_stmt, column), + dbparameters_); + } catch (const std::bad_alloc&) { + isc_throw(DataSourceError, + "bad_alloc in Sqlite3Connection::getNextRecord"); + } + } + return (true); + } else if (rc == SQLITE_DONE) { + // reached the end of matching rows + resetSearch(); + return (false); + } + isc_throw(DataSourceError, "Unexpected failure in sqlite3_step: " << + sqlite3_errmsg(dbparameters_->db_)); + // Compilers might not realize isc_throw always throws + return (false); +} + +void +SQLite3Database::resetSearch() { + sqlite3_reset(dbparameters_->q_any_); + sqlite3_clear_bindings(dbparameters_->q_any_); +} + +} +} diff --git a/src/lib/datasrc/sqlite3_accessor.h b/src/lib/datasrc/sqlite3_accessor.h new file mode 100644 index 0000000000..4c2ec8bfd3 --- /dev/null +++ b/src/lib/datasrc/sqlite3_accessor.h 
@@ -0,0 +1,160 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + + +#ifndef __DATASRC_SQLITE3_ACCESSOR_H +#define __DATASRC_SQLITE3_ACCESSOR_H + +#include + +#include + +#include + +namespace isc { +namespace dns { +class RRClass; +} + +namespace datasrc { + +/** + * \brief Low-level database error + * + * This exception is thrown when the SQLite library complains about something. + * It might mean corrupt database file, invalid request or that something is + * rotten in the library. + */ +class SQLite3Error : public Exception { +public: + SQLite3Error(const char* file, size_t line, const char* what) : + isc::Exception(file, line, what) {} +}; + +struct SQLite3Parameters; + +/** + * \brief Concrete implementation of DatabaseAccessor for SQLite3 databases + * + * This opens one database file with our schema and serves data from there. + * According to the design, it doesn't interpret the data in any way, it just + * provides unified access to the DB. + */ +class SQLite3Database : public DatabaseAccessor { +public: + /** + * \brief Constructor + * + * This opens the database and becomes ready to serve data from there. 
+ * + * \exception SQLite3Error will be thrown if the given database file + * doesn't work (it is broken, doesn't exist and can't be created, etc). + * + * \param filename The database file to be used. + * \param rrclass Which class of data it should serve (while the database + * file can contain multiple classes of data, a single database can + * provide only one class). + */ + SQLite3Database(const std::string& filename, + const isc::dns::RRClass& rrclass); + /** + * \brief Destructor + * + * Closes the database. + */ + ~SQLite3Database(); + /** + * \brief Look up a zone + * + * This implements the getZone from DatabaseAccessor and looks up a zone + * in the data. It looks for a zone with the exact given origin and class + * passed to the constructor. + * + * \exception SQLite3Error if something about the database is broken. + * + * \param name The name of the zone to look up + * \return A pair whose first element indicates whether the lookup was + * successful; if so, the second element holds the zone id. + */ + virtual std::pair<bool, int> getZone(const isc::dns::Name& name) const; + + /** + * \brief Start a new search for the given name in the given zone. + * + * This implements the searchForRecords from DatabaseConnection. + * + * \exception DataSourceError when sqlite3_bind_int() or + * sqlite3_bind_text() fails + * + * \param zone_id The zone to search in, as returned by getZone() + * \param name The name to find records for + */ + virtual void searchForRecords(int zone_id, const std::string& name); + + /** + * \brief Retrieve the next record from the search started with + * searchForRecords + * + * This implements the getNextRecord from DatabaseConnection. + * See the documentation there for more information. + * + * If this method raises an exception, the contents of columns are undefined.
+ * + * \exception DataSourceError if there is an error returned by sqlite3_step() + * When this exception is raised, the current + * search as initialized by searchForRecords() is + * NOT reset, and the caller is expected to take + * care of that. + * \param columns An array of column_count strings; on success the fields + * of the record are stored in it (in the order rdtype, ttl, sigtype, + * and rdata). If there was no data (i.e. if this call returns + * false), the array is untouched. + * \return true if there was a next record, false if there was not + */ + virtual bool getNextRecord(std::string columns[], size_t column_count); + + /** + * \brief Resets any state created by searchForRecords + * + * This implements the resetSearch from DatabaseConnection. + * See the documentation there for more information. + * + * This function never throws. + */ + virtual void resetSearch(); + + /// The SQLite3 implementation of this method returns a string starting + /// with a fixed prefix of "sqlite3_" followed by the DB file name + /// removing any path name. For example, for the DB file + /// /somewhere/in/the/system/bind10.sqlite3, this method will return + /// "sqlite3_bind10.sqlite3".
+ virtual const std::string& getDBName() const { return (database_name_); } + +private: + /// \brief Private database data + SQLite3Parameters* dbparameters_; + /// \brief The class for which the queries are done + const std::string class_; + /// \brief Opens the database + void open(const std::string& filename); + /// \brief Closes the database + void close(); + const std::string database_name_; +}; + +} +} + +#endif diff --git a/src/lib/datasrc/tests/Makefile.am b/src/lib/datasrc/tests/Makefile.am index ffedb75f02..1a65f82849 100644 --- a/src/lib/datasrc/tests/Makefile.am +++ b/src/lib/datasrc/tests/Makefile.am @@ -28,6 +28,8 @@ run_unittests_SOURCES += rbtree_unittest.cc run_unittests_SOURCES += zonetable_unittest.cc run_unittests_SOURCES += memory_datasrc_unittest.cc run_unittests_SOURCES += logger_unittest.cc +run_unittests_SOURCES += database_unittest.cc +run_unittests_SOURCES += sqlite3_accessor_unittest.cc run_unittests_CPPFLAGS = $(AM_CPPFLAGS) $(GTEST_INCLUDES) run_unittests_LDFLAGS = $(AM_LDFLAGS) $(GTEST_LDFLAGS) diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc new file mode 100644 index 0000000000..8fad14b377 --- /dev/null +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -0,0 +1,943 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. 
IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include + +#include +#include +#include +#include + +#include +#include +#include + +#include + +#include + +using namespace isc::datasrc; +using namespace std; +using namespace boost; +using isc::dns::Name; + +namespace { + +/* + * A virtual database database that pretends it contains single zone -- + * example.org. + */ +class MockAccessor : public DatabaseAccessor { +public: + MockAccessor() : search_running_(false), + database_name_("mock_database") + { + fillData(); + } + + virtual std::pair getZone(const Name& name) const { + if (name == Name("example.org")) { + return (std::pair(true, 42)); + } else { + return (std::pair(false, 0)); + } + } + + virtual void searchForRecords(int zone_id, const std::string& name) { + search_running_ = true; + + // 'hardcoded' name to trigger exceptions (for testing + // the error handling of find() (the other on is below in + // if the name is "exceptiononsearch" it'll raise an exception here + if (name == "dsexception.in.search.") { + isc_throw(DataSourceError, "datasource exception on search"); + } else if (name == "iscexception.in.search.") { + isc_throw(isc::Exception, "isc exception on search"); + } else if (name == "basicexception.in.search.") { + throw std::exception(); + } + searched_name_ = name; + + // we're not aiming for efficiency in this test, simply + // copy the relevant vector from records + cur_record = 0; + if (zone_id == 42) { + if (records.count(name) > 0) { + cur_name = records.find(name)->second; + } else { + cur_name.clear(); + } + } else { + cur_name.clear(); + } + }; + + virtual bool getNextRecord(std::string columns[], size_t column_count) { + if (searched_name_ == 
"dsexception.in.getnext.") {
+            isc_throw(DataSourceError, "datasource exception on getnextrecord");
+        } else if (searched_name_ == "iscexception.in.getnext.") {
+            isc_throw(isc::Exception, "isc exception on getnextrecord");
+        } else if (searched_name_ == "basicexception.in.getnext.") {
+            throw std::exception();
+        }
+
+        if (column_count != DatabaseAccessor::COLUMN_COUNT) {
+            isc_throw(DataSourceError, "Wrong column count in getNextRecord");
+        }
+        if (cur_record < cur_name.size()) {
+            for (size_t i = 0; i < column_count; ++i) {
+                columns[i] = cur_name[cur_record][i];
+            }
+            cur_record++;
+            return (true);
+        } else {
+            resetSearch();
+            return (false);
+        }
+    };
+
+    virtual void resetSearch() {
+        search_running_ = false;
+    };
+
+    bool searchRunning() const {
+        return (search_running_);
+    }
+
+    virtual const std::string& getDBName() const {
+        return (database_name_);
+    }
+private:
+    std::map<std::string, std::vector< std::vector<std::string> > > records;
+    // used as internal index for getNextRecord()
+    size_t cur_record;
+    // used as temporary storage after searchForRecords() and during
+    // getNextRecord() calls, as well as during the building of the
+    // fake data
+    std::vector< std::vector<std::string> > cur_name;
+
+    // This boolean is used to make sure find() calls resetSearch
+    // when it encounters an error
+    bool search_running_;
+
+    // We store the name passed to searchForRecords, so we can
+    // hardcode some exceptions into getNextRecord
+    std::string searched_name_;
+
+    const std::string database_name_;
+
+    // Adds one record to the current name in the database
+    // The actual data will not be added to 'records' until
+    // addCurName() is called
+    void addRecord(const std::string& name,
+                   const std::string& type,
+                   const std::string& sigtype,
+                   const std::string& rdata) {
+        std::vector<std::string> columns;
+        columns.push_back(name);
+        columns.push_back(type);
+        columns.push_back(sigtype);
+        columns.push_back(rdata);
+        cur_name.push_back(columns);
+    }
+
+    // Adds all records we just built with calls to addRecords
+    // to the actual fake database. This will clear cur_name,
+    // so we can immediately start adding new records.
+    void addCurName(const std::string& name) {
+        ASSERT_EQ(0, records.count(name));
+        records[name] = cur_name;
+        cur_name.clear();
+    }
+
+    // Fills the database with zone data.
+    // This method constructs a number of resource records (with addRecord),
+    // which will all be added for one domain name to the fake database
+    // (with addCurName). So for instance the first set of calls creates
+    // data for the name 'www.example.org', which will consist of one A RRset
+    // of one record, and one AAAA RRset of two records.
+    // The order in which they are added is the order in which getNextRecord()
+    // will return them (so we can test whether find() etc. support data that
+    // might not come in 'normal' order).
+    // It shall immediately fail if you try to add the same name twice.
+    void fillData() {
+        // some plain data
+        addRecord("A", "3600", "", "192.0.2.1");
+        addRecord("AAAA", "3600", "", "2001:db8::1");
+        addRecord("AAAA", "3600", "", "2001:db8::2");
+        addCurName("www.example.org.");
+
+        addRecord("A", "3600", "", "192.0.2.1");
+        addRecord("AAAA", "3600", "", "2001:db8::1");
+        addRecord("A", "3600", "", "192.0.2.2");
+        addCurName("www2.example.org.");
+
+        addRecord("CNAME", "3600", "", "www.example.org.");
+        addCurName("cname.example.org.");
+
+        // some DNSSEC-'signed' data
+        addRecord("A", "3600", "", "192.0.2.1");
+        addRecord("RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE");
+        addRecord("RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12346 example.org. FAKEFAKEFAKE");
+        addRecord("AAAA", "3600", "", "2001:db8::1");
+        addRecord("AAAA", "3600", "", "2001:db8::2");
+        addRecord("RRSIG", "3600", "", "AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE");
+        addCurName("signed1.example.org.");
+        addRecord("CNAME", "3600", "", "www.example.org.");
+        addRecord("RRSIG", "3600", "", "CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE");
+        addCurName("signedcname1.example.org.");
+        // special case might fail; sig is for cname, which isn't there (should be ignored)
+        // (ignoring of 'normal' other type is done above by www.)
+        addRecord("A", "3600", "", "192.0.2.1");
+        addRecord("RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE");
+        addRecord("RRSIG", "3600", "", "CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE");
+        addCurName("acnamesig1.example.org.");
+
+        // let's pretend we have a database that is not careful
+        // about the order in which it returns data
+        addRecord("RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE");
+        addRecord("AAAA", "3600", "", "2001:db8::2");
+        addRecord("RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12346 example.org. FAKEFAKEFAKE");
+        addRecord("A", "3600", "", "192.0.2.1");
+        addRecord("RRSIG", "3600", "", "AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE");
+        addRecord("AAAA", "3600", "", "2001:db8::1");
+        addCurName("signed2.example.org.");
+        addRecord("RRSIG", "3600", "", "CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE");
+        addRecord("CNAME", "3600", "", "www.example.org.");
+        addCurName("signedcname2.example.org.");
+
+        addRecord("RRSIG", "3600", "", "CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE");
+        addRecord("A", "3600", "", "192.0.2.1");
+        addRecord("RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE");
+        addCurName("acnamesig2.example.org.");
+
+        addRecord("RRSIG", "3600", "", "CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE");
+        addRecord("RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE");
+        addRecord("A", "3600", "", "192.0.2.1");
+        addCurName("acnamesig3.example.org.");
+
+        addRecord("A", "3600", "", "192.0.2.1");
+        addRecord("A", "360", "", "192.0.2.2");
+        addCurName("ttldiff1.example.org.");
+        addRecord("A", "360", "", "192.0.2.1");
+        addRecord("A", "3600", "", "192.0.2.2");
+        addCurName("ttldiff2.example.org.");
+
+        // also add some intentionally bad data
+        addRecord("A", "3600", "", "192.0.2.1");
+        addRecord("CNAME", "3600", "", "www.example.org.");
+        addCurName("badcname1.example.org.");
+
+        addRecord("CNAME", "3600", "", "www.example.org.");
+        addRecord("A", "3600", "", "192.0.2.1");
+        addCurName("badcname2.example.org.");
+
+        addRecord("CNAME", "3600", "", "www.example.org.");
+        addRecord("CNAME", "3600", "", "www.example2.org.");
+        addCurName("badcname3.example.org.");
+
+        addRecord("A", "3600", "", "bad");
+        addCurName("badrdata.example.org.");
+
+        addRecord("BAD_TYPE", "3600", "", "192.0.2.1");
+        addCurName("badtype.example.org.");
+
+        addRecord("A", "badttl", "", "192.0.2.1");
+        addCurName("badttl.example.org.");
+
+        addRecord("A", "badttl", "", "192.0.2.1");
+        addRecord("RRSIG", "3600", "", "A 5 3 3600 somebaddata 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE");
+        addCurName("badsig.example.org.");
+
+        addRecord("A", "3600", "", "192.0.2.1");
+        addRecord("RRSIG", "3600", "TXT", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE");
+        addCurName("badsigtype.example.org.");
+
+        // Data for testing delegation (with NS and DNAME)
+        addRecord("NS", "3600", "", "ns.example.com.");
+        addRecord("NS", "3600", "", "ns.delegation.example.org.");
+        addRecord("RRSIG", "3600", "", "NS 5 3 3600 20000101000000 "
+                  "20000201000000 12345 example.org. FAKEFAKEFAKE");
+        addCurName("delegation.example.org.");
+        addRecord("A", "3600", "", "192.0.2.1");
+        addCurName("ns.delegation.example.org.");
+        addRecord("A", "3600", "", "192.0.2.1");
+        addCurName("deep.below.delegation.example.org.");
+
+        addRecord("A", "3600", "", "192.0.2.1");
+        addRecord("DNAME", "3600", "", "dname.example.com.");
+        addRecord("RRSIG", "3600", "", "DNAME 5 3 3600 20000101000000 "
+                  "20000201000000 12345 example.org. FAKEFAKEFAKE");
+        addCurName("dname.example.org.");
+        addRecord("A", "3600", "", "192.0.2.1");
+        addCurName("below.dname.example.org.");
+
+        // Broken NS
+        addRecord("A", "3600", "", "192.0.2.1");
+        addRecord("NS", "3600", "", "ns.example.com.");
+        addCurName("brokenns1.example.org.");
+        addRecord("NS", "3600", "", "ns.example.com.");
+        addRecord("A", "3600", "", "192.0.2.1");
+        addCurName("brokenns2.example.org.");
+
+        // Now double DNAME, to test failure mode
+        addRecord("DNAME", "3600", "", "dname1.example.com.");
+        addRecord("DNAME", "3600", "", "dname2.example.com.");
+        addCurName("baddname.example.org.");
+
+        // Put some data into apex (including NS) so we can check our NS
+        // doesn't break anything
+        addRecord("NS", "3600", "", "ns.example.com.");
+        addRecord("A", "3600", "", "192.0.2.1");
+        addRecord("RRSIG", "3600", "", "NS 5 3 3600 20000101000000 "
+                  "20000201000000 12345 example.org. FAKEFAKEFAKE");
+        addCurName("example.org.");
+    }
+};
+
+class DatabaseClientTest : public ::testing::Test {
+public:
+    DatabaseClientTest() {
+        createClient();
+    }
+    /*
+     * We initialize the client from a function, so we can call it multiple
+     * times per test.
+     */
+    void createClient() {
+        current_database_ = new MockAccessor();
+        client_.reset(new DatabaseClient(shared_ptr<DatabaseAccessor>(
+            current_database_)));
+    }
+    // Will be deleted by client_, just keep the current value for comparison.
+    MockAccessor* current_database_;
+    shared_ptr<DatabaseClient> client_;
+    const std::string database_name_;
+
+    /**
+     * Check the zone finder is a valid one and references the zone ID and
+     * database available here.
+     */
+    void checkZoneFinder(const DataSourceClient::FindResult& zone) {
+        ASSERT_NE(ZoneFinderPtr(), zone.zone_finder) << "No zone finder";
+        shared_ptr<DatabaseClient::Finder> finder(
+            dynamic_pointer_cast<DatabaseClient::Finder>(zone.zone_finder));
+        ASSERT_NE(shared_ptr<DatabaseClient::Finder>(), finder) <<
+            "Wrong type of finder";
+        EXPECT_EQ(42, finder->zone_id());
+        EXPECT_EQ(current_database_, &finder->database());
+    }
+
+    shared_ptr<DatabaseClient::Finder> getFinder() {
+        DataSourceClient::FindResult zone(
+            client_->findZone(Name("example.org")));
+        EXPECT_EQ(result::SUCCESS, zone.code);
+        shared_ptr<DatabaseClient::Finder> finder(
+            dynamic_pointer_cast<DatabaseClient::Finder>(zone.zone_finder));
+        EXPECT_EQ(42, finder->zone_id());
+        EXPECT_FALSE(current_database_->searchRunning());
+
+        return (finder);
+    }
+
+    std::vector<std::string> expected_rdatas_;
+    std::vector<std::string> expected_sig_rdatas_;
+};
+
+TEST_F(DatabaseClientTest, zoneNotFound) {
+    DataSourceClient::FindResult zone(client_->findZone(Name("example.com")));
+    EXPECT_EQ(result::NOTFOUND, zone.code);
+}
+
+TEST_F(DatabaseClientTest, exactZone) {
+    DataSourceClient::FindResult zone(client_->findZone(Name("example.org")));
+    EXPECT_EQ(result::SUCCESS, zone.code);
+    checkZoneFinder(zone);
+}
+
+TEST_F(DatabaseClientTest, superZone) {
+    DataSourceClient::FindResult zone(client_->findZone(Name(
+        "sub.example.org")));
+    EXPECT_EQ(result::PARTIALMATCH, zone.code);
+    checkZoneFinder(zone);
+}
+
+TEST_F(DatabaseClientTest, noAccessorException) {
+    // We need a dummy variable here; some compilers would regard it a mere
+    // declaration instead of an instantiation and make the test fail.
+    EXPECT_THROW(DatabaseClient dummy((shared_ptr<DatabaseAccessor>())),
+                 isc::InvalidParameter);
+}
+
+namespace {
+// checks if the given rrset matches the
+// given name, class, type and rdatas
+void
+checkRRset(isc::dns::ConstRRsetPtr rrset,
+           const isc::dns::Name& name,
+           const isc::dns::RRClass& rrclass,
+           const isc::dns::RRType& rrtype,
+           const isc::dns::RRTTL& rrttl,
+           const std::vector<std::string>& rdatas) {
+    isc::dns::RRsetPtr expected_rrset(
+        new isc::dns::RRset(name, rrclass, rrtype, rrttl));
+    for (unsigned int i = 0; i < rdatas.size(); ++i) {
+        expected_rrset->addRdata(
+            isc::dns::rdata::createRdata(rrtype, rrclass,
+                                         rdatas[i]));
+    }
+    isc::testutils::rrsetCheck(expected_rrset, rrset);
+}
+
+void
+doFindTest(shared_ptr<DatabaseClient::Finder> finder,
+           const isc::dns::Name& name,
+           const isc::dns::RRType& type,
+           const isc::dns::RRType& expected_type,
+           const isc::dns::RRTTL expected_ttl,
+           ZoneFinder::Result expected_result,
+           const std::vector<std::string>& expected_rdatas,
+           const std::vector<std::string>& expected_sig_rdatas,
+           const isc::dns::Name& expected_name = isc::dns::Name::ROOT_NAME(),
+           const ZoneFinder::FindOptions options = ZoneFinder::FIND_DEFAULT)
+{
+    SCOPED_TRACE("doFindTest " + name.toText() + " " + type.toText());
+    ZoneFinder::FindResult result =
+        finder->find(name, type, NULL, options);
+    ASSERT_EQ(expected_result, result.code) << name << " " << type;
+    if (expected_rdatas.size() > 0) {
+        checkRRset(result.rrset, expected_name != Name(".") ? expected_name :
+                   name, finder->getClass(), expected_type, expected_ttl,
+                   expected_rdatas);
+
+        if (expected_sig_rdatas.size() > 0) {
+            checkRRset(result.rrset->getRRsig(), expected_name != Name(".") ? 
+ expected_name : name, finder->getClass(), + isc::dns::RRType::RRSIG(), expected_ttl, + expected_sig_rdatas); + } else { + EXPECT_EQ(isc::dns::RRsetPtr(), result.rrset->getRRsig()); + } + } else { + EXPECT_EQ(isc::dns::RRsetPtr(), result.rrset); + } +} +} // end anonymous namespace + +TEST_F(DatabaseClientTest, find) { + shared_ptr finder(getFinder()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + doFindTest(finder, isc::dns::Name("www.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), + ZoneFinder::SUCCESS, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_rdatas_.push_back("192.0.2.2"); + doFindTest(finder, isc::dns::Name("www2.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), + ZoneFinder::SUCCESS, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("2001:db8::1"); + expected_rdatas_.push_back("2001:db8::2"); + doFindTest(finder, isc::dns::Name("www.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + isc::dns::RRTTL(3600), + ZoneFinder::SUCCESS, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + doFindTest(finder, isc::dns::Name("www.example.org."), + isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), + isc::dns::RRTTL(3600), + ZoneFinder::NXRRSET, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("www.example.org."); + doFindTest(finder, isc::dns::Name("cname.example.org."), + 
isc::dns::RRType::A(), isc::dns::RRType::CNAME(), + isc::dns::RRTTL(3600), + ZoneFinder::CNAME, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("www.example.org."); + doFindTest(finder, isc::dns::Name("cname.example.org."), + isc::dns::RRType::CNAME(), isc::dns::RRType::CNAME(), + isc::dns::RRTTL(3600), + ZoneFinder::SUCCESS, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + doFindTest(finder, isc::dns::Name("doesnotexist.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), + ZoneFinder::NXDOMAIN, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12346 example.org. FAKEFAKEFAKE"); + doFindTest(finder, isc::dns::Name("signed1.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), + ZoneFinder::SUCCESS, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("2001:db8::1"); + expected_rdatas_.push_back("2001:db8::2"); + expected_sig_rdatas_.push_back("AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); + doFindTest(finder, isc::dns::Name("signed1.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + isc::dns::RRTTL(3600), + ZoneFinder::SUCCESS, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + doFindTest(finder, isc::dns::Name("signed1.example.org."), + isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), + isc::dns::RRTTL(3600), + ZoneFinder::NXRRSET, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("www.example.org."); + expected_sig_rdatas_.push_back("CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + doFindTest(finder, isc::dns::Name("signedcname1.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::CNAME(), + isc::dns::RRTTL(3600), + ZoneFinder::CNAME, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12346 example.org. FAKEFAKEFAKE"); + doFindTest(finder, isc::dns::Name("signed2.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), + ZoneFinder::SUCCESS, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("2001:db8::2"); + expected_rdatas_.push_back("2001:db8::1"); + expected_sig_rdatas_.push_back("AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); + doFindTest(finder, isc::dns::Name("signed2.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + isc::dns::RRTTL(3600), + ZoneFinder::SUCCESS, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + doFindTest(finder, isc::dns::Name("signed2.example.org."), + isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), + isc::dns::RRTTL(3600), + ZoneFinder::NXRRSET, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("www.example.org."); + expected_sig_rdatas_.push_back("CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + doFindTest(finder, isc::dns::Name("signedcname2.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::CNAME(), + isc::dns::RRTTL(3600), + ZoneFinder::CNAME, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + doFindTest(finder, isc::dns::Name("acnamesig1.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), + ZoneFinder::SUCCESS, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); + doFindTest(finder, isc::dns::Name("acnamesig2.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), + ZoneFinder::SUCCESS, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + doFindTest(finder, isc::dns::Name("acnamesig3.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), + ZoneFinder::SUCCESS, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_rdatas_.push_back("192.0.2.2"); + doFindTest(finder, isc::dns::Name("ttldiff1.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(360), + ZoneFinder::SUCCESS, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_rdatas_.push_back("192.0.2.2"); + doFindTest(finder, isc::dns::Name("ttldiff2.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(360), + ZoneFinder::SUCCESS, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + + EXPECT_THROW(finder->find(isc::dns::Name("badcname1.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_FALSE(current_database_->searchRunning()); + EXPECT_THROW(finder->find(isc::dns::Name("badcname2.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_FALSE(current_database_->searchRunning()); + 
EXPECT_THROW(finder->find(isc::dns::Name("badcname3.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_FALSE(current_database_->searchRunning()); + EXPECT_THROW(finder->find(isc::dns::Name("badrdata.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_FALSE(current_database_->searchRunning()); + EXPECT_THROW(finder->find(isc::dns::Name("badtype.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_FALSE(current_database_->searchRunning()); + EXPECT_THROW(finder->find(isc::dns::Name("badttl.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_FALSE(current_database_->searchRunning()); + EXPECT_THROW(finder->find(isc::dns::Name("badsig.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_FALSE(current_database_->searchRunning()); + + // Trigger the hardcoded exceptions and see if find() has cleaned up + EXPECT_THROW(finder->find(isc::dns::Name("dsexception.in.search."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_FALSE(current_database_->searchRunning()); + EXPECT_THROW(finder->find(isc::dns::Name("iscexception.in.search."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_FALSE(current_database_->searchRunning()); + EXPECT_THROW(finder->find(isc::dns::Name("basicexception.in.search."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + std::exception); + EXPECT_FALSE(current_database_->searchRunning()); + + EXPECT_THROW(finder->find(isc::dns::Name("dsexception.in.getnext."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_FALSE(current_database_->searchRunning()); + EXPECT_THROW(finder->find(isc::dns::Name("iscexception.in.getnext."), + isc::dns::RRType::A(), + NULL, 
ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_FALSE(current_database_->searchRunning()); + EXPECT_THROW(finder->find(isc::dns::Name("basicexception.in.getnext."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + std::exception); + EXPECT_FALSE(current_database_->searchRunning()); + + // This RRSIG has the wrong sigtype field, which should be + // an error if we decide to keep using that field + // Right now the field is ignored, so it does not error + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + doFindTest(finder, isc::dns::Name("badsigtype.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), + ZoneFinder::SUCCESS, + expected_rdatas_, expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); +} + +TEST_F(DatabaseClientTest, findDelegation) { + shared_ptr finder(getFinder()); + + // The apex should not be considered delegation point and we can access + // data + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + doFindTest(finder, isc::dns::Name("example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas_, + expected_sig_rdatas_); + EXPECT_FALSE(current_database_->searchRunning()); + + expected_rdatas_.clear(); + expected_rdatas_.push_back("ns.example.com."); + expected_sig_rdatas_.push_back("NS 5 3 3600 20000101000000 20000201000000 " + "12345 example.org. 
FAKEFAKEFAKE");
+    doFindTest(finder, isc::dns::Name("example.org."),
+               isc::dns::RRType::NS(), isc::dns::RRType::NS(),
+               isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas_,
+               expected_sig_rdatas_);
+    EXPECT_FALSE(current_database_->searchRunning());
+
+    // Check that when we ask for something below the delegation point, we
+    // get the NS (both when the RRset there exists and when it doesn't)
+    expected_rdatas_.clear();
+    expected_sig_rdatas_.clear();
+    expected_rdatas_.push_back("ns.example.com.");
+    expected_rdatas_.push_back("ns.delegation.example.org.");
+    expected_sig_rdatas_.push_back("NS 5 3 3600 20000101000000 20000201000000 "
+                                   "12345 example.org. FAKEFAKEFAKE");
+    doFindTest(finder, isc::dns::Name("ns.delegation.example.org."),
+               isc::dns::RRType::A(), isc::dns::RRType::NS(),
+               isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas_,
+               expected_sig_rdatas_,
+               isc::dns::Name("delegation.example.org."));
+    EXPECT_FALSE(current_database_->searchRunning());
+    doFindTest(finder, isc::dns::Name("ns.delegation.example.org."),
+               isc::dns::RRType::AAAA(), isc::dns::RRType::NS(),
+               isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas_,
+               expected_sig_rdatas_,
+               isc::dns::Name("delegation.example.org."));
+    doFindTest(finder, isc::dns::Name("deep.below.delegation.example.org."),
+               isc::dns::RRType::AAAA(), isc::dns::RRType::NS(),
+               isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas_,
+               expected_sig_rdatas_,
+               isc::dns::Name("delegation.example.org."));
+    EXPECT_FALSE(current_database_->searchRunning());
+
+    // Even when we check directly at the delegation point, we should get
+    // the NS
+    doFindTest(finder, isc::dns::Name("delegation.example.org."),
+               isc::dns::RRType::AAAA(), isc::dns::RRType::NS(),
+               isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas_,
+               expected_sig_rdatas_);
+    EXPECT_FALSE(current_database_->searchRunning());
+
+    // And when we ask directly for the NS, we should still get delegation
+    doFindTest(finder, isc::dns::Name("delegation.example.org."),
+               isc::dns::RRType::NS(), isc::dns::RRType::NS(),
+               isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas_,
+               expected_sig_rdatas_);
+    EXPECT_FALSE(current_database_->searchRunning());
+
+    // Now test DNAME. If the name is below the delegation point, we should
+    // get the DNAME (a zone with data under a DNAME is invalid, but we test
+    // the behaviour anyway just to make sure)
+    expected_rdatas_.clear();
+    expected_rdatas_.push_back("dname.example.com.");
+    expected_sig_rdatas_.clear();
+    expected_sig_rdatas_.push_back("DNAME 5 3 3600 20000101000000 "
+                                   "20000201000000 12345 example.org. "
+                                   "FAKEFAKEFAKE");
+    doFindTest(finder, isc::dns::Name("below.dname.example.org."),
+               isc::dns::RRType::A(), isc::dns::RRType::DNAME(),
+               isc::dns::RRTTL(3600), ZoneFinder::DNAME, expected_rdatas_,
+               expected_sig_rdatas_, isc::dns::Name("dname.example.org."));
+    EXPECT_FALSE(current_database_->searchRunning());
+    doFindTest(finder, isc::dns::Name("below.dname.example.org."),
+               isc::dns::RRType::AAAA(), isc::dns::RRType::DNAME(),
+               isc::dns::RRTTL(3600), ZoneFinder::DNAME, expected_rdatas_,
+               expected_sig_rdatas_, isc::dns::Name("dname.example.org."));
+    EXPECT_FALSE(current_database_->searchRunning());
+    doFindTest(finder, isc::dns::Name("really.deep.below.dname.example.org."),
+               isc::dns::RRType::AAAA(), isc::dns::RRType::DNAME(),
+               isc::dns::RRTTL(3600), ZoneFinder::DNAME, expected_rdatas_,
+               expected_sig_rdatas_, isc::dns::Name("dname.example.org."));
+    EXPECT_FALSE(current_database_->searchRunning());
+
+    // Asking directly for the DNAME should give SUCCESS
+    doFindTest(finder, isc::dns::Name("dname.example.org."),
+               isc::dns::RRType::DNAME(), isc::dns::RRType::DNAME(),
+               isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas_,
+               expected_sig_rdatas_);
+
+    // But we don't delegate at the DNAME point itself
+    expected_rdatas_.clear();
+    expected_rdatas_.push_back("192.0.2.1");
+    expected_sig_rdatas_.clear();
+    doFindTest(finder, isc::dns::Name("dname.example.org."),
+               isc::dns::RRType::A(), isc::dns::RRType::A(),
+               isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas_,
+               expected_sig_rdatas_);
+    EXPECT_FALSE(current_database_->searchRunning());
+    expected_rdatas_.clear();
+    doFindTest(finder, isc::dns::Name("dname.example.org."),
+               isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(),
+               isc::dns::RRTTL(3600), ZoneFinder::NXRRSET, expected_rdatas_,
+               expected_sig_rdatas_);
+    EXPECT_FALSE(current_database_->searchRunning());
+
+    // This is a broken DNAME; it contains two targets
+    EXPECT_THROW(finder->find(isc::dns::Name("below.baddname.example.org."),
+                              isc::dns::RRType::A(), NULL,
+                              ZoneFinder::FIND_DEFAULT),
+                 DataSourceError);
+    EXPECT_FALSE(current_database_->searchRunning());
+
+    // Broken NS - it lives together with something else
+    EXPECT_FALSE(current_database_->searchRunning());
+    EXPECT_THROW(finder->find(isc::dns::Name("brokenns1.example.org."),
+                              isc::dns::RRType::A(), NULL,
+                              ZoneFinder::FIND_DEFAULT),
+                 DataSourceError);
+    EXPECT_FALSE(current_database_->searchRunning());
+    EXPECT_THROW(finder->find(isc::dns::Name("brokenns2.example.org."),
+                              isc::dns::RRType::A(), NULL,
+                              ZoneFinder::FIND_DEFAULT),
+                 DataSourceError);
+    EXPECT_FALSE(current_database_->searchRunning());
+}
+
+// Glue-OK mode. Go through NS delegations (but not through DNAME) and
+// pretend the delegation is not there.
+TEST_F(DatabaseClientTest, glueOK) { + shared_ptr finder(getFinder()); + + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + doFindTest(finder, isc::dns::Name("ns.delegation.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + isc::dns::RRTTL(3600), ZoneFinder::NXRRSET, + expected_rdatas_, expected_sig_rdatas_, + isc::dns::Name("ns.delegation.example.org."), + ZoneFinder::FIND_GLUE_OK); + doFindTest(finder, isc::dns::Name("nothere.delegation.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + isc::dns::RRTTL(3600), ZoneFinder::NXDOMAIN, + expected_rdatas_, expected_sig_rdatas_, + isc::dns::Name("nothere.delegation.example.org."), + ZoneFinder::FIND_GLUE_OK); + expected_rdatas_.push_back("192.0.2.1"); + doFindTest(finder, isc::dns::Name("ns.delegation.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, + expected_rdatas_, expected_sig_rdatas_, + isc::dns::Name("ns.delegation.example.org."), + ZoneFinder::FIND_GLUE_OK); + expected_rdatas_.clear(); + expected_rdatas_.push_back("ns.example.com."); + expected_rdatas_.push_back("ns.delegation.example.org."); + expected_sig_rdatas_.clear(); + expected_sig_rdatas_.push_back("NS 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. " + "FAKEFAKEFAKE"); + // When we request the NS, it should be SUCCESS, not DELEGATION + // (different in GLUE_OK) + doFindTest(finder, isc::dns::Name("delegation.example.org."), + isc::dns::RRType::NS(), isc::dns::RRType::NS(), + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, + expected_rdatas_, expected_sig_rdatas_, + isc::dns::Name("delegation.example.org."), + ZoneFinder::FIND_GLUE_OK); + expected_rdatas_.clear(); + expected_rdatas_.push_back("dname.example.com."); + expected_sig_rdatas_.clear(); + expected_sig_rdatas_.push_back("DNAME 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. 
" + "FAKEFAKEFAKE"); + doFindTest(finder, isc::dns::Name("below.dname.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::DNAME(), + isc::dns::RRTTL(3600), ZoneFinder::DNAME, expected_rdatas_, + expected_sig_rdatas_, isc::dns::Name("dname.example.org."), + ZoneFinder::FIND_GLUE_OK); + EXPECT_FALSE(current_database_->searchRunning()); + doFindTest(finder, isc::dns::Name("below.dname.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::DNAME(), + isc::dns::RRTTL(3600), ZoneFinder::DNAME, expected_rdatas_, + expected_sig_rdatas_, isc::dns::Name("dname.example.org."), + ZoneFinder::FIND_GLUE_OK); + EXPECT_FALSE(current_database_->searchRunning()); +} + +TEST_F(DatabaseClientTest, getOrigin) { + DataSourceClient::FindResult zone(client_->findZone(Name("example.org"))); + ASSERT_EQ(result::SUCCESS, zone.code); + shared_ptr finder( + dynamic_pointer_cast(zone.zone_finder)); + EXPECT_EQ(42, finder->zone_id()); + EXPECT_EQ(isc::dns::Name("example.org"), finder->getOrigin()); +} + +} diff --git a/src/lib/datasrc/tests/sqlite3_accessor_unittest.cc b/src/lib/datasrc/tests/sqlite3_accessor_unittest.cc new file mode 100644 index 0000000000..097c821330 --- /dev/null +++ b/src/lib/datasrc/tests/sqlite3_accessor_unittest.cc @@ -0,0 +1,245 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. 
IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. +#include + +#include + +#include + +#include +#include + +using namespace isc::datasrc; +using isc::data::ConstElementPtr; +using isc::data::Element; +using isc::dns::RRClass; +using isc::dns::Name; + +namespace { +// Some test data +std::string SQLITE_DBFILE_EXAMPLE = TEST_DATA_DIR "/test.sqlite3"; +std::string SQLITE_DBFILE_EXAMPLE2 = TEST_DATA_DIR "/example2.com.sqlite3"; +std::string SQLITE_DBNAME_EXAMPLE2 = "sqlite3_example2.com.sqlite3"; +std::string SQLITE_DBFILE_EXAMPLE_ROOT = TEST_DATA_DIR "/test-root.sqlite3"; +std::string SQLITE_DBNAME_EXAMPLE_ROOT = "sqlite3_test-root.sqlite3"; +std::string SQLITE_DBFILE_BROKENDB = TEST_DATA_DIR "/brokendb.sqlite3"; +std::string SQLITE_DBFILE_MEMORY = ":memory:"; + +// The following file must be non existent and must be non"creatable"; +// the sqlite3 library will try to create a new DB file if it doesn't exist, +// so to test a failure case the create operation should also fail. +// The "nodir", a non existent directory, is inserted for this purpose. 
+std::string SQLITE_DBFILE_NOTEXIST = TEST_DATA_DIR "/nodir/notexist"; + +// Opening works (the content is tested in different tests) +TEST(SQLite3Open, common) { + EXPECT_NO_THROW(SQLite3Database db(SQLITE_DBFILE_EXAMPLE, + RRClass::IN())); +} + +// The file can't be opened +TEST(SQLite3Open, notExist) { + EXPECT_THROW(SQLite3Database db(SQLITE_DBFILE_NOTEXIST, + RRClass::IN()), SQLite3Error); +} + +// It rejects a broken DB +TEST(SQLite3Open, brokenDB) { + EXPECT_THROW(SQLite3Database db(SQLITE_DBFILE_BROKENDB, + RRClass::IN()), SQLite3Error); +} + +// Test we can create the schema on the fly +TEST(SQLite3Open, memoryDB) { + EXPECT_NO_THROW(SQLite3Database db(SQLITE_DBFILE_MEMORY, + RRClass::IN())); +} + +// Test fixture for querying the db +class SQLite3Access : public ::testing::Test { +public: + SQLite3Access() { + initAccessor(SQLITE_DBFILE_EXAMPLE, RRClass::IN()); + } + // So it can be re-created with different data + void initAccessor(const std::string& filename, const RRClass& rrclass) { + db.reset(new SQLite3Database(filename, rrclass)); + } + // The tested database accessor + boost::scoped_ptr db; +}; + +// This zone exists in the data, so it should be found +TEST_F(SQLite3Access, getZone) { + std::pair result(db->getZone(Name("example.com"))); + EXPECT_TRUE(result.first); + EXPECT_EQ(1, result.second); +} + +// But it should find only the zone, nothing below it +TEST_F(SQLite3Access, subZone) { + EXPECT_FALSE(db->getZone(Name("sub.example.com")).first); +} + +// This zone is not there at all +TEST_F(SQLite3Access, noZone) { + EXPECT_FALSE(db->getZone(Name("example.org")).first); +} + +// This zone is there, but in a different class +TEST_F(SQLite3Access, noClass) { + initAccessor(SQLITE_DBFILE_EXAMPLE, RRClass::CH()); + EXPECT_FALSE(db->getZone(Name("example.com")).first); +} + +TEST(SQLite3Open, getDBNameExample2) { + SQLite3Database db(SQLITE_DBFILE_EXAMPLE2, RRClass::IN()); + EXPECT_EQ(SQLITE_DBNAME_EXAMPLE2, db.getDBName()); +} + +TEST(SQLite3Open,
getDBNameExampleROOT) { + SQLite3Database db(SQLITE_DBFILE_EXAMPLE_ROOT, RRClass::IN()); + EXPECT_EQ(SQLITE_DBNAME_EXAMPLE_ROOT, db.getDBName()); +} + +// Simple function to check the contents of a +// single record row +void +checkRecordRow(const std::string columns[], + const std::string& field0, + const std::string& field1, + const std::string& field2, + const std::string& field3) +{ + EXPECT_EQ(field0, columns[0]); + EXPECT_EQ(field1, columns[1]); + EXPECT_EQ(field2, columns[2]); + EXPECT_EQ(field3, columns[3]); +} + +TEST_F(SQLite3Access, getRecords) { + const std::pair zone_info(db->getZone(Name("example.com"))); + ASSERT_TRUE(zone_info.first); + + const int zone_id = zone_info.second; + ASSERT_EQ(1, zone_id); + + const size_t column_count = DatabaseAccessor::COLUMN_COUNT; + std::string columns[column_count]; + + // without search, getNext() should return false + EXPECT_FALSE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "", "", "", ""); + + db->searchForRecords(zone_id, "foo.bar."); + EXPECT_FALSE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "", "", "", ""); + + db->searchForRecords(zone_id, ""); + EXPECT_FALSE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "", "", "", ""); + + // Should error on a bad number of columns + EXPECT_THROW(db->getNextRecord(columns, 3), DataSourceError); + EXPECT_THROW(db->getNextRecord(columns, 5), DataSourceError); + + // now try some real searches + db->searchForRecords(zone_id, "foo.example.com."); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "CNAME", "3600", "", + "cnametest.example.org."); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "RRSIG", "3600", "CNAME", + "CNAME 5 3 3600 20100322084538 20100220084538 33495 " + "example.com. FAKEFAKEFAKEFAKE"); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "NSEC", "7200", "", + "mail.example.com.
CNAME RRSIG NSEC"); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "RRSIG", "7200", "NSEC", + "NSEC 5 3 7200 20100322084538 20100220084538 33495 " + "example.com. FAKEFAKEFAKEFAKE"); + EXPECT_FALSE(db->getNextRecord(columns, column_count)); + // with no more records, the array should not have been modified + checkRecordRow(columns, "RRSIG", "7200", "NSEC", + "NSEC 5 3 7200 20100322084538 20100220084538 33495 " + "example.com. FAKEFAKEFAKEFAKE"); + + db->searchForRecords(zone_id, "example.com."); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "SOA", "3600", "", + "master.example.com. admin.example.com. " + "1234 3600 1800 2419200 7200"); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "RRSIG", "3600", "SOA", + "SOA 5 2 3600 20100322084538 20100220084538 " + "33495 example.com. FAKEFAKEFAKEFAKE"); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "NS", "1200", "", "dns01.example.com."); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "NS", "3600", "", "dns02.example.com."); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "NS", "1800", "", "dns03.example.com."); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "RRSIG", "3600", "NS", + "NS 5 2 3600 20100322084538 20100220084538 " + "33495 example.com. FAKEFAKEFAKEFAKE"); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "MX", "3600", "", "10 mail.example.com."); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "MX", "3600", "", + "20 mail.subzone.example.com."); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "RRSIG", "3600", "MX", + "MX 5 2 3600 20100322084538 20100220084538 " + "33495 example.com. 
FAKEFAKEFAKEFAKE"); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "NSEC", "7200", "", + "cname-ext.example.com. NS SOA MX RRSIG NSEC DNSKEY"); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "RRSIG", "7200", "NSEC", + "NSEC 5 2 7200 20100322084538 20100220084538 " + "33495 example.com. FAKEFAKEFAKEFAKE"); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "DNSKEY", "3600", "", + "256 3 5 AwEAAcOUBllYc1hf7ND9uDy+Yz1BF3sI0m4q NGV7W" + "cTD0WEiuV7IjXgHE36fCmS9QsUxSSOV o1I/FMxI2PJVqTYHkX" + "FBS7AzLGsQYMU7UjBZ SotBJ6Imt5pXMu+lEDNy8TOUzG3xm7g" + "0qcbW YF6qCEfvZoBtAqi5Rk7Mlrqs8agxYyMx"); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "DNSKEY", "3600", "", + "257 3 5 AwEAAe5WFbxdCPq2jZrZhlMj7oJdff3W7syJ tbvzg" + "62tRx0gkoCDoBI9DPjlOQG0UAbj+xUV 4HQZJStJaZ+fHU5AwV" + "NT+bBZdtV+NujSikhd THb4FYLg2b3Cx9NyJvAVukHp/91HnWu" + "G4T36 CzAFrfPwsHIrBz9BsaIQ21VRkcmj7DswfI/i DGd8j6b" + "qiODyNZYQ+ZrLmF0KIJ2yPN3iO6Zq 23TaOrVTjB7d1a/h31OD" + "fiHAxFHrkY3t3D5J R9Nsl/7fdRmSznwtcSDgLXBoFEYmw6p86" + "Acv RyoYNcL1SXjaKVLG5jyU3UR+LcGZT5t/0xGf oIK/aKwEN" + "rsjcKZZj660b1M="); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "RRSIG", "3600", "DNSKEY", + "DNSKEY 5 2 3600 20100322084538 20100220084538 " + "4456 example.com. FAKEFAKEFAKEFAKE"); + ASSERT_TRUE(db->getNextRecord(columns, column_count)); + checkRecordRow(columns, "RRSIG", "3600", "DNSKEY", + "DNSKEY 5 2 3600 20100322084538 20100220084538 " + "33495 example.com. FAKEFAKEFAKEFAKE"); + EXPECT_FALSE(db->getNextRecord(columns, column_count)); + // getnextrecord returning false should mean array is not altered + checkRecordRow(columns, "RRSIG", "3600", "DNSKEY", + "DNSKEY 5 2 3600 20100322084538 20100220084538 " + "33495 example.com. 
FAKEFAKEFAKEFAKE"); +} + +} // end anonymous namespace diff --git a/src/lib/datasrc/zone.h b/src/lib/datasrc/zone.h index 69785f0227..0dacc5da55 100644 --- a/src/lib/datasrc/zone.h +++ b/src/lib/datasrc/zone.h @@ -131,10 +131,10 @@ public: /// These methods should never throw an exception. //@{ /// Return the origin name of the zone. - virtual const isc::dns::Name& getOrigin() const = 0; + virtual isc::dns::Name getOrigin() const = 0; /// Return the RR class of the zone. - virtual const isc::dns::RRClass& getClass() const = 0; + virtual isc::dns::RRClass getClass() const = 0; //@} /// @@ -197,7 +197,7 @@ public: const isc::dns::RRType& type, isc::dns::RRsetList* target = NULL, const FindOptions options - = FIND_DEFAULT) const = 0; + = FIND_DEFAULT) = 0; //@} }; diff --git a/src/lib/dns/Makefile.am b/src/lib/dns/Makefile.am index 8ffd162076..c85879d4c2 100644 --- a/src/lib/dns/Makefile.am +++ b/src/lib/dns/Makefile.am @@ -51,8 +51,10 @@ EXTRA_DIST += rdata/generic/soa_6.cc EXTRA_DIST += rdata/generic/soa_6.h EXTRA_DIST += rdata/generic/txt_16.cc EXTRA_DIST += rdata/generic/txt_16.h EXTRA_DIST += rdata/generic/minfo_14.cc EXTRA_DIST += rdata/generic/minfo_14.h +EXTRA_DIST += rdata/generic/afsdb_18.cc +EXTRA_DIST += rdata/generic/afsdb_18.h EXTRA_DIST += rdata/hs_4/a_1.cc EXTRA_DIST += rdata/hs_4/a_1.h EXTRA_DIST += rdata/in_1/a_1.cc diff --git a/src/lib/dns/rdata/generic/afsdb_18.cc b/src/lib/dns/rdata/generic/afsdb_18.cc new file mode 100644 index 0000000000..dd7fa5f861 --- /dev/null +++ b/src/lib/dns/rdata/generic/afsdb_18.cc @@ -0,0 +1,170 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies.
+// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include +#include + +#include +#include + +#include +#include +#include +#include + +#include + +using namespace std; +using namespace isc::util::str; + +// BEGIN_ISC_NAMESPACE +// BEGIN_RDATA_NAMESPACE + +/// \brief Constructor from string. +/// +/// \c afsdb_str must be formatted as follows: +/// \code +/// \endcode +/// where the server name field must represent a valid domain name. +/// +/// An example of a valid string is: +/// \code "1 server.example.com." \endcode +/// +/// Exceptions +/// +/// \exception InvalidRdataText The number of RDATA fields (must be 2) is +/// incorrect. +/// \exception std::bad_alloc Memory allocation fails. +/// \exception Other The constructor of the \c Name class will throw if the +/// name in the string is invalid. +AFSDB::AFSDB(const std::string& afsdb_str) : + subtype_(0), server_(Name::ROOT_NAME()) +{ + istringstream iss(afsdb_str); + + try { + const uint32_t subtype = tokenToNum(getToken(iss)); + const Name servername(getToken(iss)); + string server; + + if (!iss.eof()) { + isc_throw(InvalidRdataText, "Unexpected input for AFSDB " + "RDATA: " << afsdb_str); + } + + subtype_ = subtype; + server_ = servername; + + } catch (const StringTokenError& ste) { + isc_throw(InvalidRdataText, "Invalid AFSDB text: " << + ste.what() << ": " << afsdb_str); + } +} + +/// \brief Constructor from wire-format data. +/// +/// This constructor doesn't check the validity of the second parameter (rdata +/// length) for parsing.
+/// If necessary, the caller will check consistency. +/// +/// \exception std::bad_alloc Memory allocation fails. +/// \exception Other The constructor of the \c Name class will throw if the +/// name in the wire data is invalid. +AFSDB::AFSDB(InputBuffer& buffer, size_t) : + subtype_(buffer.readUint16()), server_(buffer) +{} + +/// \brief Copy constructor. +/// +/// \exception std::bad_alloc Memory allocation fails in copying internal +/// member variables (this should be very rare). +AFSDB::AFSDB(const AFSDB& other) : + Rdata(), subtype_(other.subtype_), server_(other.server_) +{} + +AFSDB& +AFSDB::operator=(const AFSDB& source) { + subtype_ = source.subtype_; + server_ = source.server_; + + return (*this); +} + +/// \brief Convert the \c AFSDB to a string. +/// +/// The output of this method is formatted as described in the "from string" +/// constructor (\c AFSDB(const std::string&)). +/// +/// \exception std::bad_alloc Internal resource allocation fails. +/// +/// \return A \c string object that represents the \c AFSDB object. +string +AFSDB::toText() const { + return (boost::lexical_cast(subtype_) + " " + server_.toText()); +} + +/// \brief Render the \c AFSDB in the wire format without name compression. +/// +/// \exception std::bad_alloc Internal resource allocation fails. +/// +/// \param buffer An output buffer to store the wire data. +void +AFSDB::toWire(OutputBuffer& buffer) const { + buffer.writeUint16(subtype_); + server_.toWire(buffer); +} + +/// \brief Render the \c AFSDB in the wire format, taking name compression +/// into account. +/// +/// As specified in RFC3597, TYPE AFSDB is not "well-known", so the server +/// field (domain name) will not be compressed. +/// +/// \exception std::bad_alloc Internal resource allocation fails. +/// +/// \param renderer DNS message rendering context that encapsulates the +/// output buffer and name compression information.
+void +AFSDB::toWire(AbstractMessageRenderer& renderer) const { + renderer.writeUint16(subtype_); + renderer.writeName(server_, false); +} + +/// \brief Compare two instances of \c AFSDB RDATA. +/// +/// See documentation in \c Rdata. +int +AFSDB::compare(const Rdata& other) const { + const AFSDB& other_afsdb = dynamic_cast(other); + if (subtype_ < other_afsdb.subtype_) { + return (-1); + } else if (subtype_ > other_afsdb.subtype_) { + return (1); + } + + return (compareNames(server_, other_afsdb.server_)); +} + +const Name& +AFSDB::getServer() const { + return (server_); +} + +uint16_t +AFSDB::getSubtype() const { + return (subtype_); +} + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE diff --git a/src/lib/dns/rdata/generic/afsdb_18.h b/src/lib/dns/rdata/generic/afsdb_18.h new file mode 100644 index 0000000000..4a4677502c --- /dev/null +++ b/src/lib/dns/rdata/generic/afsdb_18.h @@ -0,0 +1,74 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +// BEGIN_HEADER_GUARD + +#include + +#include + +#include +#include + +// BEGIN_ISC_NAMESPACE + +// BEGIN_COMMON_DECLARATIONS +// END_COMMON_DECLARATIONS + +// BEGIN_RDATA_NAMESPACE + +/// \brief \c rdata::AFSDB class represents the AFSDB RDATA as defined %in +/// RFC1183. 
+/// +/// This class implements the basic interfaces inherited from the abstract +/// \c rdata::Rdata class, and provides trivial accessors specific to the +/// AFSDB RDATA. +class AFSDB : public Rdata { +public: + // BEGIN_COMMON_MEMBERS + // END_COMMON_MEMBERS + + /// \brief Assignment operator. + /// + /// This method never throws an exception. + AFSDB& operator=(const AFSDB& source); + /// + /// Specialized methods + /// + + /// \brief Return the value of the server field. + /// + /// \return A reference to a \c Name class object corresponding to the + /// internal server name. + /// + /// This method never throws an exception. + const Name& getServer() const; + + /// \brief Return the value of the subtype field. + /// + /// This method never throws an exception. + uint16_t getSubtype() const; + +private: + uint16_t subtype_; + Name server_; +}; + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE +// END_HEADER_GUARD + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/rdata/generic/rrsig_46.cc b/src/lib/dns/rdata/generic/rrsig_46.cc index 0c82406895..fc8e3400c9 100644 --- a/src/lib/dns/rdata/generic/rrsig_46.cc +++ b/src/lib/dns/rdata/generic/rrsig_46.cc @@ -243,5 +243,10 @@ RRSIG::compare(const Rdata& other) const { } } +const RRType& +RRSIG::typeCovered() { + return (impl_->covered_); +} + // END_RDATA_NAMESPACE // END_ISC_NAMESPACE diff --git a/src/lib/dns/rdata/generic/rrsig_46.h b/src/lib/dns/rdata/generic/rrsig_46.h index 19acc40c81..b8e630631e 100644 --- a/src/lib/dns/rdata/generic/rrsig_46.h +++ b/src/lib/dns/rdata/generic/rrsig_46.h @@ -38,6 +38,9 @@ public: // END_COMMON_MEMBERS RRSIG& operator=(const RRSIG& source); ~RRSIG(); + + // specialized methods + const RRType& typeCovered(); private: RRSIGImpl* impl_; }; diff --git a/src/lib/dns/tests/Makefile.am b/src/lib/dns/tests/Makefile.am index 3921f27131..caa26e5736 100644 --- a/src/lib/dns/tests/Makefile.am +++ b/src/lib/dns/tests/Makefile.am @@ -32,6 +32,7 @@ run_unittests_SOURCES += 
rdata_ns_unittest.cc rdata_soa_unittest.cc run_unittests_SOURCES += rdata_txt_unittest.cc rdata_mx_unittest.cc run_unittests_SOURCES += rdata_ptr_unittest.cc rdata_cname_unittest.cc run_unittests_SOURCES += rdata_dname_unittest.cc +run_unittests_SOURCES += rdata_afsdb_unittest.cc run_unittests_SOURCES += rdata_opt_unittest.cc run_unittests_SOURCES += rdata_dnskey_unittest.cc run_unittests_SOURCES += rdata_ds_unittest.cc diff --git a/src/lib/dns/tests/rdata_afsdb_unittest.cc b/src/lib/dns/tests/rdata_afsdb_unittest.cc new file mode 100644 index 0000000000..7df8d83659 --- /dev/null +++ b/src/lib/dns/tests/rdata_afsdb_unittest.cc @@ -0,0 +1,210 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. 
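The AFSDB RDATA exercised by the new rdata_afsdb unit tests is, on the wire, just a 16-bit subtype in network byte order followed by the server name in uncompressed DNS wire format (per RFC 3597 the name must not be compressed). A standalone Python sketch of that encoding, separate from the patch (helper names are illustrative):

```python
import struct

def encode_name(name):
    """Encode a dotted domain name in uncompressed DNS wire format."""
    wire = b""
    for label in name.rstrip(".").split("."):
        raw = label.encode("ascii")
        if not 0 < len(raw) < 64:  # DNS labels are 1..63 octets
            raise ValueError("bad label length: " + label)
        wire += struct.pack("B", len(raw)) + raw
    return wire + b"\x00"  # zero-length root label terminates the name

def encode_afsdb(subtype, server):
    """AFSDB RDATA: 16-bit subtype, then the (never compressed) server name."""
    return struct.pack(">H", subtype) + encode_name(server)

# "1 afsdb.example.com." from the test data above:
rdata = encode_afsdb(1, "afsdb.example.com.")
assert rdata == b"\x00\x01\x05afsdb\x07example\x03com\x00"
```

The too_long_label test case corresponds to the 1..63 octet check in `encode_name`.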
+ +#include +#include +#include +#include +#include +#include +#include + +#include + +#include +#include + +using isc::UnitTestUtil; +using namespace std; +using namespace isc::dns; +using namespace isc::util; +using namespace isc::dns::rdata; + +const char* const afsdb_text = "1 afsdb.example.com."; +const char* const afsdb_text2 = "0 root.example.com."; +const char* const too_long_label("012345678901234567890123456789" + "0123456789012345678901234567890123"); + +namespace { +class Rdata_AFSDB_Test : public RdataTest { +protected: + Rdata_AFSDB_Test() : + rdata_afsdb(string(afsdb_text)), rdata_afsdb2(string(afsdb_text2)) + {} + + const generic::AFSDB rdata_afsdb; + const generic::AFSDB rdata_afsdb2; + vector expected_wire; +}; + + +TEST_F(Rdata_AFSDB_Test, createFromText) { + EXPECT_EQ(1, rdata_afsdb.getSubtype()); + EXPECT_EQ(Name("afsdb.example.com."), rdata_afsdb.getServer()); + + EXPECT_EQ(0, rdata_afsdb2.getSubtype()); + EXPECT_EQ(Name("root.example.com."), rdata_afsdb2.getServer()); +} + +TEST_F(Rdata_AFSDB_Test, badText) { + // subtype is too large + EXPECT_THROW(const generic::AFSDB rdata_afsdb("99999999 afsdb.example.com."), + InvalidRdataText); + // incomplete text + EXPECT_THROW(const generic::AFSDB rdata_afsdb("10"), InvalidRdataText); + EXPECT_THROW(const generic::AFSDB rdata_afsdb("SPOON"), InvalidRdataText); + EXPECT_THROW(const generic::AFSDB rdata_afsdb("1root.example.com."), InvalidRdataText); + // number of fields (must be 2) is incorrect + EXPECT_THROW(const generic::AFSDB rdata_afsdb("10 afsdb. example.com."), + InvalidRdataText); + // bad name + EXPECT_THROW(const generic::AFSDB rdata_afsdb("1 afsdb.example.com." 
+ + string(too_long_label)), TooLongLabel); +} + +TEST_F(Rdata_AFSDB_Test, assignment) { + generic::AFSDB copy((string(afsdb_text2))); + copy = rdata_afsdb; + EXPECT_EQ(0, copy.compare(rdata_afsdb)); + + // Check if the copied data is valid even after the original is deleted + generic::AFSDB* copy2 = new generic::AFSDB(rdata_afsdb); + generic::AFSDB copy3((string(afsdb_text2))); + copy3 = *copy2; + delete copy2; + EXPECT_EQ(0, copy3.compare(rdata_afsdb)); + + // Self assignment + copy = copy; + EXPECT_EQ(0, copy.compare(rdata_afsdb)); +} + +TEST_F(Rdata_AFSDB_Test, createFromWire) { + // uncompressed names + EXPECT_EQ(0, rdata_afsdb.compare( + *rdataFactoryFromFile(RRType::AFSDB(), RRClass::IN(), + "rdata_afsdb_fromWire1.wire"))); + // compressed name + EXPECT_EQ(0, rdata_afsdb.compare( + *rdataFactoryFromFile(RRType::AFSDB(), RRClass::IN(), + "rdata_afsdb_fromWire2.wire", 13))); + // RDLENGTH is too short + EXPECT_THROW(rdataFactoryFromFile(RRType::AFSDB(), RRClass::IN(), + "rdata_afsdb_fromWire3.wire"), + InvalidRdataLength); + // RDLENGTH is too long + EXPECT_THROW(rdataFactoryFromFile(RRType::AFSDB(), RRClass::IN(), + "rdata_afsdb_fromWire4.wire"), + InvalidRdataLength); + // bogus server name, the error should be detected in the name + // constructor + EXPECT_THROW(rdataFactoryFromFile(RRType::AFSDB(), RRClass::IN(), + "rdata_afsdb_fromWire5.wire"), + DNSMessageFORMERR); +} + +TEST_F(Rdata_AFSDB_Test, toWireBuffer) { + // construct actual data + rdata_afsdb.toWire(obuffer); + + // construct expected data + UnitTestUtil::readWireData("rdata_afsdb_toWire1.wire", expected_wire); + + // then compare them + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + obuffer.getData(), obuffer.getLength(), + &expected_wire[0], expected_wire.size()); + + // clear buffer for the next test + obuffer.clear(); + + // construct actual data + Name("example.com.").toWire(obuffer); + rdata_afsdb2.toWire(obuffer); + + // construct expected data + 
UnitTestUtil::readWireData("rdata_afsdb_toWire2.wire", expected_wire); + + // then compare them + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + obuffer.getData(), obuffer.getLength(), + &expected_wire[0], expected_wire.size()); +} + +TEST_F(Rdata_AFSDB_Test, toWireRenderer) { + // similar to toWireBuffer, but names in RDATA could be compressed due to + // preceding names. Actually they must not be compressed according to + // RFC3597, and this test checks that. + + // construct actual data + rdata_afsdb.toWire(renderer); + + // construct expected data + UnitTestUtil::readWireData("rdata_afsdb_toWire1.wire", expected_wire); + + // then compare them + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + renderer.getData(), renderer.getLength(), + &expected_wire[0], expected_wire.size()); + + // clear renderer for the next test + renderer.clear(); + + // construct actual data + Name("example.com.").toWire(obuffer); + rdata_afsdb2.toWire(renderer); + + // construct expected data + UnitTestUtil::readWireData("rdata_afsdb_toWire2.wire", expected_wire); + + // then compare them + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + renderer.getData(), renderer.getLength(), + &expected_wire[0], expected_wire.size()); +} + +TEST_F(Rdata_AFSDB_Test, toText) { + EXPECT_EQ(afsdb_text, rdata_afsdb.toText()); + EXPECT_EQ(afsdb_text2, rdata_afsdb2.toText()); +} + +TEST_F(Rdata_AFSDB_Test, compare) { + // check reflexivity + EXPECT_EQ(0, rdata_afsdb.compare(rdata_afsdb)); + + // name must be compared in case-insensitive manner + EXPECT_EQ(0, rdata_afsdb.compare(generic::AFSDB("1 " + "AFSDB.example.com."))); + + const generic::AFSDB small1("10 afsdb.example.com"); + const generic::AFSDB large1("65535 afsdb.example.com"); + const generic::AFSDB large2("256 afsdb.example.com"); + + // confirm these are compared as unsigned values + EXPECT_GT(0, rdata_afsdb.compare(large1)); + EXPECT_LT(0, large1.compare(rdata_afsdb)); + + // confirm these are compared in network byte order + 
EXPECT_GT(0, small1.compare(large2)); + EXPECT_LT(0, large2.compare(small1)); + + // another AFSDB whose server name is larger than that of rdata_afsdb. + const generic::AFSDB large3("256 zzzzz.example.com"); + EXPECT_GT(0, large2.compare(large3)); + EXPECT_LT(0, large3.compare(large2)); + + // comparison attempt between incompatible RR types should be rejected + EXPECT_THROW(rdata_afsdb.compare(*rdata_nomatch), bad_cast); +} +} diff --git a/src/lib/dns/tests/rdata_rrsig_unittest.cc b/src/lib/dns/tests/rdata_rrsig_unittest.cc index 903021fb5e..3324b99de1 100644 --- a/src/lib/dns/tests/rdata_rrsig_unittest.cc +++ b/src/lib/dns/tests/rdata_rrsig_unittest.cc @@ -47,7 +47,7 @@ TEST_F(Rdata_RRSIG_Test, fromText) { "f49t+sXKPzbipN9g+s1ZPiIyofc="); generic::RRSIG rdata_rrsig(rrsig_txt); EXPECT_EQ(rrsig_txt, rdata_rrsig.toText()); - + EXPECT_EQ(isc::dns::RRType::A(), rdata_rrsig.typeCovered()); } TEST_F(Rdata_RRSIG_Test, badText) { diff --git a/src/lib/dns/tests/testdata/Makefile.am b/src/lib/dns/tests/testdata/Makefile.am index 3dac8f2b4c..3aa49375d8 100644 --- a/src/lib/dns/tests/testdata/Makefile.am +++ b/src/lib/dns/tests/testdata/Makefile.am @@ -36,6 +36,10 @@ BUILT_SOURCES += rdata_rp_fromWire1.wire rdata_rp_fromWire2.wire BUILT_SOURCES += rdata_rp_fromWire3.wire rdata_rp_fromWire4.wire BUILT_SOURCES += rdata_rp_fromWire5.wire rdata_rp_fromWire6.wire BUILT_SOURCES += rdata_rp_toWire1.wire rdata_rp_toWire2.wire +BUILT_SOURCES += rdata_afsdb_fromWire1.wire rdata_afsdb_fromWire2.wire +BUILT_SOURCES += rdata_afsdb_fromWire3.wire rdata_afsdb_fromWire4.wire +BUILT_SOURCES += rdata_afsdb_fromWire5.wire +BUILT_SOURCES += rdata_afsdb_toWire1.wire rdata_afsdb_toWire2.wire BUILT_SOURCES += rdata_soa_toWireUncompressed.wire BUILT_SOURCES += rdata_txt_fromWire2.wire rdata_txt_fromWire3.wire BUILT_SOURCES += rdata_txt_fromWire4.wire rdata_txt_fromWire5.wire @@ -105,6 +109,10 @@ EXTRA_DIST += rdata_rp_fromWire1.spec rdata_rp_fromWire2.spec EXTRA_DIST += rdata_rp_fromWire3.spec 
rdata_rp_fromWire4.spec EXTRA_DIST += rdata_rp_fromWire5.spec rdata_rp_fromWire6.spec EXTRA_DIST += rdata_rp_toWire1.spec rdata_rp_toWire2.spec +EXTRA_DIST += rdata_afsdb_fromWire1.spec rdata_afsdb_fromWire2.spec +EXTRA_DIST += rdata_afsdb_fromWire3.spec rdata_afsdb_fromWire4.spec +EXTRA_DIST += rdata_afsdb_fromWire5.spec +EXTRA_DIST += rdata_afsdb_toWire1.spec rdata_afsdb_toWire2.spec EXTRA_DIST += rdata_soa_fromWire rdata_soa_toWireUncompressed.spec EXTRA_DIST += rdata_srv_fromWire EXTRA_DIST += rdata_minfo_fromWire1.spec rdata_minfo_fromWire2.spec diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_fromWire1.spec b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire1.spec new file mode 100644 index 0000000000..f831313827 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire1.spec @@ -0,0 +1,3 @@ +[custom] +sections: afsdb +[afsdb] diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_fromWire2.spec b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire2.spec new file mode 100644 index 0000000000..f33e768589 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire2.spec @@ -0,0 +1,6 @@ +[custom] +sections: name:afsdb +[name] +name: example.com +[afsdb] +server: afsdb.ptr=0 diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_fromWire3.spec b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire3.spec new file mode 100644 index 0000000000..993032f605 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire3.spec @@ -0,0 +1,4 @@ +[custom] +sections: afsdb +[afsdb] +rdlen: 3 diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_fromWire4.spec b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire4.spec new file mode 100644 index 0000000000..37abf134c5 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire4.spec @@ -0,0 +1,4 @@ +[custom] +sections: afsdb +[afsdb] +rdlen: 80 diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_fromWire5.spec b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire5.spec new file mode 100644 index 0000000000..0ea79dd173 
--- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire5.spec @@ -0,0 +1,4 @@ +[custom] +sections: afsdb +[afsdb] +server: "01234567890123456789012345678901234567890123456789012345678901234" diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_toWire1.spec b/src/lib/dns/tests/testdata/rdata_afsdb_toWire1.spec new file mode 100644 index 0000000000..19464589e1 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_toWire1.spec @@ -0,0 +1,4 @@ +[custom] +sections: afsdb +[afsdb] +rdlen: -1 diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_toWire2.spec b/src/lib/dns/tests/testdata/rdata_afsdb_toWire2.spec new file mode 100644 index 0000000000..c80011a488 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_toWire2.spec @@ -0,0 +1,8 @@ +[custom] +sections: name:afsdb +[name] +name: example.com. +[afsdb] +subtype: 0 +server: root.example.com +rdlen: -1 diff --git a/src/lib/python/isc/config/ccsession.py b/src/lib/python/isc/config/ccsession.py index 4fa9d58f9c..ba7724ce55 100644 --- a/src/lib/python/isc/config/ccsession.py +++ b/src/lib/python/isc/config/ccsession.py @@ -91,6 +91,7 @@ COMMAND_CONFIG_UPDATE = "config_update" COMMAND_MODULE_SPECIFICATION_UPDATE = "module_specification_update" COMMAND_GET_COMMANDS_SPEC = "get_commands_spec" +COMMAND_GET_STATISTICS_SPEC = "get_statistics_spec" COMMAND_GET_CONFIG = "get_config" COMMAND_SET_CONFIG = "set_config" COMMAND_GET_MODULE_SPEC = "get_module_spec" diff --git a/src/lib/python/isc/config/cfgmgr.py b/src/lib/python/isc/config/cfgmgr.py index 18e001c306..1db9fd389f 100644 --- a/src/lib/python/isc/config/cfgmgr.py +++ b/src/lib/python/isc/config/cfgmgr.py @@ -267,6 +267,19 @@ class ConfigManager: commands[module_name] = self.module_specs[module_name].get_commands_spec() return commands + def get_statistics_spec(self, name = None): + """Returns a dict containing 'module_name': statistics_spec for + all modules. 
If name is specified, only that module will + be included""" + statistics = {} + if name: + if name in self.module_specs: + statistics[name] = self.module_specs[name].get_statistics_spec() + else: + for module_name in self.module_specs.keys(): + statistics[module_name] = self.module_specs[module_name].get_statistics_spec() + return statistics + def read_config(self): """Read the current configuration from the file specificied at init()""" try: @@ -457,6 +470,8 @@ class ConfigManager: if cmd: if cmd == ccsession.COMMAND_GET_COMMANDS_SPEC: answer = ccsession.create_answer(0, self.get_commands_spec()) + elif cmd == ccsession.COMMAND_GET_STATISTICS_SPEC: + answer = ccsession.create_answer(0, self.get_statistics_spec()) elif cmd == ccsession.COMMAND_GET_MODULE_SPEC: answer = self._handle_get_module_spec(arg) elif cmd == ccsession.COMMAND_GET_CONFIG: diff --git a/src/lib/python/isc/config/module_spec.py b/src/lib/python/isc/config/module_spec.py index 9aa49e03e7..b79f928237 100644 --- a/src/lib/python/isc/config/module_spec.py +++ b/src/lib/python/isc/config/module_spec.py @@ -23,6 +23,7 @@ import json import sys +import time import isc.cc.data @@ -91,7 +92,7 @@ class ModuleSpec: return _validate_spec_list(data_def, full, data, errors) else: # no spec, always bad - if errors != None: + if errors is not None: errors.append("No config_data specification") return False @@ -117,6 +118,26 @@ class ModuleSpec: return False + def validate_statistics(self, full, stat, errors = None): + """Check whether the given piece of data conforms to this + data definition. If so, it returns True. If not, it will + return false. If errors is given, and is an array, a string + describing the error will be appended to it. The current + version stops as soon as there is one error so this list + will not be exhaustive. If 'full' is true, it also errors on + non-optional missing values. 
Set this to False if you want to + validate only a part of a statistics tree (like a list of + non-default values). It also checks 'item_format' for + time-related values.""" + stat_spec = self.get_statistics_spec() + if stat_spec is not None: + return _validate_spec_list(stat_spec, full, stat, errors) + else: + # no spec, always bad + if errors is not None: + errors.append("No statistics specification") + return False + def get_module_name(self): """Returns a string containing the name of the module as specified by the specification given at __init__()""" @@ -152,6 +173,14 @@ class ModuleSpec: else: return None + def get_statistics_spec(self): + """Returns a dict representation of the statistics part of the + specification, or None if there is none.""" + if 'statistics' in self._module_spec: + return self._module_spec['statistics'] + else: + return None + def __str__(self): """Returns a string representation of the full specification""" return self._module_spec.__str__() @@ -160,8 +189,9 @@ def _check(module_spec): """Checks the full specification. This is a dict that contains the element "module_spec", which is in itself a dict that must contain at least a "module_name" (string) and optionally - a "config_data" and a "commands" element, both of which are lists - of dicts. Raises a ModuleSpecError if there is a problem.""" + a "config_data", a "commands" and a "statistics" element, all + of which are lists of dicts.
Raises a ModuleSpecError if there + is a problem.""" if type(module_spec) != dict: raise ModuleSpecError("data specification not a dict") if "module_name" not in module_spec: @@ -173,6 +203,8 @@ def _check(module_spec): _check_config_spec(module_spec["config_data"]) if "commands" in module_spec: _check_command_spec(module_spec["commands"]) + if "statistics" in module_spec: + _check_statistics_spec(module_spec["statistics"]) def _check_config_spec(config_data): # config data is a list of items represented by dicts that contain @@ -263,34 +295,75 @@ def _check_item_spec(config_item): if type(map_item) != dict: raise ModuleSpecError("map_item_spec element is not a dict") _check_item_spec(map_item) + if 'item_format' in config_item and 'item_default' in config_item: + item_format = config_item["item_format"] + item_default = config_item["item_default"] + if not _check_format(item_default, item_format): + raise ModuleSpecError( + "Wrong format for " + str(item_default) + " in " + str(item_name)) +def _check_statistics_spec(statistics): + # statistics is a list of items represented by dicts that contain + # things like "item_name", depending on the type they can have + # specific subitems + """Checks a list that contains the statistics part of the + specification. Raises a ModuleSpecError if there is a + problem.""" + if type(statistics) != list: + raise ModuleSpecError("statistics is of type " + str(type(statistics)) + + ", not a list of items") + for stat_item in statistics: + _check_item_spec(stat_item) + # Additionally checks if there are 'item_title' and + # 'item_description' + for item in [ 'item_title', 'item_description' ]: + if item not in stat_item: + raise ModuleSpecError("no " + item + " in statistics item") + +def _check_format(value, format_name): + """Check if specified value and format are correct. 
Return True if + is is correct.""" + # TODO: should be added other format types if necessary + time_formats = { 'date-time' : "%Y-%m-%dT%H:%M:%SZ", + 'date' : "%Y-%m-%d", + 'time' : "%H:%M:%S" } + for fmt in time_formats: + if format_name == fmt: + try: + # reverse check + return value == time.strftime( + time_formats[fmt], + time.strptime(value, time_formats[fmt])) + except (ValueError, TypeError): + break + return False def _validate_type(spec, value, errors): """Returns true if the value is of the correct type given the specification""" data_type = spec['item_type'] if data_type == "integer" and type(value) != int: - if errors != None: + if errors is not None: errors.append(str(value) + " should be an integer") return False elif data_type == "real" and type(value) != float: - if errors != None: + if errors is not None: errors.append(str(value) + " should be a real") return False elif data_type == "boolean" and type(value) != bool: - if errors != None: + if errors is not None: errors.append(str(value) + " should be a boolean") return False elif data_type == "string" and type(value) != str: - if errors != None: + if errors is not None: errors.append(str(value) + " should be a string") return False elif data_type == "list" and type(value) != list: - if errors != None: + if errors is not None: errors.append(str(value) + " should be a list") return False elif data_type == "map" and type(value) != dict: - if errors != None: + if errors is not None: errors.append(str(value) + " should be a map") return False elif data_type == "named_set" and type(value) != dict: @@ -300,6 +373,18 @@ def _validate_type(spec, value, errors): else: return True +def _validate_format(spec, value, errors): + """Returns true if the value is of the correct format given the + specification. 
Also returns True if the spec has no 'item_format'.""" + if "item_format" in spec: + item_format = spec['item_format'] + if not _check_format(value, item_format): + if errors is not None: + errors.append("format type of " + str(value) + + " should be " + item_format) + return False + return True + def _validate_item(spec, full, data, errors): if not _validate_type(spec, data, errors): return False @@ -308,6 +393,8 @@ def _validate_item(spec, full, data, errors): for data_el in data: if not _validate_type(list_spec, data_el, errors): return False + if not _validate_format(list_spec, data_el, errors): + return False if list_spec['item_type'] == "map": if not _validate_item(list_spec, full, data_el, errors): return False @@ -322,6 +409,8 @@ def _validate_item(spec, full, data, errors): return False if not _validate_item(named_set_spec, full, data_el, errors): return False + elif not _validate_format(spec, data, errors): + return False return True def _validate_spec(spec, full, data, errors): @@ -333,7 +422,7 @@ def _validate_spec(spec, full, data, errors): elif item_name in data: return _validate_item(spec, full, data[item_name], errors) elif full and not item_optional: - if errors != None: + if errors is not None: errors.append("non-optional item " + item_name + " missing") return False else: @@ -358,7 +447,7 @@ def _validate_spec_list(module_spec, full, data, errors): if spec_item["item_name"] == item_name: found = True if not found and item_name != "version": - if errors != None: + if errors is not None: errors.append("unknown item " + item_name) validated = False return validated diff --git a/src/lib/python/isc/config/tests/cfgmgr_test.py b/src/lib/python/isc/config/tests/cfgmgr_test.py index 0a9e2d3e44..eacc425dd5 100644 --- a/src/lib/python/isc/config/tests/cfgmgr_test.py +++ b/src/lib/python/isc/config/tests/cfgmgr_test.py @@ -219,6 +219,25 @@ class TestConfigManager(unittest.TestCase): commands_spec = self.cm.get_commands_spec('Spec2')
self.assertEqual(commands_spec['Spec2'], module_spec.get_commands_spec()) + def test_get_statistics_spec(self): + statistics_spec = self.cm.get_statistics_spec() + self.assertEqual(statistics_spec, {}) + module_spec = isc.config.module_spec.module_spec_from_file(self.data_path + os.sep + "spec1.spec") + self.assert_(module_spec.get_module_name() not in self.cm.module_specs) + self.cm.set_module_spec(module_spec) + self.assert_(module_spec.get_module_name() in self.cm.module_specs) + statistics_spec = self.cm.get_statistics_spec() + self.assertEqual(statistics_spec, { 'Spec1': None }) + self.cm.remove_module_spec('Spec1') + module_spec = isc.config.module_spec.module_spec_from_file(self.data_path + os.sep + "spec2.spec") + self.assert_(module_spec.get_module_name() not in self.cm.module_specs) + self.cm.set_module_spec(module_spec) + self.assert_(module_spec.get_module_name() in self.cm.module_specs) + statistics_spec = self.cm.get_statistics_spec() + self.assertEqual(statistics_spec['Spec2'], module_spec.get_statistics_spec()) + statistics_spec = self.cm.get_statistics_spec('Spec2') + self.assertEqual(statistics_spec['Spec2'], module_spec.get_statistics_spec()) + def test_read_config(self): self.assertEqual(self.cm.config.data, {'version': config_data.BIND10_CONFIG_DATA_VERSION}) self.cm.read_config() @@ -241,6 +260,7 @@ class TestConfigManager(unittest.TestCase): self._handle_msg_helper("", { 'result': [ 1, 'Unknown message format: ']}) self._handle_msg_helper({ "command": [ "badcommand" ] }, { 'result': [ 1, "Unknown command: badcommand"]}) self._handle_msg_helper({ "command": [ "get_commands_spec" ] }, { 'result': [ 0, {} ]}) + self._handle_msg_helper({ "command": [ "get_statistics_spec" ] }, { 'result': [ 0, {} ]}) self._handle_msg_helper({ "command": [ "get_module_spec" ] }, { 'result': [ 0, {} ]}) self._handle_msg_helper({ "command": [ "get_module_spec", { "module_name": "Spec2" } ] }, { 'result': [ 0, {} ]}) #self._handle_msg_helper({ "command": [ 
"get_module_spec", { "module_name": "nosuchmodule" } ] }, @@ -329,6 +349,7 @@ class TestConfigManager(unittest.TestCase): { "module_name" : "Spec2" } ] }, { 'result': [ 0, self.spec.get_full_spec() ] }) self._handle_msg_helper({ "command": [ "get_commands_spec" ] }, { 'result': [ 0, { self.spec.get_module_name(): self.spec.get_commands_spec() } ]}) + self._handle_msg_helper({ "command": [ "get_statistics_spec" ] }, { 'result': [ 0, { self.spec.get_module_name(): self.spec.get_statistics_spec() } ]}) # re-add this once we have new way to propagate spec changes (1 instead of the current 2 messages) #self.assertEqual(len(self.fake_session.message_queue), 2) # the name here is actually wrong (and hardcoded), but needed in the current version @@ -450,6 +471,7 @@ class TestConfigManager(unittest.TestCase): def test_run(self): self.fake_session.group_sendmsg({ "command": [ "get_commands_spec" ] }, "ConfigManager") + self.fake_session.group_sendmsg({ "command": [ "get_statistics_spec" ] }, "ConfigManager") self.fake_session.group_sendmsg({ "command": [ "shutdown" ] }, "ConfigManager") self.cm.run() pass diff --git a/src/lib/python/isc/config/tests/module_spec_test.py b/src/lib/python/isc/config/tests/module_spec_test.py index be862c5012..fc53d23221 100644 --- a/src/lib/python/isc/config/tests/module_spec_test.py +++ b/src/lib/python/isc/config/tests/module_spec_test.py @@ -81,6 +81,11 @@ class TestModuleSpec(unittest.TestCase): self.assertRaises(ModuleSpecError, self.read_spec_file, "spec20.spec") self.assertRaises(ModuleSpecError, self.read_spec_file, "spec21.spec") self.assertRaises(ModuleSpecError, self.read_spec_file, "spec26.spec") + self.assertRaises(ModuleSpecError, self.read_spec_file, "spec34.spec") + self.assertRaises(ModuleSpecError, self.read_spec_file, "spec35.spec") + self.assertRaises(ModuleSpecError, self.read_spec_file, "spec36.spec") + self.assertRaises(ModuleSpecError, self.read_spec_file, "spec37.spec") + self.assertRaises(ModuleSpecError, 
self.read_spec_file, "spec38.spec") def validate_data(self, specfile_name, datafile_name): dd = self.read_spec_file(specfile_name); @@ -123,6 +128,17 @@ class TestModuleSpec(unittest.TestCase): self.assertEqual(False, self.validate_command_params("spec27.spec", "data22_8.data", 'cmd1')) self.assertEqual(False, self.validate_command_params("spec27.spec", "data22_8.data", 'cmd2')) + def test_statistics_validation(self): + def _validate_stat(specfile_name, datafile_name): + dd = self.read_spec_file(specfile_name); + data_file = open(self.spec_file(datafile_name)) + data_str = data_file.read() + data = isc.cc.data.parse_value_str(data_str) + return dd.validate_statistics(True, data, []) + self.assertFalse(self.read_spec_file("spec1.spec").validate_statistics(True, None, None)); + self.assertTrue(_validate_stat("spec33.spec", "data33_1.data")) + self.assertFalse(_validate_stat("spec33.spec", "data33_2.data")) + def test_init(self): self.assertRaises(ModuleSpecError, ModuleSpec, 1) module_spec = isc.config.module_spec_from_file(self.spec_file("spec1.spec"), False) @@ -269,6 +285,80 @@ class TestModuleSpec(unittest.TestCase): } ) + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_datetime", + 'item_type': "string", + 'item_optional': False, + 'item_default': 1, + 'item_format': "date-time" + } + ) + + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_date", + 'item_type': "string", + 'item_optional': False, + 'item_default': 1, + 'item_format': "date" + } + ) + + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_time", + 'item_type': "string", + 'item_optional': False, + 'item_default': 1, + 'item_format': "time" + } + ) + + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_datetime", + 'item_type': "string", + 'item_optional': False, + 'item_default': "2011-05-27T19:42:57Z", + 
'item_format': "dummy-format" + } + ) + + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_date", + 'item_type': "string", + 'item_optional': False, + 'item_default': "2011-05-27", + 'item_format': "dummy-format" + } + ) + + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_time", + 'item_type': "string", + 'item_optional': False, + 'item_default': "19:42:57Z", + 'item_format': "dummy-format" + } + ) + + def test_check_format(self): + self.assertTrue(isc.config.module_spec._check_format('2011-05-27T19:42:57Z', 'date-time')) + self.assertTrue(isc.config.module_spec._check_format('2011-05-27', 'date')) + self.assertTrue(isc.config.module_spec._check_format('19:42:57', 'time')) + self.assertFalse(isc.config.module_spec._check_format('2011-05-27T19:42:57Z', 'dummy')) + self.assertFalse(isc.config.module_spec._check_format('2011-05-27', 'dummy')) + self.assertFalse(isc.config.module_spec._check_format('19:42:57', 'dummy')) + self.assertFalse(isc.config.module_spec._check_format('2011-13-99T99:99:99Z', 'date-time')) + self.assertFalse(isc.config.module_spec._check_format('2011-13-99', 'date')) + self.assertFalse(isc.config.module_spec._check_format('99:99:99', 'time')) + self.assertFalse(isc.config.module_spec._check_format('', 'date-time')) + self.assertFalse(isc.config.module_spec._check_format(None, 'date-time')) + self.assertFalse(isc.config.module_spec._check_format(None, None)) + # wrong date-time-type format not ending with "Z" + self.assertFalse(isc.config.module_spec._check_format('2011-05-27T19:42:57', 'date-time')) + # wrong date-type format ending with "T" + self.assertFalse(isc.config.module_spec._check_format('2011-05-27T', 'date')) + # wrong time-type format ending with "Z" + self.assertFalse(isc.config.module_spec._check_format('19:42:57Z', 'time')) + def test_validate_type(self): errors = [] self.assertEqual(True, isc.config.module_spec._validate_type({ 
'item_type': 'integer' }, 1, errors)) @@ -306,6 +396,25 @@ class TestModuleSpec(unittest.TestCase): self.assertEqual(False, isc.config.module_spec._validate_type({ 'item_type': 'map' }, 1, errors)) self.assertEqual(['1 should be a map'], errors) + def test_validate_format(self): + errors = [] + self.assertEqual(True, isc.config.module_spec._validate_format({ 'item_format': 'date-time' }, "2011-05-27T19:42:57Z", errors)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'date-time' }, "a", None)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'date-time' }, "a", errors)) + self.assertEqual(['format type of a should be date-time'], errors) + + errors = [] + self.assertEqual(True, isc.config.module_spec._validate_format({ 'item_format': 'date' }, "2011-05-27", errors)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'date' }, "a", None)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'date' }, "a", errors)) + self.assertEqual(['format type of a should be date'], errors) + + errors = [] + self.assertEqual(True, isc.config.module_spec._validate_format({ 'item_format': 'time' }, "19:42:57", errors)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'time' }, "a", None)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'time' }, "a", errors)) + self.assertEqual(['format type of a should be time'], errors) + def test_validate_spec(self): spec = { 'item_name': "an_item", 'item_type': "string", diff --git a/src/lib/util/filename.h b/src/lib/util/filename.h index c9874ce220..f6259386ef 100644 --- a/src/lib/util/filename.h +++ b/src/lib/util/filename.h @@ -103,6 +103,11 @@ public: return (extension_); } + /// \return Name + extension of Given File Name + std::string nameAndExtension() const { + return (name_ + extension_); + } + /// \brief Expand Name with Default /// /// A 
default file specified is supplied and used to fill in any missing diff --git a/src/lib/util/python/gen_wiredata.py.in b/src/lib/util/python/gen_wiredata.py.in index e35b37b994..8bd2b3c2a6 100755 --- a/src/lib/util/python/gen_wiredata.py.in +++ b/src/lib/util/python/gen_wiredata.py.in @@ -844,6 +844,27 @@ class MINFO(RR): f.write('# RMAILBOX=%s EMAILBOX=%s\n' % (self.rmailbox, self.emailbox)) f.write('%s %s\n' % (rmailbox_wire, emailbox_wire)) +class AFSDB(RR): + '''Implements rendering AFSDB RDATA in the test data format. + + Configurable parameters are as follows (see the description of the + attribute of the same name for the default value): + - subtype (16 bit int): The subtype field. + - server (string): The server field. + The string must be interpreted as a valid domain name. + ''' + subtype = 1 + server = 'afsdb.example.com' + def dump(self, f): + server_wire = encode_name(self.server) + if self.rdlen is None: + self.rdlen = 2 + len(server_wire) // 2 + else: + self.rdlen = int(self.rdlen) + self.dump_header(f, self.rdlen) + f.write('# SUBTYPE=%d SERVER=%s\n' % (self.subtype, self.server)) + f.write('%04x %s\n' % (self.subtype, server_wire)) + class NSECBASE(RR): '''Implements rendering NSEC/NSEC3 type bitmaps commonly used for these RRs.
The NSEC and NSEC3 classes will be inherited from this diff --git a/src/lib/util/tests/filename_unittest.cc b/src/lib/util/tests/filename_unittest.cc index be29ff18ea..07f3525a18 100644 --- a/src/lib/util/tests/filename_unittest.cc +++ b/src/lib/util/tests/filename_unittest.cc @@ -51,42 +51,49 @@ TEST_F(FilenameTest, Components) { EXPECT_EQ("/alpha/beta/", fname.directory()); EXPECT_EQ("gamma", fname.name()); EXPECT_EQ(".delta", fname.extension()); + EXPECT_EQ("gamma.delta", fname.nameAndExtension()); // Directory only fname.setName("/gamma/delta/"); EXPECT_EQ("/gamma/delta/", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ("", fname.extension()); + EXPECT_EQ("", fname.nameAndExtension()); // Filename only fname.setName("epsilon"); EXPECT_EQ("", fname.directory()); EXPECT_EQ("epsilon", fname.name()); EXPECT_EQ("", fname.extension()); + EXPECT_EQ("epsilon", fname.nameAndExtension()); // Extension only fname.setName(".zeta"); EXPECT_EQ("", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ(".zeta", fname.extension()); + EXPECT_EQ(".zeta", fname.nameAndExtension()); // Missing directory fname.setName("eta.theta"); EXPECT_EQ("", fname.directory()); EXPECT_EQ("eta", fname.name()); EXPECT_EQ(".theta", fname.extension()); + EXPECT_EQ("eta.theta", fname.nameAndExtension()); // Missing filename fname.setName("/iota/.kappa"); EXPECT_EQ("/iota/", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ(".kappa", fname.extension()); + EXPECT_EQ(".kappa", fname.nameAndExtension()); // Missing extension fname.setName("lambda/mu/nu"); EXPECT_EQ("lambda/mu/", fname.directory()); EXPECT_EQ("nu", fname.name()); EXPECT_EQ("", fname.extension()); + EXPECT_EQ("nu", fname.nameAndExtension()); // Check that the decomposition can occur in the presence of leading and // trailing spaces @@ -94,18 +101,21 @@ TEST_F(FilenameTest, Components) { EXPECT_EQ("lambda/mu/", fname.directory()); EXPECT_EQ("nu", fname.name()); EXPECT_EQ("", fname.extension()); + EXPECT_EQ("nu", 
fname.nameAndExtension()); // Empty string fname.setName(""); EXPECT_EQ("", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ("", fname.extension()); + EXPECT_EQ("", fname.nameAndExtension()); // ... and just spaces fname.setName(" "); EXPECT_EQ("", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ("", fname.extension()); + EXPECT_EQ("", fname.nameAndExtension()); // Check corner cases - where separators are present, but strings are // absent. @@ -113,16 +123,19 @@ TEST_F(FilenameTest, Components) { EXPECT_EQ("/", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ("", fname.extension()); + EXPECT_EQ("", fname.nameAndExtension()); fname.setName("."); EXPECT_EQ("", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ(".", fname.extension()); + EXPECT_EQ(".", fname.nameAndExtension()); fname.setName("/."); EXPECT_EQ("/", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ(".", fname.extension()); + EXPECT_EQ(".", fname.nameAndExtension()); // Note that the space is a valid filename here; only leading and trailing // spaces should be trimmed. @@ -130,11 +143,13 @@ TEST_F(FilenameTest, Components) { EXPECT_EQ("/", fname.directory()); EXPECT_EQ(" ", fname.name()); EXPECT_EQ(".", fname.extension()); + EXPECT_EQ(" .", fname.nameAndExtension()); fname.setName(" / . "); EXPECT_EQ("/", fname.directory()); EXPECT_EQ(" ", fname.name()); EXPECT_EQ(".", fname.extension()); + EXPECT_EQ(" .", fname.nameAndExtension()); } // Check that the expansion with a default works.
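The new `_check_format` helper in module_spec.py validates time strings with a round trip: parse the value with `time.strptime`, re-render it with `time.strftime`, and accept it only if the result equals the input. This rejects out-of-range values such as `2011-13-99` that a purely syntactic check would pass. A standalone sketch of the same idea (the `check_format` name and `TIME_FORMATS` table are illustrative copies, not part of the patch):

```python
import time

# Same format strings as _check_format in module_spec.py
TIME_FORMATS = {'date-time': "%Y-%m-%dT%H:%M:%SZ",
                'date':      "%Y-%m-%d",
                'time':      "%H:%M:%S"}

def check_format(value, format_name):
    """Round-trip check: a value is valid only if parsing and
    re-rendering it reproduces the original string."""
    fmt = TIME_FORMATS.get(format_name)
    if fmt is None:
        return False  # unknown format name
    try:
        return value == time.strftime(fmt, time.strptime(value, fmt))
    except (ValueError, TypeError):
        return False  # unparsable value, or not a string

print(check_format('2011-05-27T19:42:57Z', 'date-time'))  # True
print(check_format('2011-13-99', 'date'))                 # False
```

The round trip is what catches a string like `19:42:57Z` against the `time` format: `strptime` fails on the stray `Z`, so the value is rejected without any extra pattern matching.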
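For reference, the AFSDB RDATA rendered by the new `AFSDB` class in gen_wiredata.py.in is a 16-bit subtype followed by the server name in uncompressed wire form (RFC 1183), so RDLENGTH is 2 plus the wire length of the name. A quick sketch of that arithmetic (the `name_wire_len` helper is illustrative, not part of the patch):

```python
def name_wire_len(name):
    """Wire length of an uncompressed domain name: one length octet
    per label plus the label bytes, plus the terminating root octet."""
    labels = [l for l in name.rstrip('.').split('.') if l]
    return sum(1 + len(l) for l in labels) + 1

# Default AFSDB record in the patch: subtype=1, server=afsdb.example.com
# RDLENGTH = 2 (subtype) + 19 (name wire length)
print(2 + name_wire_len('afsdb.example.com'))  # 21
```

This matches what `AFSDB.dump` computes from `encode_name`'s hex output (two hex digits per octet), and explains why the toWire test specs override `rdlen` with `-1` to exercise length handling.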