The kasp system test has a call to sed to retrieve the key identifier
without leading zeros. The sed call could not handle key id 0.
Update the kasp test to also correctly deal with this case.
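For illustration (the exact sed expression used by the kasp test may
differ), stripping every leading zero maps a five-digit key id of
00000 to an empty string, while stripping at most four zeros leaves a
single "0":

    echo "00000" | sed 's/^0*//'        # prints an empty line
    echo "00000" | sed 's/^0\{0,4\}//'  # prints "0"
    echo "07484" | sed 's/^0\{0,4\}//'  # prints "7484"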
The autosign test has a test case where a DNSSEC-maintained zone
has a set of DNSSEC keys without any timing metadata set. It
tests whether named picks up the key for publication and signing
after a delayed dnssec-settime/loadkeys event has occurred.
The test failed intermittently despite the fact that it sleeps for
5 seconds while the triggered key reconfigure action should happen
after 3 seconds.
However, the test output showed that the test query came in before
the key reconfigure action was complete (see excerpts below).
The loadkeys command is received:
15:38:36 received control channel command 'loadkeys delay.example.'
The reconfiguring zone keys action is triggered after 3 seconds:
15:38:39 zone delay.example/IN: reconfiguring zone keys
15:38:39 DNSKEY delay.example/NSEC3RSASHA1/7484 (ZSK) is now published
15:38:39 DNSKEY delay.example/NSEC3RSASHA1/7455 (KSK) is now published
15:38:39 writing to journal
Two seconds later the test query comes in:
15:38:41 client @0x7f1b8c0562b0 10.53.0.1#44177: query
15:38:41 client @0x7f1b8c0562b0 10.53.0.1#44177: endrequest
And 6 more seconds later the reconfigure keys action is complete:
15:38:47 zone delay.example/IN: next key event: 05-Dec-2019 15:48:39
This commit fixes the test by checking that the "next key event" log
message has been seen before executing the test query, making sure
that the reconfigure keys action has completed.
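A rough sketch of the added synchronization; the log file path and
the retry limit here are illustrative, not necessarily what the test
uses:

    # Wait for the key reconfigure action to complete before querying:
    # retry until the "next key event" line shows up in the server log.
    for i in 1 2 3 4 5 6 7 8 9 10; do
        grep "zone delay.example/IN: next key event" ns3/named.run > /dev/null && break
        sleep 5
    done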
This commit, however, does not fix, nor explain, why it took such a
long time (8 seconds) to reconfigure the keys.
Clarify in the ARM that TTL-style options can also now take ISO
8601 durations.
Mention the built-in dnssec policies "default" and "none". Mention
that "none" is the default.
Add a file documenting the default dnssec-policy configuration options.
Fix dnssec-policy syntax in ARM (dnssec-policy.grammar.xml).
The first step in all existing setup.sh scripts is to call clean.sh. To
reduce code duplication and ensure all system tests added in the future
behave consistently with existing ones, invoke clean.sh from run.sh
before calling setup.sh.
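A minimal sketch of the resulting run.sh logic (variable names are
illustrative):

    # Clean up leftovers from any previous run before setting the test up.
    if test -f $systest/clean.sh; then
        ( cd $systest && $SHELL clean.sh )
    fi
    if test -f $systest/setup.sh; then
        ( cd $systest && $SHELL setup.sh )
    fi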
At the end of each system test suite run, the system test framework
collects all existing test.output files from system test subdirectories
and produces bin/tests/system/systests.output from those files.
However, it does not check whether a test.output file was found for
every executed test. Thus, if the test.output file is accidentally
deleted by the system test itself (e.g. due to an overly broad file
removal wildcard present in clean.sh), its output will not be included
in bin/tests/system/systests.output. Since the result of each system
test suite run is determined by bin/tests/system/testsummary.sh, which
only operates on the contents of bin/tests/system/systests.output, this
can lead to test failures being ignored. Fix by checking that the
number of test results found in bin/tests/system/systests.output
equals the number of tests run and by triggering a system test suite
failure in case of a discrepancy between these two values.
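A sketch of such a check, assuming one result line per test in
systests.output (the result-line pattern and variable names are
assumptions, not the exact ones used):

    # Compare the number of collected results with the number of tests run.
    expected=`echo $SUBDIRS | wc -w`
    found=`grep -c '^R:' systests.output`
    if [ $found -ne $expected ]; then
        echo "I:mismatch: $expected tests run, $found results collected"
        status=1
    fi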
Since the role of the bin/tests/system/clean.sh script has now been
reduced to calling a given system test's clean.sh script, remove the
former altogether and replace its only use with a direct invocation of
the latter.
Since files containing system test output are no longer stored in test
subdirectories, bin/tests/system/clean.sh no longer needs to take care
of removing the test.output file for a given test; testsummary.sh
already takes care of that. Even if a test suite terminates abnormally
and another one is started, tee invoked without the -a command line
switch overwrites the destination file if it exists, so leftover
test.output.* files from previous test suite runs are not a concern.
Remove the -r command line switch and the code associated with it from
the relevant scripts.
Some clean.sh scripts contain overly broad file deletion wildcards which
cause the test.output file (used by the system test framework for
collecting output) in a given system test's directory to be erroneously
removed immediately after the test is started (due to setup.sh scripts
calling clean.sh at the beginning). This prevents the test's output
from being placed in bin/tests/system/systests.output at the end of a
test suite run and thus can lead to test failures being ignored. Fix by
storing each test's output in a test.output.<test-name> file in
bin/tests/system/, which prevents clean.sh scripts from removing it (as
they should only ever affect files contained in a given system test's
directory).
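For illustration, the top-level output capture could look roughly like
this (the tests.sh entry point and variable names are assumptions):

    # Store the test's output outside its own directory so that clean.sh
    # cannot remove it; without -a, tee also overwrites any stale file
    # left behind by a previous, abnormally terminated run.
    ( cd $systest && $SHELL tests.sh ) 2>&1 | tee "test.output.$systest"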
When a function returns void, a call to it can be used as the argument
of a return statement in another function that also returns void,
e.g.:

    void in(void) {
        return;
    }

    void out(void) {
        return (in());
    }

While this is legal, it should be rewritten as:

    void out(void) {
        in();
        return;
    }

The semantic patch just finds the occurrences; they need to be fixed
by hand.
Since the introduction of durations, all ttlval configuration types
were replaced with durations. A duration is an ISO 8601 duration, a
TTL-style value, or a number. These two references were missed and
are now also replaced.
This commit makes some minor changes to the trust anchor code:
1. Replace the non-descriptive n1, n2, and n3 identifiers with the
slightly better rdata1, rdata2, and rdata3.
2. Fix an occurrence where the error log message printed a hard-coded
32 rather than the rdata3 length.
3. Add a default case to the switch statement checking DS digest
algorithms to catch unknown algorithms.
Previously, the fetchlimit test exercised the recursive-clients soft
limit, which is defined as 90% of the hard limit (the actual
configured value). This worked because the reaping of the oldest
recursive client was put on the same event queue as the current TCP
client, so the cleanup happened before the new TCP client established
a connection.
With the change in BIND 9.14 that added multiple event queues, the
cleanup of the oldest clients is no longer synchronous and can happen
stochastically, which made the soft limit testing fail often. The
situation became even worse with the new networking manager, so we
change the system test to fail only if the hard limit bound is not
honored.
Changing the accounting of the already reaped TCP clients so that soft
limit testing is possible again is out of scope for this change.
These two tests were failing basically because, in order for
prefetching to happen, the TTL of a given DNS record must be greater
than or equal to the prefetch config value + 9.
The previous TTL for both records was 10, while the prefetch value in
the configuration was 3, making only records with TTL >= 12 eligible
for prefetching.
The TTL for both records was adjusted to 13, and the prefetch value
was set to 4 (increased by 1), so records with TTL >= 13 (= 4 + 9) are
eligible for prefetching.
Adjusting the prefetch value to 4 gives the test 1 second more to
avoid timing problems when sharing resources on a heavily loaded
machine.
Also, the prefetch value in the settings is now read by the script and
used to correctly calculate the amount of time to wait before sending
a request that triggers prefetch, adding a bit of flexibility to
fine-tune the test in the future.
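A rough sketch of deriving the delay from the configured prefetch
trigger (file names and the exact arithmetic are illustrative):

    # Read the configured prefetch trigger instead of hard-coding it.
    prefetch=`sed -n 's/.*prefetch \([0-9][0-9]*\).*/\1/p' ns5/named.conf`
    ttl=13
    # Wait until the remaining TTL has dropped to the trigger value,
    # at which point a query may initiate a prefetch.
    sleep `expr $ttl - $prefetch`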
The previous test had two problems:
1. It wasn't written specifically for testing what it was supposed to:
prefetch disabled.
2. It could fail in some circumstances if the computer's load is too
high, due to sleeps not taking parallel tests and CPU load into
account.
The new test tests prefetch disabled as follows:
1. It asks for a TXT record for a given domain and takes note of the
record's TTL (which is 10).
2. It sleeps for (TTL - 5) = 5 seconds, leaving a window of 5 seconds
to issue new queries before the record expires from the cache.
3. Three (3) queries are executed in a row, with an interval of 1
second between them, and for each query we verify that the TTL in the
response is less than the previous one, thus ensuring that prefetch is
disabled (if it were enabled, this record would have been refreshed
already and the TTL would be >= the first TTL); a rough sketch is
shown below.
Having a window of 5 seconds to perform 3 queries with an interval of
1 second between them gives the test a reasonable amount of time to
not suffer from a machine with heavy load.
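A sketch of step 3 above; the server address, port, and record name
are illustrative, and a single TXT record is assumed:

    # Query three times, one second apart; with prefetch disabled the TTL
    # in each response must be strictly smaller than the previous one.
    prev=11   # one more than the TTL noted in step 1
    for i in 1 2 3; do
        ttl=`dig +noall +answer @10.53.0.5 -p 5300 txt.example. TXT | awk '{print $2}'`
        test "$ttl" -lt "$prev" || ret=1
        prev=$ttl
        sleep 1
    done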