mirror of https://gitlab.isc.org/isc-projects/bind9 synced 2025-08-28 13:08:06 +00:00

179 Commits

Author SHA1 Message Date
Ondřej Surý
091d738c72 Convert all categories and modules into static lists
Remove the complicated mechanism that could (in theory) be used by
external libraries to register new categories and modules, and replace
it with statically defined lists in <isc/log.h>.  This is similar to
what we have done for the <isc/result.h> result codes.  All the
libraries are now internal to BIND 9, so we don't need to provide a
mechanism for registering extra categories and modules.
2024-08-20 12:50:39 +00:00
Ondřej Surý
8506102216 Remove logging context (isc_log_t) from the public namespace
Now that the logging uses a single global context, remove isc_log_t
from the public namespace.
2024-08-20 12:50:39 +00:00
Ondřej Surý
5beae5faf9
Fix the glue table in the QP and RBT zone databases
When adding glue to a header, we add the header to the wait-free stack
to be cleaned up later, which sets wfc_node->next to a non-NULL value.
When the actual cleaning happens, we would only clean up the
.glue_list, but since the database isn't locked for the duration, the
headers could be reused while the existing glue entries were being
cleaned, which creates a data race between database versions.

Revert the code back to using a per-database-version hashtable whose
keys are the node pointers.  This allows each database version to have
an independent glue cache table that doesn't affect nodes or headers
that could already "belong" to a future database version.
2024-08-05 15:36:54 +02:00
Ondřej Surý
52b3d86ef0
Add a limit to the number of RR types for single name
Previously, the number of RR types for a single owner name was limited
only by the maximum number of types (64k).  As the data structure that
holds the RR types for a database node is just a linked list, and there
are places where we walk through the whole list (again and again),
adding a large number of RR types for a single owner name would slow
down processing of that name (database node).

Add a configurable limit to cap the number of RR types for a single
owner name.  This is enforced at the database (rbtdb, qpzone, qpcache)
level and configured with the new max-types-per-name option, which can
be set globally, per-view and per-zone.
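
A rough illustration of the kind of check this enables (simplified
types and hypothetical names, not the actual rbtdb/qpzone code; the
same pattern applies to the max-records-per-type change below):

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct typeheader {
            struct typeheader *next;
    } typeheader_t;

    typedef struct dbnode {
            typeheader_t *types; /* linked list of per-type slab headers */
    } dbnode_t;

    /* Refuse to add another RR type once the configured cap is reached;
     * maxtypes == 0 means "no limit". */
    static bool
    addtypeheader(dbnode_t *node, typeheader_t *newheader,
                  unsigned int maxtypes) {
            unsigned int ntypes = 0;

            for (typeheader_t *h = node->types; h != NULL; h = h->next) {
                    ntypes++;
            }
            if (maxtypes != 0 && ntypes >= maxtypes) {
                    return (false); /* over the max-types-per-name limit */
            }
            newheader->next = node->types;
            node->types = newheader;
            return (true);
    }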
2024-06-10 16:55:09 +02:00
Ondřej Surý
32af7299eb
Add a limit to the number of RRs in RRSets
Previously, the number of RRs in an RRset was internally unlimited.
As the data structure that holds the RRs is just a linked list, and
there are places where we walk through all of the RRs, adding an RRset
with a huge number of RRs would slow down processing of that RRset.

Add a configurable limit to cap the number of RRs in a single RRset.
This is enforced at the database (rbtdb, qpzone, qpcache) level and
configured with the new max-records-per-type option, which can be set
globally, per-view and per-zone.
2024-06-10 16:55:07 +02:00
Aydın Mercan
03a59cbb04 reinsert accidentally removed + in db trace
It only affects development when using `DNS_DB_TRACE`.
2024-05-17 18:11:23 -07:00
Ondřej Surý
6c54337f52 avoid a race in the qpzone getsigningtime() implementation
the previous commit introduced a possible race in getsigningtime()
where the rdataset header could change between being found on the
heap and being bound.

getsigningtime() now looks at the first element of the heap, gathers the
locknum, locks the respective lock, and retrieves the header from the
heap again.  If the locknum has changed, it will rinse and repeat.
Theoretically, this could spin forever; in practice it almost never
will, because heap changes on the zone are very rare.

we simplify matters further by changing the dns_db_getsigningtime()
API call. instead of passing back a bound rdataset, we pass back the
information the caller actually needed: the resigning time, owner name
and type of the rdataset that was first on the heap.
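
a rough sketch of the retry loop described above (all names here, such
as heap_peek() and the node lock array, are hypothetical stand-ins
rather than the actual qpzone code):

    for (;;) {
            header = heap_peek(qpdb->heap); /* peek without a node lock */
            if (header == NULL) {
                    return (ISC_R_NOTFOUND); /* nothing left to resign */
            }
            locknum = header->node->locknum;
            LOCK(&qpdb->node_locks[locknum]);
            if (heap_peek(qpdb->heap) == header &&
                header->node->locknum == locknum) {
                    /* still the top header, under the right lock: copy out
                     * the resigning time, owner name and type, and stop */
                    *resign = header->resign;
                    *name = header->node->name;
                    *typepair = header->type;
                    UNLOCK(&qpdb->node_locks[locknum]);
                    return (ISC_R_SUCCESS);
            }
            /* the heap top (or the node's lock number) changed underneath
             * us: unlock, rinse and repeat */
            UNLOCK(&qpdb->node_locks[locknum]);
    }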
2024-04-25 15:48:43 -07:00
Evan Hunt
5709f7bad9 rename qpdb to qpcache
move qpdb.c to qpcache.c and rename the "qp" database implementation
to "qpcache", in order to make it more clearly distinguishable from
"qpzone".
2024-03-08 15:36:56 -08:00
Evan Hunt
be24feb252 stub dns_qpmulti-based zone database implementation
created files for a dns_qpmulti-based zone database, "qpzone".
currently this only has create and destroy functions.
2024-03-06 20:57:31 -08:00
Evan Hunt
89c4c1aa87 add dns_db_nodefullname()
the dyndb test requires a mechanism to retrieve the name associated
with a database node, and since the database no longer uses RBT for
its underlying storage, dns_rbt_fullnamefromnode() doesn't work.
addressed this by adding dns_db_nodefullname() to the database API.
2024-03-06 10:49:02 +01:00
Evan Hunt
bb4464181a switch database defaults from "rbt" to "qp"
replace the string "rbt" throughout BIND with "qp" so that
qpdb databases will be used by default instead of rbtdb.
rbtdb databases can still be used by specifying "database rbt;"
in a zone statement.
2024-03-06 09:57:24 +01:00
Evan Hunt
845f832308 rename dns_rbtdb to dns_qpdb
this commit renames all variables and macros with the string "rbtdb"
or "RBTDB" to "qpdb" or "QPDB".
2024-03-06 09:57:24 +01:00
Matthijs Mekking
2edf73dc05 Begin replacement of rbt with qp in rbtdb
- Copied rbtdb.c, rbt-zonedb.c and rbt-cachedb.c to qp-*.
- Added qpmethods.
- Added a new structure, dns_qpdata, that will replace dns_rbtnode.
- Replaced the normal, nsec, and nsec3 dns_rbt trees with dns_qp tries.
- Replaced dns_rbt_create() calls with dns_qp_create().
- Replaced the dns_rbt_destroy() call with dns_qp_destroy().
- Created a dns_qpdata struct and create/destroy methods.

This commit will not build.
2024-03-06 09:57:24 +01:00
Evan Hunt
e40fd4ed06 fix several bugs in the RBTDB dbiterator implementation
- the DNS_DB_NSEC3ONLY and DNS_DB_NONSEC3 flags are mutually
  exclusive; it never made sense to set both at the same time.
  to enforce this, it is now a fatal error to do so.  the
  dbiterator implementation has been cleaned up to remove
  code that treated the two as independent: if nonsec3 is
  true, we can be certain nsec3only is false, and vice versa.
- previously, iterating a database backwards omitted
  NSEC3 records even if DNS_DB_NONSEC3 had not been set. this
  has been corrected.
- when an iterator reaches the origin node of the NSEC3 tree, we
  need to skip over it and go to the next node in the sequence.
  the NSEC3 origin node is there for housekeeping purposes and
  never contains data.
- the dbiterator_test unit test has been expanded, and several
  incorrect expectations have been fixed. (for example, the
  expected number of iterations has been reduced by one; we were
  previously counting the NSEC3 origin node and we should not
  have been doing so.)
2024-02-15 10:15:50 -08:00
Evan Hunt
27c862d953 separate generic DB helpers into db_p.h
when the QPDB is implemented, we will need to have both qpdb_p.h and
rbtdb_p.h. in order to prevent name collisions or code duplication,
this commit adds a generic private header file, db_p.h, containing
structures and macros that will be used by both databases.

some functions and structs have been renamed to more specifically refer
to the RBT database, in order to avoid namespace collision with similar
things that will be needed by the QPDB later.
2024-02-14 09:00:27 +01:00
Ondřej Surý
a1afa31a5a
Use cds_lfht for updatenotify mechanism in dns_db unit
The updatenotify mechanism in dns_db relied on an unlocked ISC_LIST
for adding and removing the "listeners".  The mechanism relied on the
exclusive mode - it should have been updated only during
reconfiguration of the server.  This turned out to no longer be true in
dns_catz - the updatenotify list could be updated during offloaded
work, as the offloaded threads are not subject to the exclusive mode.

Change the update_listeners to be a cds_lfht (lock-free hash table),
and slightly refactor how the callbacks are registered and unregistered
- the calls are now idempotent (the register call already was, and the
return value of the unregister function was mostly ignored by the
callers).
2023-07-31 18:11:34 +02:00
Evan Hunt
445ef1d033
move slab rdataset implementation to rdataslab.c
ultimately we want the slab implementation of dns_rdataset to
be usable by more database implementations than just rbtdb. this
commit moves rdataset_methods to rdataslab.c, renaming it
dns_rdataslab_rdatasetmethods.

new database methods have been added: locknode, unlocknode,
addglue, expiredata, and deletedata, allowing external functions to
perform operations that previously required internal access to the
database implementation.

database and heap pointers are now stored in the dns_slabheader object
so that the header is the only thing that needs to be passed to some
functions; this will simplify moving functions that process slabheaders
out of rbtdb.c so they can be used by other database implementations.
2023-07-17 14:50:25 +02:00
Evan Hunt
17f85f6c93
move prototypes for common functions to rbtdb_p.h
rename the existing rbtdb.h to rbtdb_p.h, and start putting
macros and declarations of dns__rbtdb functions into it.
2023-07-17 14:50:25 +02:00
Evan Hunt
4db150437e
clean up unused dns_db methods
to reduce the amount of common code that will need to be shared
between the separated cache and zone database implementations,
clean up unused portions of dns_db.

the methods dns_db_dump(), dns_db_isdnssec(), dns_db_printnode(),
dns_db_resigned(), dns_db_expirenode() and dns_db_overmem() were
either never called or were only implemented as nonoperational stub
functions: they have now been removed.

dns_db_nodefullname() was only used in one place, which turned out
to be unnecessary, so it has also been removed.

dns_db_ispersistent() and dns_db_transfernode() are used, but only
the default implementation in db.c was ever actually called. since
they were never overridden by database methods, there's no need to
retain methods for them.

in rbtdb.c, beginload() and endload() methods are no longer defined for
the cache database, because that was never used (except in a few unit
tests which can easily be modified to use the zone implementation
instead).  issecure() is also no longer defined for the cache database,
as the cache is always insecure and the default implementation of
dns_db_issecure() returns false.

for similar reasons, hashsize() is no longer defined for zone databases.

implementation functions that are shared between zone and cache are now
prepended with 'dns__rbtdb_' so they can become nonstatic.

serve_stale_ttl is now a common member of dns_db.
2023-07-17 14:50:25 +02:00
Ondřej Surý
cd632ad31d
Implement dns_db node tracing
This implements node reference tracing that passes through all the
internal layers from the dns_db API (and friends) to
increment_reference() and decrement_reference().

It can be enabled by #defining DNS_DB_NODETRACE in the <dns/trace.h>
header.

The output then looks like this:

    incr:node:check_address_records:rootns.c:409:0x7f67f5a55a40->references = 1
    decr:node:check_address_records:rootns.c:449:0x7f67f5a55a40->references = 0

    incr:nodelock:check_address_records:rootns.c:409:0x7f67f5a55a40:0x7f68304d7040->references = 1
    decr:nodelock:check_address_records:rootns.c:449:0x7f67f5a55a40:0x7f68304d7040->references = 0

There's an associated Python script for finding missing detaches,
located at:
https://gitlab.isc.org/isc-projects/bind9/-/snippets/1038
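
A minimal sketch of how such tracing macros can be built (illustrative
only, not the actual macros from <dns/trace.h>):

    #include <stdatomic.h>
    #include <stdio.h>

    typedef struct node {
            atomic_uint references;
    } node_t;

    /* Log caller, location, object pointer and the new reference count on
     * every increment/decrement, mirroring the output format shown above. */
    #define NODE_TRACE_INCR(obj)                                          \
            fprintf(stderr, "incr:node:%s:%s:%d:%p->references = %u\n",   \
                    __func__, __FILE__, __LINE__, (void *)(obj),          \
                    atomic_fetch_add(&(obj)->references, 1) + 1)

    #define NODE_TRACE_DECR(obj)                                          \
            fprintf(stderr, "decr:node:%s:%s:%d:%p->references = %u\n",   \
                    __func__, __FILE__, __LINE__, (void *)(obj),          \
                    atomic_fetch_sub(&(obj)->references, 1) - 1)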
2023-02-28 11:44:15 +01:00
Evan Hunt
77e7eac54c enable detailed db tracing
move database attach/detach functions to db.c instead of
requiring them to be implemented for every database type.
each implementation must now provide a 'destroy' function that is
called when references go to zero.

this enables us to use ISC_REFCOUNT_IMPL for databases,
with detailed tracing enabled by setting DNS_DB_TRACE to 1.
2023-02-21 10:13:10 -08:00
Evan Hunt
8da43bb7f5 simplify dns_sdb API
SDB is currently (and foreseeably) only used by the named
builtin databases, so it only needs as much of its API as
those databases use.

- removed three flags defined for the SDB API that were always
  set the same way by the builtin databases.

- there were two different types of lookup functions defined for
  SDB, using slightly different function signatures. since backward
  compatibility is no longer a concern, we can eliminate the 'lookup'
  entry point and rename 'lookup2' to 'lookup'.

- removed the 'allnodes' entry point and all database iterator
  implementation code

- removed dns_sdb_putnamedrr() and dns_sdb_putnamedrdata() since
  they were never used.
2023-02-21 10:13:10 -08:00
Evan Hunt
8036412aaa make fewer dns_db functions mandatory-to-implement
some dns_db functions would have crashed if the DB implementation failed
to implement them, requiring the implementations to add functions that
did nothing but return ISC_R_NOTIMPLEMENTED or some obvious default
value. we can just have the dns_db wrapper functions themselves return
those values, and clean up the implementations accordingly.
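
a rough sketch of the wrapper pattern (simplified, hypothetical types;
not the actual db.c code):

    #include <stddef.h>

    typedef int result_t;
    #define R_SUCCESS        0
    #define R_NOTIMPLEMENTED 1

    typedef struct db db_t;

    typedef struct dbmethods {
            /* optional: may be left NULL by a database implementation */
            result_t (*dump)(db_t *db, const char *filename);
    } dbmethods_t;

    struct db {
            const dbmethods_t *methods;
    };

    /* The wrapper supplies the default instead of crashing on a NULL slot. */
    result_t
    db_dump(db_t *db, const char *filename) {
            if (db->methods->dump == NULL) {
                    return (R_NOTIMPLEMENTED);
            }
            return (db->methods->dump(db, filename));
    }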
2023-02-21 10:13:10 -08:00
Tony Finch
97b64f4970 Remove deprecated dns_db_rpz_*() methods
As well as the function wrappers, their slots have been removed from
the dns_dbmethods table.
2023-02-15 15:35:50 +00:00
Ondřej Surý
6ffda5920e
Add the reader-writer synchronization with modified C-RW-WP
This changes the internal isc_rwlock implementation to:

  Irina Calciu, Dave Dice, Yossi Lev, Victor Luchangco, Virendra
  J. Marathe, and Nir Shavit.  2013.  NUMA-aware reader-writer locks.
  SIGPLAN Not. 48, 8 (August 2013), 157–166.
  DOI:https://doi.org/10.1145/2517327.24425

(The full article available from:
  http://mcg.cs.tau.ac.il/papers/ppopp2013-rwlocks.pdf)

The implementation is based on the Writer-Preference Lock (C-RW-WP)
variant (see section 3.4 of the paper for the rationale).

The implemented algorithm has been modified for simplicity and for usage
patterns in rbtdb.c.

The changes compared to the original algorithm:

  * We haven't implemented the cohort locks because that would require
    knowledge of NUMA nodes; instead, a simple atomic_bool is used as
    the synchronization point for the writer lock.

  * The per-thread reader counters are not being used - this would
    require the internal thread id (isc_tid_v) to always be
    initialized, even in the utilities.  This change has a slight
    performance penalty, so we might revisit it in the future.
    However, it also saves a lot of memory, because cache-line-aligned
    counters were used, so on a 32-core machine the rwlock would have
    been 4096+ bytes in size.

  * The readers use a writer_barrier that is raised when the read lock
    cannot be acquired for a while, to prevent reader starvation.

  * Separate ingress and egress reader counters are used to reduce both
    inter- and intra-thread contention (a rough sketch of the resulting
    fast path follows below).
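
A rough sketch of the modified reader/writer fast path described above
(illustrative only; the real isc_rwlock code differs in many details,
and the writer_barrier used to throttle writers is omitted here):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <sched.h>

    typedef struct rwlock {
            atomic_bool writer;  /* writer-lock synchronization point */
            atomic_uint ingress; /* readers that entered */
            atomic_uint egress;  /* readers that left */
    } rwlock_t;

    static void
    read_lock(rwlock_t *rwl) {
            for (;;) {
                    atomic_fetch_add(&rwl->ingress, 1);
                    if (!atomic_load(&rwl->writer)) {
                            return; /* no writer: read lock acquired */
                    }
                    /* back out and wait for the writer to finish */
                    atomic_fetch_add(&rwl->egress, 1);
                    while (atomic_load(&rwl->writer)) {
                            sched_yield();
                    }
            }
    }

    static void
    read_unlock(rwlock_t *rwl) {
            atomic_fetch_add(&rwl->egress, 1);
    }

    static void
    write_lock(rwlock_t *rwl) {
            bool expected = false;
            /* serialize writers on the single flag */
            while (!atomic_compare_exchange_weak(&rwl->writer, &expected,
                                                 true)) {
                    expected = false;
                    sched_yield();
            }
            /* wait until every reader that entered has also left */
            while (atomic_load(&rwl->ingress) != atomic_load(&rwl->egress)) {
                    sched_yield();
            }
    }

    static void
    write_unlock(rwlock_t *rwl) {
            atomic_store(&rwl->writer, false);
    }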
2023-02-15 09:30:04 +01:00
Mark Andrews
7695c36a5d Extend dns_db_allrdatasets to control iteration results
Add an options parameter to control which rdatasets are returned when
iterating over the node.  Specific modes will be added later.
2022-12-07 22:20:02 +00:00
Mark Andrews
f13e71e551 Suppress duplicate dns_db_updatenotify_register registrations
Duplicate dns_db_updatenotify_register registrations need to be
suppressed to ensure that dns_db_updatenotify_unregister is successful.
2022-12-07 09:04:08 +11:00
Evan Hunt
09ee254514 change dns_db_settask() to _setloop()
The mechanism for associating a worker task to a database now
uses loops rather than tasks.

For this reason, the parameters to dns_cache_create() have been
updated to take a loop manager rather than a task manager.
2022-11-30 11:47:35 -08:00
Michal Nowak
afdb41a5aa
Update sources to Clang 15 formatting 2022-11-29 08:54:34 +01:00
Ondřej Surý
cedfc97974 Improve reporting for pthread_once errors
Replace all uses of RUNTIME_CHECK() in lib/isc/include/isc/once.h with
PTHREADS_RUNTIME_CHECK(), in order to improve error reporting for any
once-related run-time failures (by augmenting error messages with
file/line/caller information and the error string corresponding to
errno).
2022-10-14 16:39:21 +02:00
Ondřej Surý
20f0936cf2 Remove use of the inline keyword used as suggestion to compiler
Historically, the inline keyword was a strong suggestion to the compiler
that it should inline the function marked inline.  As compilers became
better at optimising, this functionality has receded, and using inline
as a suggestion to inline a function is obsolete.  The compiler will
happily ignore it and inline something else entirely if it finds that's
a better optimisation.

Therefore, remove all occurrences of the inline keyword on static
functions inside a single compilation unit and leave the decision
whether or not to inline a function entirely to the compiler.

NOTE: We keep the inline keyword when the purpose is to change
the linkage behaviour.
2022-03-25 08:33:43 +01:00
Ondřej Surý
58bd26b6cf Update the copyright information in all files in the repository
This commit converts the license handling to adhere to the REUSE
specification.  It specifically:

1. Adds the used licenses to the LICENSES/ directory

2. Adds an "isc" template for adding the copyright boilerplate

3. Changes all source files to include a copyright and SPDX license
   header; this includes all the C sources, documentation, zone files,
   and configuration files.  There are notes in the doc/dev/copyrights
   file on how to add correct headers to new files.

4. Handles the rest that can't be modified via the .reuse/dep5 file.
   The binary (or otherwise unmodifiable) files could have licenses
   placed next to them in <foo>.license files, but this would lead to a
   cluttered repository, and most of the files handled in the
   .reuse/dep5 file are system test files.
2022-01-11 09:05:02 +01:00
Mark Andrews
85bfcaeb2e
Extend dns_db_nodecount to access auxiliary rbt node counts
dns_db_nodecount can now be used to get counts from the auxiliary
rbt databases.  The existing node count is returned by
tree=dns_dbtree_main; the nsec and nsec3 node counts by dns_dbtree_nsec
and dns_dbtree_nsec3 respectively.
2021-12-02 14:18:41 +01:00
Ondřej Surý
8c819ec366 dns/rbt.c: Implement incremental hash table resizing
Originally, the hash table used in the RBT database would be resized
when it reached a certain number of elements (defined by overcommit).
This was causing resolution brownouts for busy resolvers, because the
rehashing could take several seconds to complete.  This was mitigated
by pre-allocating the hash table in the RBT database used for caching
to be large enough, as determined by max-cache-size.  The downside of
this solution was that the pre-allocated hash table could take a
significant chunk of memory even when the resolver cache would
otherwise be empty, because the default value for max-cache-size is
90% of available memory.

Implement incremental resizing[1] to perform the rehashing gradually:

 1. During the resize, allocate the new hash table, but keep the old
    table unchanged.
 2. In each lookup or delete operation, check both tables.
 3. Perform insertion operations only in the new table.
 4. At each insertion also move r elements from the old table to the new
    table.
 5. When all elements are removed from the old table, deallocate it.

To ensure that the old table is completely copied over before the new
table itself needs to be enlarged, it is necessary to increase the
size of the table by a factor of at least (r + 1)/r during resizing.

In our implementation r is equal to 1.

The downside of this approach is that the old table and the new table
can stay in memory for longer when there are no new insertions into
the hash table for prolonged periods of time, as the incremental
rehashing happens only during insertions.

The upside of this approach is that it's no longer necessary to
pre-allocate a large hash table, because the RBT hash table rehashing
no longer causes resolution brownouts, and thus we can use the
memory as needed.

1. https://en.m.wikipedia.org/wiki/Hash_table#Dynamic_resizing
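
A condensed sketch of the incremental scheme with r = 1 (illustrative
only; keys here are plain integers rather than rbt nodes):

    #include <stdlib.h>

    typedef struct entry {
            unsigned int key;
            struct entry *next;
    } entry_t;

    typedef struct table {
            entry_t **old_slots; /* kept until fully drained, else NULL */
            size_t old_size;
            entry_t **new_slots;
            size_t new_size;
            size_t old_index;    /* next old bucket to drain */
    } table_t;

    /* Move one entry from the old table to the new table; with r = 1
     * this is done once per insertion. */
    static void
    migrate_step(table_t *t) {
            if (t->old_slots == NULL) {
                    return;
            }
            while (t->old_index < t->old_size &&
                   t->old_slots[t->old_index] == NULL) {
                    t->old_index++;
            }
            if (t->old_index < t->old_size) {
                    entry_t *e = t->old_slots[t->old_index];
                    t->old_slots[t->old_index] = e->next;
                    e->next = t->new_slots[e->key % t->new_size];
                    t->new_slots[e->key % t->new_size] = e;
            } else {
                    free(t->old_slots); /* old table fully drained */
                    t->old_slots = NULL;
            }
    }

    /* Lookups consult both tables while the migration is in progress. */
    static entry_t *
    lookup(table_t *t, unsigned int key) {
            for (entry_t *e = t->new_slots[key % t->new_size]; e != NULL;
                 e = e->next) {
                    if (e->key == key) {
                            return (e);
                    }
            }
            if (t->old_slots != NULL) {
                    for (entry_t *e = t->old_slots[key % t->old_size];
                         e != NULL; e = e->next) {
                            if (e->key == key) {
                                    return (e);
                            }
                    }
            }
            return (NULL);
    }

    /* Insertions always go into the new table and advance the migration. */
    static void
    insert(table_t *t, entry_t *e) {
            e->next = t->new_slots[e->key % t->new_size];
            t->new_slots[e->key % t->new_size] = e;
            migrate_step(t);
    }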
2021-10-12 15:01:53 +02:00
Ondřej Surý
2e3a2eecfe Make isc_result a static enum
Remove the dynamic registration of result codes.  Convert isc_result_t
from unsigned + #defines into a 32-bit enum type in the grand unified
<isc/result.h> header.  Keep the existing values of the result codes
even at the expense of the description and identifier tables being
unnecessarily large.

Additionally, add a couple of:

    switch (result) {
    [...]
    default:
        break;
    }

statements where the compiler now complains about missing enum values
in the switch statement.
2021-10-06 11:22:20 +02:00
Evan Hunt
cd8a081a4f Remove libdns init/shutdown functions
as libdns is no longer exported, it's not necessary to have
init and shutdown functions. the only purpose they served
was to create a private mctx and run dst_lib_init(), which
can be called directly instead.
2021-10-04 13:57:32 -07:00
Ondřej Surý
edee9440d0 Remove the masterfile-format map option
As previously announced, this commit removes the masterfile-format
'map' from named, all the tools, the documentation and the
system tests.
2021-09-17 07:09:50 +02:00
Mark Andrews
3b11bacbb7 Cleanup redundant isc_rwlock_init() result checks 2021-02-03 12:22:33 +11:00
Diego Fronza
4827ad0ec4 Add stale-refresh-time option
Before this update, BIND would attempt a full recursive resolution
for each query received if the requested rrset had an expired TTL.
Only if the resolution failed for some reason would BIND check for a
stale rrset in the cache (if 'stale-cache-enable' and
'stale-answer-enable' are on).

The problem with this approach is that if an authoritative server is
unreachable or is failing to respond, it is very unlikely that the
problem will be fixed within the next few seconds.

A better approach to improve performance in those cases is to mark the
moment at which a resolution failed and, if new queries arrive for that
same rrset, to respond directly from the stale cache for a window of
time configured via 'stale-refresh-time'.

Only when this interval expires do we try a normal refresh of
the rrset.

The logic behind this commit is as follows:

- In query.c / query_gotanswer(), if the test of the 'result' variable
  falls through to the default case, an error is assumed to have
  happened, and a call to 'query_usestale()' is made to check whether
  serving stale rrsets is enabled in the configuration.

- If serving of stale answers is enabled, a flag will be turned on in
  the query context to look for stale records:
  query.c:6839
  qctx->client->query.dboptions |= DNS_DBFIND_STALEOK;

- A call to query_lookup() is made again; inside it, a call to
  'dns_db_findext()' is made, which in turn invokes rbtdb.c /
  cache_find().

- In rbtdb.c / cache_find(), the important bit of this change is the
  call to 'check_stale_header()', a function that yields true
  if we should skip the stale entry, or false if we should consider it.

- In check_stale_header() we now check whether the DNS_DBFIND_STALEOK
  option is set; if that is the case, we know that this new search for
  stale records was made due to a failure in a normal resolution, so we
  keep track of the time at which the failure occurred in rbtdb.c:4559:
  header->last_refresh_fail_ts = search->now;

- In check_stale_header(), if DNS_DBFIND_STALEOK is not set, then we
  know this is a normal lookup; if the record is stale and the query
  time is within the last failure time + stale-refresh-time window,
  then we return false so cache_find() knows it can consider this stale
  rrset entry to return as a response (a simplified sketch of this
  check follows below).

The last additions are two new methods to the database interface:
- setservestale_refresh
- getservestale_refresh

Those were added so rbtdb can be aware of the value set in the
configuration option, since at that level we have no access to the view
object.
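
A simplified sketch of the check described above (illustrative only;
field and parameter names are stand-ins for the rbtdb internals):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct header {
            uint32_t last_refresh_fail_ts; /* when the last resolution failed */
    } header_t;

    /* Returns true if the stale entry should be skipped, false if it may
     * be used to answer the query. */
    static bool
    skip_stale_header(header_t *header, uint32_t now,
                      uint32_t stale_refresh_time, bool staleok) {
            if (staleok) {
                    /* lookup after a failed resolution: remember when the
                     * failure happened and allow the stale data to be used */
                    header->last_refresh_fail_ts = now;
                    return (false);
            }
            if (stale_refresh_time > 0 && header->last_refresh_fail_ts > 0 &&
                now < header->last_refresh_fail_ts + stale_refresh_time) {
                    /* still inside the stale-refresh window: serve stale
                     * directly instead of retrying the resolution */
                    return (false);
            }
            return (true); /* outside the window: skip and refresh normally */
    }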
2020-11-11 12:53:23 -03:00
Evan Hunt
dcee985b7f update all copyright headers to eliminate the typo 2020-09-14 16:20:40 -07:00
Ondřej Surý
e24bc324b4 Fix the rbt hashtable and grow it when setting max-cache-size
There were several problems with the rbt hashtable implementation:

1. Our internal hashing function returns a uint64_t value, but it was
   silently truncated to unsigned int in the dns_name_hash() and
   dns_name_fullhash() functions.  As the higher bits of SipHash 2-4
   are more random, we need to use the upper half of the return value.

2. The hashtable implementation in rbt.c was using modulo to pick the
   slot number for the hash table.  This has several problems because
   modulo is: a) slow, b) oblivious to patterns in the input data.  This
   could lead to a very uneven distribution of the hashed data in the
   hashtable.  Combined with the single-linked lists we use, it could
   really bog down the lookup and removal of nodes from the rbt
   tree[a].  Fibonacci hashing is a much better fit for the hashtable
   function here (a small illustration follows after the notes below).
   For a longer description, read "Fibonacci Hashing: The Optimization
   that the World Forgot"[b] or just look at the Linux kernel.  Also
   this will make Diego very happy :).

3. The hashtable would rehash every time the number of nodes in the rbt
   tree exceeded 3 * (hashtable size).  The overcommit makes the uneven
   distribution in the hashtable even worse, but the main problem lies
   in the rehashing - every time the database grows beyond the limit,
   each subsequent rehashing will be much slower.  The mitigation here
   is letting the rbt know how big the cache can grow and
   pre-allocating the hashtable to be big enough to never actually need
   to rehash.  This will consume more memory at the start, but since
   the size of the hashtable is capped at `1 << 32` (i.e. roughly 4
   billion entries), it will only consume a maximum of 32GB of memory
   for the hashtable in the worst case (and max-cache-size would need
   to be set to more than 4TB).  Calling dns_db_adjusthashsize() will
   also cap the maximum size of the hashtable to the pre-computed
   number of bits, so it won't try to consume more gigabytes of memory
   than available for the database.

   FIXME: What is the average size of the rbt node that gets hashed?  I
   chose the pagesize (4k) as initial value to precompute the size of
   the hashtable, but the value is based on feeling and not any real
   data.

For future work, there are more places where we use the result of the
hash value modulo some small number; those would benefit from Fibonacci
hashing to get a better distribution.

Notes:
a. A doubly linked list should be used here to speed up the removal of
   entries from the hashtable.
b. https://probablydance.com/2018/06/16/fibonacci-hashing-the-optimization-that-the-world-forgot-or-a-better-alternative-to-integer-modulo/
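
For reference, Fibonacci hashing boils down to multiplying by 2^64
divided by the golden ratio and keeping the top bits of the product.  A
rough illustration (not the rbt.c code; bits must be between 1 and 32):

    #include <stdint.h>

    /* 2^64 / phi, rounded to an odd constant. */
    #define GOLDEN_RATIO_64 UINT64_C(0x9E3779B97F4A7C15)

    /* Map a 64-bit hash value to a table of size 2^bits, using the upper
     * bits of the product, which are the best-mixed ones. */
    static uint32_t
    hash_to_slot(uint64_t hashval, unsigned int bits) {
            return ((uint32_t)((hashval * GOLDEN_RATIO_64) >> (64 - bits)));
    }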
2020-07-21 08:44:26 +02:00
Evan Hunt
68a1c9d679 change 'expr == true' to 'expr' in conditionals 2020-05-25 16:09:57 -07:00
Evan Hunt
e851ed0bb5 apply the modified style 2020-02-13 15:05:06 -08:00
Ondřej Surý
056e133c4c Use clang-tidy to add curly braces around one-line statements
The command used to reformat the files in this commit was:

./util/run-clang-tidy \
	-clang-tidy-binary clang-tidy-11 \
	-clang-apply-replacements-binary clang-apply-replacements-11 \
	-checks=-*,readability-braces-around-statements \
	-j 9 \
	-fix \
	-format \
	-style=file \
	-quiet
clang-format -i --style=format $(git ls-files '*.c' '*.h')
uncrustify -c .uncrustify.cfg --replace --no-backup $(git ls-files '*.c' '*.h')
clang-format -i --style=format $(git ls-files '*.c' '*.h')
2020-02-13 22:07:21 +01:00
Ondřej Surý
36c6105e4f Use coccinelle to add braces to nested single line statement
Both clang-tidy and uncrustify choke on statements like this:

for (...)
	if (...)
		break;

This commit uses a very simple semantic patch (below) to add braces around such
statements.

Semantic patch used:

@@
statement S;
expression E;
@@

while (...)
- if (E) S
+ { if (E) { S } }

@@
statement S;
expression E;
@@

for (...;...;...)
- if (E) S
+ { if (E) { S } }

@@
statement S;
expression E;
@@

if (...)
- if (E) S
+ { if (E) { S } }
2020-02-13 21:58:55 +01:00
Ondřej Surý
f50b1e0685 Use clang-format to reformat the source files 2020-02-12 15:04:17 +01:00
Ondřej Surý
a6dcdc535c Replace usage of isc_mem_put+isc_mem_detach with isc_mem_putanddetach
Using isc_mem_put(mctx, ...) + isc_mem_detach(mctx) required juggling
local variables when the mctx was part of the freed object.  The
isc_mem_putanddetach() function can handle this case internally, but it
wasn't used everywhere.  This commit applies semantic patching plus a
bit of manual work to replace all such occurrences with proper usage of
isc_mem_putanddetach().
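
An illustrative before/after (the object type and its mctx member are
hypothetical; the isc_mem calls shown are the ones named above):

    /* Before: the mctx lives inside the object being freed, so it has to
     * be saved in a local variable before the object goes away. */
    isc_mem_t *mctx = obj->mctx;
    isc_mem_put(mctx, obj, sizeof(*obj));
    isc_mem_detach(&mctx);

    /* After: one call puts the memory and detaches the reference. */
    isc_mem_putanddetach(&obj->mctx, obj, sizeof(*obj));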
2019-07-31 10:26:40 +02:00
Ondřej Surý
ae83801e2b Remove blocks checking whether isc_mem_get() failed using the coccinelle 2019-07-23 15:32:35 -04:00
Ondřej Surý
78d0cb0a7d Use coccinelle to remove explicit '#include <config.h>' from the source files 2019-03-08 15:15:05 +01:00
Witold Kręcicki
70a1ba20ec QNAME minimization should create a separate fetch context for each fetch -
this makes the cache more efficient and eliminates duplicate queries.
2018-10-23 12:15:04 +00:00