# BIND 9 Memory Explained
## The Basics
The basic BIND 9 memory management object is a memory context; an application can have as many as is practical. There are two reasons for using separate memory contexts: a) logical separation - this includes both separate accounting and different configuration; and b) contention and speed - access to a memory context pinned to a specific thread will not be blocked by other threads.
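
As a rough illustration of the accounting idea only (a toy Python sketch, not BIND's actual `isc_mem` C API), each context tracks its own usage independently and can optionally carry its own limit:

```python
# Toy illustration of per-context accounting: each context counts its own
# allocations, so usage can be reported (and capped) separately per context.
class MemoryContext:
    def __init__(self, name, limit=None):
        self.name = name
        self.limit = limit      # optional cap, like max-cache-size
        self.inuse = 0          # bytes currently allocated from this context

    def allocate(self, size):
        if self.limit is not None and self.inuse + size > self.limit:
            raise MemoryError(f"{self.name}: over limit")
        self.inuse += size
        return bytearray(size)  # stand-in for a real allocation

    def free(self, size):
        self.inuse -= size

cache = MemoryContext("cache", limit=1024)
loop = MemoryContext("loop")    # unrestrained, like most BIND contexts
buf = cache.allocate(512)
loop.allocate(256)
print(cache.inuse, loop.inuse)  # separate accounting: 512 256
```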
## Limiting memory use
The configuration option `max-cache-size` only affects the memory contexts in the cache and ADB (address database). All other memory contexts are unrestrained. This means that setting `max-cache-size` to 100% would lead to the OOM killer finding your BIND 9 process and killing it.
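
Because of this, the cache limit should leave headroom for everything else; for example, a fixed cap rather than a percentage (the `2g` value below is illustrative, not a recommendation):

```
options {
    max-cache-size 2g;
};
```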
### BIND 9.16 uses more memory than BIND 9.11
There are two reasons for this:
1. The networking model has changed. In BIND 9.11 there was a single "listener" that distributed the incoming work between idle threads. This simpler model was slower, but because there was a single listener, it also consumed less memory.
2. BIND 9.16 uses a hybrid of the new and old networking code. It uses the new networking code to receive and process incoming DNS messages (from clients), but it still uses the older networking code for sending and processing outgoing DNS messages (to other servers). This means it needs to run twice as many threads - there's a thread pool of workers for each function.
### BIND 9.18 uses less memory than BIND 9.16
BIND 9.18 uses less memory than 9.16, similar to the memory usage in 9.11. The part that sends and processes outgoing DNS messages (server side) was refactored to use the new networking code and therefore uses half as many threads as BIND 9.16 used.
The other major change implemented in BIND 9.18 was the replacement of the internal memory allocator with the jemalloc memory allocator. The internal memory allocator would keep pools of memory chunks for later reuse and would never free up the reserved memory. The jemalloc memory allocator is much better suited to the memory usage patterns that BIND 9 exhibits and is able to be both fast and memory efficient.
Our general recommendation for all deployments is to use jemalloc even with BIND 9.16, by forcing linkage via extra LDFLAGS (`./configure LDFLAGS="-ljemalloc"` should do the trick).
## Measuring Memory
Measuring real memory usage can be tricky, but fortunately, there are some tools to help with that.
### Measuring Memory Internally
The statistics channel exposes counters about the memory contexts. The important values are 'InUse' and 'Malloced'. The difference between the two is that the 'InUse' counter shows the memory used "externally" and 'Malloced' is the memory including the management overhead (the more memory contexts the more overhead there is).
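
For instance, a minimal sketch of summing those counters over all contexts (an assumption: the field names `inuse` and `malloced` follow the JSON statistics-channel format, which may vary between versions, and the sample numbers here are made up):

```python
import json

def summarize_contexts(stats):
    """Sum per-context 'inuse' and 'malloced' from parsed statistics JSON."""
    contexts = stats["memory"]["contexts"]
    inuse = sum(c.get("inuse", 0) for c in contexts)
    malloced = sum(c.get("malloced", 0) for c in contexts)
    return inuse, malloced

# Hypothetical sample resembling the statistics-channel JSON output.
sample = json.loads("""
{"memory": {"contexts": [
    {"name": "main", "inuse": 275251, "malloced": 283648},
    {"name": "uv",   "inuse": 6246,   "malloced": 14643}
]}}
""")
inuse, malloced = summarize_contexts(sample)
print(inuse, malloced)  # 281497 298291
```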
You can use the attached [memory-json.py](uploads/e9398e64964bbd68e7715c594dadbd3e/memory-json.py) script to parse the statistics channel output and get the following data (this is from the `main` branch):
```
OpenSSL: 268.8KiB 277.0KiB
uv: 6.1KiB 14.3KiB
...
MALLOCED: 13.3MiB == 13.3MiB
```
### Measuring Memory Externally
The rule of thumb is "Don't use the `top` command" - there are better tools that are less misleading. Two tools that are easily available on modern Linux systems are `pmap` and `smem`.
#### pmap
`pmap` provides detailed statistics, but can be too chatty - the basic usage is `pmap -x -p <pid>`. It prints information about all pages used by the program, which includes shared libraries, the program itself, and the heap. The important number is the last one, "Dirty" - it shows the memory "used" by BIND 9.
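
Roughly the same figure can be pulled directly from `/proc/<pid>/smaps`, which is where `pmap` gets its data (a Linux-only sketch that sums the dirty fields; it returns 0 where `/proc` is unavailable):

```python
import os

def dirty_kb(pid):
    """Sum Shared_Dirty + Private_Dirty over all mappings in /proc/<pid>/smaps (KiB)."""
    total = 0
    try:
        with open(f"/proc/{pid}/smaps") as f:
            for line in f:
                if line.startswith(("Shared_Dirty:", "Private_Dirty:")):
                    total += int(line.split()[1])  # values are in kB
    except OSError:
        return 0  # not Linux, or no such process
    return total

print(dirty_kb(os.getpid()), "kB dirty")
```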
Example `pmap` output could look like this:
```
$ pmap -x -p $(pidof named)
3301879: /usr/sbin/named -4 -g -c named.conf
...
total kB          760180   74324   60708
```
#### smem
`smem` provides fewer details, so if you want only a single number, run `smem -P named` and look for the USS column - this shows the memory used by the program sans the shared libraries. The PSS column adds shared libraries divided by the number of programs using those libraries, and RSS is the normal Resident Set Size.
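
These definitions imply USS ≤ PSS ≤ RSS, which can be checked against `/proc/<pid>/smaps_rollup` on a recent Linux kernel (a sketch under that assumption; USS is the private clean + dirty memory):

```python
import os

def uss_pss_rss_kb(pid):
    """Return (USS, PSS, RSS) in KiB from /proc/<pid>/smaps_rollup, or None."""
    fields = {"Rss": 0, "Pss": 0, "Private_Clean": 0, "Private_Dirty": 0}
    try:
        with open(f"/proc/{pid}/smaps_rollup") as f:
            for line in f:
                key = line.split(":")[0]
                if key in fields:
                    fields[key] = int(line.split()[1])  # values are in kB
    except OSError:
        return None  # kernel too old, not Linux, etc.
    uss = fields["Private_Clean"] + fields["Private_Dirty"]  # private-only memory
    return uss, fields["Pss"], fields["Rss"]

print(uss_pss_rss_kb(os.getpid()))
```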
```
$ smem -P named -a
...
```
There are a couple of explanations for why the numbers reported by the BIND 9 statistics channel might differ from the memory usage reported by the operating system.
**External libraries.** BIND 9 uses several external libraries - OpenSSL, libuv, libxml2, json-c, and possibly others. All of these also need memory from the operating system to operate. The difference should not be large, but it's also not negligible. If the difference between the used memory reported by the internal statistics channel and USS is large (on a busy server), then congratulations, you've found a leak in an external library. (NOTE: BIND 9.19 - the development version - provides its own memory context for OpenSSL, libuv and libxml2 if the library versions are recent enough.)
**Memory fragmentation.** There's quite a lot of churn in memory allocations and deallocations on a busy server, and memory gets fragmented - the default Linux allocator isn't particularly good with BIND 9's memory usage patterns. Using jemalloc is strongly recommended, as it handles memory fragmentation much better and is also faster.
## Memory Profiling
When compiled with `jemalloc` (or even when it is linked in using `LD_PRELOAD`), BIND 9 can produce **heap** snapshots based on triggers (time, size, ...). These can later be analysed using the `jeprof` tool to see where the memory went.
The very basics would be:
```
export MALLOC_CONF="abort_conf:true,prof:true,lg_prof_interval:19,lg_prof_sample:19,prof_prefix:jeprof"
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 # you don't need that if compiled with jemalloc
/usr/sbin/named # use your normal options and configuration that you use in production
```
You'll most probably need to fine-tune the `lg_prof_interval` and `lg_prof_sample` numbers (they are **log base 2**) to get the desired file size.
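
To get a feel for the scale (plain arithmetic, not a jemalloc API): `lg_prof_interval:19` asks for a dump roughly every 2^19 bytes of allocation activity, so raising the number by one doubles the interval:

```python
# The lg_* options are the log base 2 of a byte count.
print(1 << 19)  # 524288 bytes = 512 KiB of allocation activity between dumps
print(1 << 25)  # 33554432 bytes = 32 MiB: fewer, coarser dumps
```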
After running the benchmark or the regular workload, you should end up with a bunch of `jeprof.<pid>.<m>.i<n>.heap` files. Pick the latest and run:
More options can be found in the `jeprof` manual page on manpages.ubuntu.com.
### Resolver Benchmarks
Here are some basic graphs comparing memory usage in BIND 9.11, 9.16, 9.18, and 9.19 (aka `main`).
As you can see, 9.18 and 9.19 memory usage is in the same ballpark as 9.11, but the latency has improved greatly. The 9.16 memory usage is double, as described above (double number of worker threads).



  
|
||||
|
||||
### Catalog Zones Memory Profiling