mirror of https://github.com/vdukhovni/postfix synced 2025-08-29 13:18:12 +00:00

postfix-2.5-20071215

This commit is contained in:
Wietse Venema 2007-12-15 00:00:00 -05:00 committed by Viktor Dukhovni
parent 70635b3cdd
commit 67eec35c07
13 changed files with 153 additions and 79 deletions


@@ -13972,12 +13972,12 @@ Apologies for any names omitted.
 	Cleanup: the queue manager and SMTP client now distinguish
 	between connection cache store and retrieve hints. Once the
-	queue manager enables enables connection caching (store and
-	load) hints on a per-destination queue, it keeps sending
-	connection cache retrieve hints to the delivery agent even
-	after it stops sending connection cache store hints. This
-	prevents the SMTP client from making a new connection without
-	checking the connection cache first. Victor Duchovni. Files:
+	queue manager enables connection caching (store and load)
+	hints on a per-destination queue, it keeps sending connection
+	cache retrieve hints to the delivery agent even after it
+	stops sending connection cache store hints. This prevents
+	the SMTP client from making a new connection without checking
+	the connection cache first. Victor Duchovni. Files:
 	*qmgr/qmgr_entry.c, smtp/smtp_connect.c.

 	Bugfix (introduced Postfix 2.3): the SMTP client never
@@ -13989,3 +13989,12 @@ Apologies for any names omitted.
 	without connect or handshake error. Victor Duchovni. Files:
 	smtp/smtp_connect.c, smtp/smtp_session.c, smtp/smtp_proto.c,
 	smtp/smtp_trouble.c.
+
+20071215
+
+	Documentation and code cleanup. Files: global/deliver_request.h,
+	*qmgr/qmgr_entry.c, smtp/smtp_connect.c,
+	proto/SCHEDULER_README.html.
+
+	Bugfix: qmqpd ignored the qmqpd_client_port_logging parameter
+	setting. File: qmqpd/qmqpd.c.


@@ -9,12 +9,14 @@ It schedules delivery of new mail, retries failed deliveries at specific times,
 and removes mail from the queue after the last delivery attempt. There are two
 major classes of mechanisms that control the operation of the queue manager.

-  * Concurrency scheduling is concerned with the number of concurrent
-    deliveries to a specific destination, including decisions on when to
-    suspend deliveries after persistent failures.
-  * Preemptive scheduling is concerned with the selection of email messages and
-    recipients for a given destination.
-  * Credits. This document would not be complete without.
+Topics covered by this document:
+
+  * Concurrency scheduling, concerned with the number of concurrent deliveries
+    to a specific destination, including decisions on when to suspend
+    deliveries after persistent failures.
+  * Preemptive scheduling, concerned with the selection of email messages and
+    recipients for a given destination.
+  * Credits, something this document would not be complete without.

 Concurrency scheduling
@@ -37,11 +39,11 @@ The material is organized as follows:
 Drawbacks of the existing concurrency scheduler

 From the start, Postfix has used a simple but robust algorithm where the per-
-destination delivery concurrency is decremented by 1 after a delivery suffered
-connection or handshake failure, and incremented by 1 otherwise. Of course the
-concurrency is never allowed to exceed the maximum per-destination concurrency
-limit. And when a destination's concurrency level drops to zero, the
-destination is declared "dead" and delivery is suspended.
+destination delivery concurrency is decremented by 1 after delivery failed due
+to connection or handshake failure, and incremented by 1 otherwise. Of course
+the concurrency is never allowed to exceed the maximum per-destination
+concurrency limit. And when a destination's concurrency level drops to zero,
+the destination is declared "dead" and delivery is suspended.

 Drawbacks of +/-1 concurrency feedback per delivery are:
@@ -200,8 +202,9 @@ Server configuration:
   number is also used as the backlog argument to the listen(2) system call,
   and "postfix reload" does not re-issue this call.
 * Mail was discarded with "local_recipient_maps = static:all" and
-  "local_transport = discard". The discard action in header/body checks could
-  not be used as it fails to update the in_flow_delay counters.
+  "local_transport = discard". The discard action in access maps or header/
+  body checks could not be used as it fails to update the in_flow_delay
+  counters.

 Client configuration:
@@ -302,7 +305,7 @@ measurement we specified a server concurrency limit and a client initial
 destination concurrency of 5, and a server process limit of 10; all other
 conditions were the same as with the first measurement. The same result would
 be obtained with a FreeBSD or Linux server, because the "pushing back" is done
-entirely by the receiving Postfix.
+entirely by the receiving side.

 client  server  feedback  connection  percentage  client       theoretical
 limit   limit   style     caching     deferred    concurrency  defer rate
@@ -333,12 +336,16 @@ entirely by the receiving Postfix.
 Discussion of concurrency limited server results

 All results in the previous sections are based on the first delivery runs only;
-they do not include any second etc. delivery attempts. The first two examples
-show that the effect of feedback is negligible when concurrency is limited due
-to congestion. This is because the initial concurrency is already at the
-client's concurrency maximum, and because there is 10-100 times more positive
-than negative feedback. Under these conditions, it is no surprise that the
-contribution from SMTP connection caching is also negligible.
+they do not include any second etc. delivery attempts. It's also worth noting
+that the measurements look at steady-state behavior only. They don't show what
+happens when the client starts sending at a much higher or lower concurrency.
+
+The first two examples show that the effect of feedback is negligible when
+concurrency is limited due to congestion. This is because the initial
+concurrency is already at the client's concurrency maximum, and because there
+is 10-100 times more positive than negative feedback. Under these conditions,
+it is no surprise that the contribution from SMTP connection caching is also
+negligible.

 In the last example, the old +/-1 feedback per delivery will defer 50% of the
 mail when confronted with an active (anvil-style) server concurrency limit,
@@ -350,6 +357,18 @@ next section.
 Limitations of less-than-1 per delivery feedback

+Less-than-1 feedback is of interest primarily when sending large amounts of
+mail to destinations with active concurrency limiters (servers that reply with
+421, or firewalls that send RST). When sending small amounts of mail per
+destination, less-than-1 per-delivery feedback won't have a noticeable effect
+on the per-destination concurrency, because the number of deliveries to the
+same destination is too small. You might just as well use zero per-delivery
+feedback and stay with the initial per-destination concurrency. And when mail
+deliveries fail due to congestion instead of active concurrency limiters, the
+measurements above show that per-delivery feedback has no effect. With large
+amounts of mail you might just as well use zero per-delivery feedback and start
+with the maximal per-destination concurrency.
+
 The scheduler with less-than-1 concurrency feedback per delivery solves a
 problem with servers that have active concurrency limiters. This works only
 because feedback is handled in a peculiar manner: positive feedback will
@@ -379,8 +398,8 @@ of 4 bad servers in the above load balancer scenario, use positive feedback of
 1/4 per "good" delivery (no connect or handshake error), and use an equal or
 smaller amount of negative feedback per "bad" delivery. The downside of using
 concurrency-independent feedback is that some of the old +/-1 feedback problems
-will return at large concurrencies. Sites that deliver at non-trivial per-
-destination concurrencies will require special configuration.
+will return at large concurrencies. Sites that must deliver mail at non-trivial
+per-destination concurrencies will require special configuration.

 Concurrency configuration parameters
@@ -448,7 +467,7 @@ Postfix versions.
 Preemptive scheduling

-This document attempts to describe the new queue manager and its preemptive
+The following sections describe the new queue manager and its preemptive
 scheduler algorithm. Note that the document was originally written to describe
 the changes between the new queue manager (in this text referred to as nqmgr,
 the name it was known by before it became the default queue manager) and the
@@ -1113,14 +1132,15 @@ Credits
 * Wietse Venema designed and implemented the initial queue manager with per-
   domain FIFO scheduling, and per-delivery +/-1 concurrency feedback.
 * Patrik Rak designed and implemented preemption where mail with fewer
-  recipients can slip past mail with more recipients.
+  recipients can slip past mail with more recipients in a controlled manner,
+  and wrote up its documentation.
 * Wietse Venema initiated a discussion with Patrik Rak and Victor Duchovni on
   alternatives for the +/-1 feedback scheduler's aggressive behavior. This is
   when K/N feedback was reviewed (N = concurrency). The discussion ended
   without a good solution for both negative feedback and dead site detection.
 * Victor Duchovni resumed work on concurrency feedback in the context of
   concurrency-limited servers.
-* Wietse Venema then re-designed the concurrency scheduler in terms of
+* Wietse Venema then re-designed the concurrency scheduler in terms of the
   simplest possible concepts: less-than-1 concurrency feedback per delivery,
   forward and reverse concurrency feedback hysteresis, and pseudo-cohort
   failure. At this same time, concurrency feedback was separated from dead
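The less-than-1 concurrency feedback with forward and reverse hysteresis that the hunks above describe can be sketched in a few lines of C. This is a simplified illustrative model with hypothetical names, not the actual qmgr code: positive feedback of 1/window per "good" delivery is accumulated as a whole round of good deliveries before the window grows (forward hysteresis), and a "bad" delivery is shown with the full -1 feedback of the old scheduler, although the new scheduler supports configurable less-than-1 negative feedback too.

```c
#include <assert.h>

/* Hypothetical model of per-destination concurrency feedback.
 * window grows by 1 only after a full round of "good" deliveries,
 * i.e. 1/window positive feedback with forward hysteresis. */
typedef struct {
    int window;                     /* current per-destination concurrency */
    int max_window;                 /* maximum per-destination concurrency */
    int good;                       /* good deliveries since last change */
} DEST;

static void feedback_good(DEST *d)
{
    d->good += 1;                   /* one unit of 1/window feedback */
    if (d->good >= d->window) {     /* a full round: apply whole +1 */
        d->good = 0;
        if (d->window < d->max_window)
            d->window += 1;
    }
}

static void feedback_bad(DEST *d)
{
    d->good = 0;                    /* reverse hysteresis: drop partial credit */
    if (d->window > 0)
        d->window -= 1;             /* window 0 would mean "dead" destination */
}
```

With an initial window of 5, five consecutive good deliveries raise the window to 6, while a single connect or handshake failure lowers it again; the old +/-1 scheduler is the special case where every good delivery is a full round.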


@@ -26,17 +26,19 @@ deliveries at specific times, and removes mail from the queue after
 the last delivery attempt. There are two major classes of mechanisms
 that control the operation of the queue manager. </p>

+<p> Topics covered by this document: </p>
+
 <ul>

-<li> <a href="#concurrency"> Concurrency scheduling </a> is concerned
+<li> <a href="#concurrency"> Concurrency scheduling</a>, concerned
 with the number of concurrent deliveries to a specific destination,
 including decisions on when to suspend deliveries after persistent
 failures.

-<li> <a href="#jobs"> Preemptive scheduling </a> is concerned with
+<li> <a href="#jobs"> Preemptive scheduling</a>, concerned with
 the selection of email messages and recipients for a given destination.

-<li> <a href="#credits"> Credits </a>. This document would not be
+<li> <a href="#credits"> Credits</a>, something this document would not be
 complete without.

 </ul>
@@ -97,7 +99,7 @@ concurrency scheduler </a> </h3>
 <p> From the start, Postfix has used a simple but robust algorithm
 where the per-destination delivery concurrency is decremented by 1
-after a delivery suffered connection or handshake failure, and
+after delivery failed due to connection or handshake failure, and
 incremented by 1 otherwise. Of course the concurrency is never
 allowed to exceed the maximum per-destination concurrency limit.
 And when a destination's concurrency level drops to zero, the
@@ -282,7 +284,8 @@ argument to the listen(2) system call, and "postfix reload" does
 not re-issue this call.

 <li> Mail was discarded with "<a href="postconf.5.html#local_recipient_maps">local_recipient_maps</a> = static:all" and
-"<a href="postconf.5.html#local_transport">local_transport</a> = discard". The discard action in header/body checks
+"<a href="postconf.5.html#local_transport">local_transport</a> = discard". The discard action in access maps or
+header/body checks
 could not be used as it fails to update the <a href="postconf.5.html#in_flow_delay">in_flow_delay</a> counters.

 </ul>
@@ -468,7 +471,7 @@ a server concurrency limit and a client initial destination concurrency
 of 5, and a server process limit of 10; all other conditions were
 the same as with the first measurement. The same result would be
 obtained with a FreeBSD or Linux server, because the "pushing back"
-is done entirely by the receiving Postfix. </p>
+is done entirely by the receiving side. </p>

 <blockquote>
@@ -529,7 +532,12 @@ with increasing concurrency. See text for a discussion of results.
 <p> All results in the previous sections are based on the first
 delivery runs only; they do not include any second etc. delivery
-attempts. The first two examples show that the effect of feedback
+attempts. It's also worth noting that the measurements look at
+steady-state behavior only. They don't show what happens when the
+client starts sending at a much higher or lower concurrency.
+</p>
+
+<p> The first two examples show that the effect of feedback
 is negligible when concurrency is limited due to congestion. This
 is because the initial concurrency is already at the client's
 concurrency maximum, and because there is 10-100 times more positive
@@ -548,6 +556,20 @@ the next section. </p>
 <h3> <a name="concurrency_limitations"> Limitations of less-than-1 per delivery feedback </a> </h3>

+<p> Less-than-1 feedback is of interest primarily when sending large
+amounts of mail to destinations with active concurrency limiters
+(servers that reply with 421, or firewalls that send RST). When
+sending small amounts of mail per destination, less-than-1 per-delivery
+feedback won't have a noticeable effect on the per-destination
+concurrency, because the number of deliveries to the same destination
+is too small. You might just as well use zero per-delivery feedback
+and stay with the initial per-destination concurrency. And when
+mail deliveries fail due to congestion instead of active concurrency
+limiters, the measurements above show that per-delivery feedback
+has no effect. With large amounts of mail you might just as well
+use zero per-delivery feedback and start with the maximal per-destination
+concurrency. </p>
+
 <p> The scheduler with less-than-1 concurrency
 feedback per delivery solves a problem with servers that have active
 concurrency limiters. This works only because feedback is handled
@@ -582,8 +604,8 @@ delivery (no connect or handshake error), and use an equal or smaller
 amount of negative feedback per "bad" delivery. The downside of
 using concurrency-independent feedback is that some of the old +/-1
 feedback problems will return at large concurrencies. Sites that
-deliver at non-trivial per-destination concurrencies will require
-special configuration. </p>
+must deliver mail at non-trivial per-destination concurrencies will
+require special configuration. </p>

 <h3> <a name="concurrency_config"> Concurrency configuration parameters </a> </h3>
@@ -643,7 +665,7 @@ activity </td> </tr>
 <p>

-This document attempts to describe the new queue manager and its
+The following sections describe the new queue manager and its
 preemptive scheduler algorithm. Note that the document was originally
 written to describe the changes between the new queue manager (in
 this text referred to as <tt>nqmgr</tt>, the name it was known by
@@ -1776,7 +1798,8 @@ with per-domain FIFO scheduling, and per-delivery +/-1 concurrency
 feedback.

 <li> Patrik Rak designed and implemented preemption where mail with
-fewer recipients can slip past mail with more recipients.
+fewer recipients can slip past mail with more recipients in a
+controlled manner, and wrote up its documentation.

 <li> Wietse Venema initiated a discussion with Patrik Rak and Victor
 Duchovni on alternatives for the +/-1 feedback scheduler's aggressive
@@ -1788,10 +1811,10 @@ feedback and dead site detection.
 context of concurrency-limited servers.

 <li> Wietse Venema then re-designed the concurrency scheduler in
-terms of simplest possible concepts: less-than-1 concurrency feedback
-per delivery, forward and reverse concurrency feedback hysteresis,
-and pseudo-cohort failure. At this same time, concurrency feedback
-was separated from dead site detection.
+terms of the simplest possible concepts: less-than-1 concurrency
+feedback per delivery, forward and reverse concurrency feedback
+hysteresis, and pseudo-cohort failure. At this same time, concurrency
+feedback was separated from dead site detection.

 <li> These simplifications, and their modular implementation, helped
 to develop further insights into the different roles that positive


@@ -3278,8 +3278,7 @@ Examples:
 <p>
 The initial per-destination concurrency level for parallel delivery
-to the same destination. This limit applies to delivery via <a href="smtp.8.html">smtp(8)</a>,
-and via the <a href="pipe.8.html">pipe(8)</a> and <a href="virtual.8.html">virtual(8)</a> delivery agents.
+to the same destination.
 With per-destination recipient limit &gt; 1, a destination is a domain,
 otherwise it is a recipient.
 </p>


@@ -1829,8 +1829,7 @@ inet_protocols = ipv4, ipv6
 .ft R
 .SH initial_destination_concurrency (default: 5)
 The initial per-destination concurrency level for parallel delivery
-to the same destination. This limit applies to delivery via \fBsmtp\fR(8),
-and via the \fBpipe\fR(8) and \fBvirtual\fR(8) delivery agents.
+to the same destination.
 With per-destination recipient limit > 1, a destination is a domain,
 otherwise it is a recipient.
 .PP


@@ -26,17 +26,19 @@ deliveries at specific times, and removes mail from the queue after
 the last delivery attempt. There are two major classes of mechanisms
 that control the operation of the queue manager. </p>

+<p> Topics covered by this document: </p>
+
 <ul>

-<li> <a href="#concurrency"> Concurrency scheduling </a> is concerned
+<li> <a href="#concurrency"> Concurrency scheduling</a>, concerned
 with the number of concurrent deliveries to a specific destination,
 including decisions on when to suspend deliveries after persistent
 failures.

-<li> <a href="#jobs"> Preemptive scheduling </a> is concerned with
+<li> <a href="#jobs"> Preemptive scheduling</a>, concerned with
 the selection of email messages and recipients for a given destination.

-<li> <a href="#credits"> Credits </a>. This document would not be
+<li> <a href="#credits"> Credits</a>, something this document would not be
 complete without.

 </ul>
@@ -97,7 +99,7 @@ concurrency scheduler </a> </h3>
 <p> From the start, Postfix has used a simple but robust algorithm
 where the per-destination delivery concurrency is decremented by 1
-after a delivery suffered connection or handshake failure, and
+after delivery failed due to connection or handshake failure, and
 incremented by 1 otherwise. Of course the concurrency is never
 allowed to exceed the maximum per-destination concurrency limit.
 And when a destination's concurrency level drops to zero, the
@@ -282,7 +284,8 @@ argument to the listen(2) system call, and "postfix reload" does
 not re-issue this call.

 <li> Mail was discarded with "local_recipient_maps = static:all" and
-"local_transport = discard". The discard action in header/body checks
+"local_transport = discard". The discard action in access maps or
+header/body checks
 could not be used as it fails to update the in_flow_delay counters.

 </ul>
@@ -468,7 +471,7 @@ a server concurrency limit and a client initial destination concurrency
 of 5, and a server process limit of 10; all other conditions were
 the same as with the first measurement. The same result would be
 obtained with a FreeBSD or Linux server, because the "pushing back"
-is done entirely by the receiving Postfix. </p>
+is done entirely by the receiving side. </p>

 <blockquote>
@@ -529,7 +532,12 @@ with increasing concurrency. See text for a discussion of results.
 <p> All results in the previous sections are based on the first
 delivery runs only; they do not include any second etc. delivery
-attempts. The first two examples show that the effect of feedback
+attempts. It's also worth noting that the measurements look at
+steady-state behavior only. They don't show what happens when the
+client starts sending at a much higher or lower concurrency.
+</p>
+
+<p> The first two examples show that the effect of feedback
 is negligible when concurrency is limited due to congestion. This
 is because the initial concurrency is already at the client's
 concurrency maximum, and because there is 10-100 times more positive
@@ -548,6 +556,20 @@ the next section. </p>
 <h3> <a name="concurrency_limitations"> Limitations of less-than-1 per delivery feedback </a> </h3>

+<p> Less-than-1 feedback is of interest primarily when sending large
+amounts of mail to destinations with active concurrency limiters
+(servers that reply with 421, or firewalls that send RST). When
+sending small amounts of mail per destination, less-than-1 per-delivery
+feedback won't have a noticeable effect on the per-destination
+concurrency, because the number of deliveries to the same destination
+is too small. You might just as well use zero per-delivery feedback
+and stay with the initial per-destination concurrency. And when
+mail deliveries fail due to congestion instead of active concurrency
+limiters, the measurements above show that per-delivery feedback
+has no effect. With large amounts of mail you might just as well
+use zero per-delivery feedback and start with the maximal per-destination
+concurrency. </p>
+
 <p> The scheduler with less-than-1 concurrency
 feedback per delivery solves a problem with servers that have active
 concurrency limiters. This works only because feedback is handled
@@ -582,8 +604,8 @@ delivery (no connect or handshake error), and use an equal or smaller
 amount of negative feedback per "bad" delivery. The downside of
 using concurrency-independent feedback is that some of the old +/-1
 feedback problems will return at large concurrencies. Sites that
-deliver at non-trivial per-destination concurrencies will require
-special configuration. </p>
+must deliver mail at non-trivial per-destination concurrencies will
+require special configuration. </p>

 <h3> <a name="concurrency_config"> Concurrency configuration parameters </a> </h3>
@@ -643,7 +665,7 @@ activity </td> </tr>
 <p>

-This document attempts to describe the new queue manager and its
+The following sections describe the new queue manager and its
 preemptive scheduler algorithm. Note that the document was originally
 written to describe the changes between the new queue manager (in
 this text referred to as <tt>nqmgr</tt>, the name it was known by
@@ -1776,7 +1798,8 @@ with per-domain FIFO scheduling, and per-delivery +/-1 concurrency
 feedback.

 <li> Patrik Rak designed and implemented preemption where mail with
-fewer recipients can slip past mail with more recipients.
+fewer recipients can slip past mail with more recipients in a
+controlled manner, and wrote up its documentation.

 <li> Wietse Venema initiated a discussion with Patrik Rak and Victor
 Duchovni on alternatives for the +/-1 feedback scheduler's aggressive
@@ -1788,10 +1811,10 @@ feedback and dead site detection.
 context of concurrency-limited servers.

 <li> Wietse Venema then re-designed the concurrency scheduler in
-terms of simplest possible concepts: less-than-1 concurrency feedback
-per delivery, forward and reverse concurrency feedback hysteresis,
-and pseudo-cohort failure. At this same time, concurrency feedback
-was separated from dead site detection.
+terms of the simplest possible concepts: less-than-1 concurrency
+feedback per delivery, forward and reverse concurrency feedback
+hysteresis, and pseudo-cohort failure. At this same time, concurrency
+feedback was separated from dead site detection.

 <li> These simplifications, and their modular implementation, helped
 to develop further insights into the different roles that positive
@@ -1859,8 +1859,7 @@ inet_protocols = ipv4, ipv6
 
 <p>
 The initial per-destination concurrency level for parallel delivery
-to the same destination. This limit applies to delivery via smtp(8),
-and via the pipe(8) and virtual(8) delivery agents.
+to the same destination.
 With per-destination recipient limit &gt; 1, a destination is a domain,
 otherwise it is a recipient.
 </p>
@@ -69,14 +69,15 @@ typedef struct DELIVER_REQUEST {
 #define DEL_REQ_FLAG_MTA_VRFY	(1<<8)	/* MTA-requested address probe */
 #define DEL_REQ_FLAG_USR_VRFY	(1<<9)	/* user-requested address probe */
 #define DEL_REQ_FLAG_RECORD	(1<<10)	/* record and deliver */
-#define DEL_REQ_FLAG_SCACHE_LD	(1<<11)	/* Consult opportunistic cache */
-#define DEL_REQ_FLAG_SCACHE_ST	(1<<12)	/* Update opportunistic cache */
+#define DEL_REQ_FLAG_CONN_LOAD	(1<<11)	/* Consult opportunistic cache */
+#define DEL_REQ_FLAG_CONN_STORE	(1<<12)	/* Update opportunistic cache */
 
 /*
- * Cache Load and Store as value or mask. Use explicit names for multi-bit
+ * Cache Load and Store as value or mask. Use explicit _MASK for multi-bit
  * values.
  */
-#define DEL_REQ_FLAG_SCACHE_MASK (DEL_REQ_FLAG_SCACHE_LD|DEL_REQ_FLAG_SCACHE_ST)
+#define DEL_REQ_FLAG_CONN_MASK \
+	(DEL_REQ_FLAG_CONN_LOAD | DEL_REQ_FLAG_CONN_STORE)
 
 /*
  * For compatibility, the old confusing names.
@@ -20,7 +20,7 @@
  * Patches change both the patchlevel and the release date. Snapshots have no
  * patchlevel; they change the release date only.
  */
-#define MAIL_RELEASE_DATE	"20071213"
+#define MAIL_RELEASE_DATE	"20071215"
 #define MAIL_VERSION_NUMBER	"2.5"
 
 #ifdef SNAPSHOT
@@ -138,12 +138,12 @@ QMGR_ENTRY *qmgr_entry_select(QMGR_QUEUE *queue)
 	 * prevents unnecessary session caching when we have a burst of mail
 	 * <= the initial concurrency limit.
 	 */
-	if ((queue->dflags & DEL_REQ_FLAG_SCACHE_ST) == 0) {
+	if ((queue->dflags & DEL_REQ_FLAG_CONN_STORE) == 0) {
 	    if (BACK_TO_BACK_DELIVERY()) {
 		if (msg_verbose)
 		    msg_info("%s: allowing on-demand session caching for %s",
 			     myname, queue->name);
-		queue->dflags |= DEL_REQ_FLAG_SCACHE_MASK;
+		queue->dflags |= DEL_REQ_FLAG_CONN_MASK;
 	    }
 	}
@@ -158,7 +158,7 @@ QMGR_ENTRY *qmgr_entry_select(QMGR_QUEUE *queue)
 		if (msg_verbose)
 		    msg_info("%s: disallowing on-demand session caching for %s",
 			     myname, queue->name);
-		queue->dflags &= ~DEL_REQ_FLAG_SCACHE_ST;
+		queue->dflags &= ~DEL_REQ_FLAG_CONN_STORE;
 	    }
 	}
     }
@@ -150,12 +150,12 @@ QMGR_ENTRY *qmgr_entry_select(QMGR_PEER *peer)
 	 * prevents unnecessary session caching when we have a burst of mail
 	 * <= the initial concurrency limit.
 	 */
-	if ((queue->dflags & DEL_REQ_FLAG_SCACHE_ST) == 0) {
+	if ((queue->dflags & DEL_REQ_FLAG_CONN_STORE) == 0) {
 	    if (BACK_TO_BACK_DELIVERY()) {
 		if (msg_verbose)
 		    msg_info("%s: allowing on-demand session caching for %s",
 			     myname, queue->name);
-		queue->dflags |= DEL_REQ_FLAG_SCACHE_MASK;
+		queue->dflags |= DEL_REQ_FLAG_CONN_MASK;
 	    }
 	}
@@ -170,7 +170,7 @@ QMGR_ENTRY *qmgr_entry_select(QMGR_PEER *peer)
 		if (msg_verbose)
 		    msg_info("%s: disallowing on-demand session caching for %s",
 			     myname, queue->name);
-		queue->dflags &= ~DEL_REQ_FLAG_SCACHE_ST;
+		queue->dflags &= ~DEL_REQ_FLAG_CONN_STORE;
 	    }
 	}
     }
@@ -802,6 +802,7 @@ int main(int argc, char **argv)
     single_server_main(argc, argv, qmqpd_service,
 		       MAIL_SERVER_TIME_TABLE, time_table,
 		       MAIL_SERVER_STR_TABLE, str_table,
+		       MAIL_SERVER_BOOL_TABLE, bool_table,
 		       MAIL_SERVER_PRE_INIT, pre_jail_init,
 		       MAIL_SERVER_PRE_ACCEPT, pre_accept,
 		       MAIL_SERVER_POST_INIT, post_jail_init,
@@ -457,9 +457,9 @@ static void smtp_cache_policy(SMTP_STATE *state, const char *dest)
     if (smtp_cache_dest && string_list_match(smtp_cache_dest, dest)) {
 	state->misc_flags |= SMTP_MISC_FLAG_CONN_CACHE_MASK;
     } else if (var_smtp_cache_demand) {
-	if (request->flags & DEL_REQ_FLAG_SCACHE_LD)
+	if (request->flags & DEL_REQ_FLAG_CONN_LOAD)
 	    state->misc_flags |= SMTP_MISC_FLAG_CONN_LOAD;
-	if (request->flags & DEL_REQ_FLAG_SCACHE_ST)
+	if (request->flags & DEL_REQ_FLAG_CONN_STORE)
 	    state->misc_flags |= SMTP_MISC_FLAG_CONN_STORE;
     }
 }