mirror of https://github.com/vdukhovni/postfix synced 2025-08-29 13:18:12 +00:00

postfix-2.11-20130709

Wietse Venema 2013-07-09 00:00:00 -05:00 committed by Viktor Dukhovni
parent 6808166cf8
commit eb5a2ae162
15 changed files with 82 additions and 52 deletions

HISTORY

@@ -18761,3 +18761,24 @@ Apologies for any names omitted.
 	mantools/postlink, proto/postconf.proto, tls/tls_mgr.c,
 	tls/tls_misc.c, tlsproxy/tls-proxy.c, smtp/smtp.c,
 	smtpd/smtpd.c.
+
+20130629
+
+	Cleanup: documentation. Files: proto/CONNECTION_CACHE_README.html,
+	proto/SCHEDULER_README.html.
+
+20130708
+
+	Cleanup: postscreen_upstream_proxy_protocol setting. Files:
+	global/mail_params.h, postscreen/postscreen_endpt.c.
+
+20130709
+
+	Cleanup: qmgr documentation clarification by Patrik Rak.
+	Files: proto/SCHEDULER_README.html, qmgr/qmgr_job.c.
+
+	Cleanup: re-indented code. File: qmgr/qmgr_job.c.
+
+	Logging: minimal DNAME support. Viktor Dukhovni. dns/dns.h,
+	dns/dns_lookup.c, dns/dns_strtype.c, dns/test_dns_lookup.c.
+

README_FILES/CONNECTION_CACHE_README

@@ -40,7 +40,8 @@ improves performance depends on the conditions:
 
   * SMTP Connection caching introduces some overhead: the client needs to send
     an RSET command to find out if a connection is still usable, before it can
-    send the next MAIL FROM command.
+    send the next MAIL FROM command. This introduces one additional round-trip
+    delay.
 
 For other potential issues with SMTP connection caching, see the discussion of
 limitations at the end of this document.

README_FILES/SCHEDULER_README

@@ -29,7 +29,7 @@ Topics covered by this document:
 CCoonnccuurrrreennccyy sscchheedduulliinngg
 
 The following sections document the Postfix 2.5 concurrency scheduler, after a
-discussion of the limitations of the existing concurrency scheduler. This is
+discussion of the limitations of the earlier concurrency scheduler. This is
 followed by results of medium-concurrency experiments, and a discussion of
 trade-offs between performance and robustness.
@@ -504,7 +504,8 @@ elsewhere, but it is nice to have a coherent overview in one place:
     to be delivered and what transports are going to be used for the delivery.
 
   * Each recipient entry groups a batch of recipients of one message which are
-    all going to be delivered to the same destination.
+    all going to be delivered to the same destination (and over the same
+    transport).
 
   * Each transport structure groups everything what is going to be delivered by
     delivery agents dedicated for that transport. Each transport maintains a
@@ -710,9 +711,8 @@ accumulated, job which requires no more than that number of slots to be fully
 delivered can preempt this job.
 
 [Well, the truth is, the counter is incremented every time an entry is selected
-and it is divided by k when it is used. Or even more true, there is no
-division, the other side of the equation is multiplied by k. But for the
-understanding it's good enough to use the above approximation of the truth.]
+and it is divided by k when it is used. But for the understanding it's good
+enough to use the above approximation of the truth.]
 
 OK, so now we know the conditions which must be satisfied so one job can
 preempt another one. But what job gets preempted, how do we choose what job

html/CONNECTION_CACHE_README.html

@@ -71,7 +71,7 @@ waits for the TCP final handshake to complete. </p>
 
 <li> <p> SMTP Connection caching introduces some overhead: the
 client needs to send an RSET command to find out if a connection
 is still usable, before it can send the next MAIL FROM command.
-</p>
+This introduces one additional round-trip delay. </p>
 
 </ul>
@@ -200,7 +200,7 @@ lookups is ignored. </p>
 /etc/postfix/<a href="postconf.5.html">main.cf</a>:
     <a href="postconf.5.html#smtp_connection_cache_destinations">smtp_connection_cache_destinations</a> = $<a href="postconf.5.html#relayhost">relayhost</a>
     <a href="postconf.5.html#smtp_connection_cache_destinations">smtp_connection_cache_destinations</a> = hotmail.com, ...
-    <a href="postconf.5.html#smtp_connection_cache_destinations">smtp_connection_cache_destinations</a> = static:all (<i>not recommended</i>)
+    <a href="postconf.5.html#smtp_connection_cache_destinations">smtp_connection_cache_destinations</a> = <a href="DATABASE_README.html#types">static</a>:all (<i>not recommended</i>)
 </pre>
 
 </blockquote>

html/SCHEDULER_README.html

@@ -68,7 +68,7 @@ before they can become a problem). </p>
 <h2> <a name="concurrency"> Concurrency scheduling </a> </h2>
 
 <p> The following sections document the Postfix 2.5 concurrency
-scheduler, after a discussion of the limitations of the existing
+scheduler, after a discussion of the limitations of the earlier
 concurrency scheduler. This is followed by results of medium-concurrency
 experiments, and a discussion of trade-offs between performance and
 robustness. </p>
@@ -734,7 +734,8 @@ destinations is the message to be delivered and what transports are
 going to be used for the delivery. </p>
 
 <li> <p> Each recipient entry groups a batch of recipients of one
-message which are all going to be delivered to the same destination.
+message which are all going to be delivered to the same destination
+(and over the same transport).
 </p>
 
 <li> <p> Each transport structure groups everything what is going
@@ -1064,9 +1065,8 @@ can preempt this job.
 
 <p>
 [Well, the truth is, the counter is incremented every time an entry
-is selected and it is divided by k when it is used. Or even more
-true, there is no division, the other side of the equation is
-multiplied by k. But for the understanding it's good enough to use
+is selected and it is divided by k when it is used.
+But for the understanding it's good enough to use
 the above approximation of the truth.]
 </p>

proto/CONNECTION_CACHE_README.html

@@ -71,7 +71,7 @@ waits for the TCP final handshake to complete. </p>
 
 <li> <p> SMTP Connection caching introduces some overhead: the
 client needs to send an RSET command to find out if a connection
 is still usable, before it can send the next MAIL FROM command.
-</p>
+This introduces one additional round-trip delay. </p>
 
 </ul>

proto/SCHEDULER_README.html

@@ -68,7 +68,7 @@ before they can become a problem). </p>
 <h2> <a name="concurrency"> Concurrency scheduling </a> </h2>
 
 <p> The following sections document the Postfix 2.5 concurrency
-scheduler, after a discussion of the limitations of the existing
+scheduler, after a discussion of the limitations of the earlier
 concurrency scheduler. This is followed by results of medium-concurrency
 experiments, and a discussion of trade-offs between performance and
 robustness. </p>
@@ -734,7 +734,8 @@ destinations is the message to be delivered and what transports are
 going to be used for the delivery. </p>
 
 <li> <p> Each recipient entry groups a batch of recipients of one
-message which are all going to be delivered to the same destination.
+message which are all going to be delivered to the same destination
+(and over the same transport).
 </p>
 
 <li> <p> Each transport structure groups everything what is going
@@ -1064,9 +1065,8 @@ can preempt this job.
 
 <p>
 [Well, the truth is, the counter is incremented every time an entry
-is selected and it is divided by k when it is used. Or even more
-true, there is no division, the other side of the equation is
-multiplied by k. But for the understanding it's good enough to use
+is selected and it is divided by k when it is used.
+But for the understanding it's good enough to use
 the above approximation of the truth.]
 </p>

src/dns/dns.h

@@ -75,6 +75,9 @@
 #endif
 #ifndef T_RRSIG
 #define T_RRSIG			46	/* Avoid unknown RR in logs */
+#endif
+#ifndef T_DNAME
+#define T_DNAME			39	/* [RFC6672] */
 #endif
 
  /*

src/dns/dns_lookup.c

@@ -401,6 +401,7 @@ static int dns_get_rr(DNS_RR **list, const char *orig_name, DNS_REPLY *reply,
 	msg_panic("dns_get_rr: don't know how to extract resource type %s",
 		  dns_strtype(fixed->type));
     case T_CNAME:
+    case T_DNAME:
     case T_MB:
     case T_MG:
     case T_MR:

src/dns/dns_strtype.c

@@ -174,6 +174,9 @@ static struct dns_type_map dns_type_map[] = {
 #ifdef T_RRSIG
     T_RRSIG, "RRSIG",
 #endif
+#ifdef T_DNAME
+    T_DNAME, "DNAME",
+#endif
 #ifdef T_ANY
     T_ANY, "ANY",
 #endif

src/dns/test_dns_lookup.c

@@ -58,6 +58,7 @@ static void print_rr(DNS_RR *rr)
 	printf("%s: %s\n", dns_strtype(rr->type), host.buf);
 	break;
     case T_CNAME:
+    case T_DNAME:
     case T_MB:
     case T_MG:
     case T_MR:

src/global/mail_params.h

@@ -3514,8 +3514,10 @@ extern char *var_psc_acl;
 #define DEF_PSC_WLIST_IF	"static:all"
 extern char *var_psc_wlist_if;
 
+#define NOPROXY_PROTO_NAME	""
+
 #define VAR_PSC_UPROXY_PROTO	"postscreen_upstream_proxy_protocol"
-#define DEF_PSC_UPROXY_PROTO	""
+#define DEF_PSC_UPROXY_PROTO	NOPROXY_PROTO_NAME
 extern char *var_psc_uproxy_proto;
 
 #define VAR_PSC_UPROXY_TMOUT	"postscreen_upstream_proxy_timeout"

src/global/mail_version.h

@@ -20,7 +20,7 @@
  * Patches change both the patchlevel and the release date. Snapshots have no
  * patchlevel; they change the release date only.
  */
-#define MAIL_RELEASE_DATE	"20130623"
+#define MAIL_RELEASE_DATE	"20130709"
 #define MAIL_VERSION_NUMBER	"2.11"
 
 #ifdef SNAPSHOT

src/postscreen/postscreen_endpt.c

@@ -179,7 +179,7 @@ typedef struct {
 } PSC_ENDPT_LOOKUP_INFO;
 
 static const PSC_ENDPT_LOOKUP_INFO psc_endpt_lookup_info[] = {
-    DEF_PSC_UPROXY_PROTO, psc_endpt_local_lookup,
+    NOPROXY_PROTO_NAME, psc_endpt_local_lookup,
     HAPROXY_PROTO_NAME, psc_endpt_haproxy_lookup,
     0, 0,
 };

src/qmgr/qmgr_job.c

@@ -130,12 +130,7 @@ static void qmgr_job_link(QMGR_JOB *job)
 {
     QMGR_TRANSPORT *transport = job->transport;
     QMGR_MESSAGE *message = job->message;
-    QMGR_JOB *prev,
-	    *next,
-	    *list_prev,
-	    *list_next,
-	    *unread,
-	    *current;
+    QMGR_JOB *prev, *next, *list_prev, *list_next, *unread, *current;
     int     delay;
 
     /*
@@ -163,6 +158,13 @@ static void qmgr_job_link(QMGR_JOB *job)
      * for jobs which are created long after the first chunk of recipients
      * was read in-core (either of these can happen only for multi-transport
      * messages).
+     *
+     * XXX Note that we test stack_parent rather than stack_level below. This
+     * subtle difference allows us to enqueue the job in correct time order
+     * with respect to orphaned children even after their original parent on
+     * level zero is gone. Consequently, the early loop stop in candidate
+     * selection works reliably, too. These are the reasons why we care to
+     * bother with children adoption at all.
      */
     current = transport->job_current;
     for (next = 0, prev = transport->job_list.prev; prev;
@@ -278,8 +280,7 @@ void qmgr_job_move_limits(QMGR_JOB *job)
     QMGR_TRANSPORT *transport = job->transport;
     QMGR_MESSAGE *message = job->message;
     QMGR_JOB *next = transport->job_next_unread;
-    int     rcpt_unused,
-	    msg_rcpt_unused;
+    int     rcpt_unused, msg_rcpt_unused;
 
     /*
      * Find next unread job on the job list if necessary. Cache it for later.
@@ -483,13 +484,9 @@ static void qmgr_job_count_slots(QMGR_JOB *job)
 
 static QMGR_JOB *qmgr_job_candidate(QMGR_JOB *current)
 {
     QMGR_TRANSPORT *transport = current->transport;
-    QMGR_JOB *job,
-	    *best_job = 0;
-    double  score,
-	    best_score = 0.0;
-    int     max_slots,
-	    max_needed_entries,
-	    max_total_entries;
+    QMGR_JOB *job, *best_job = 0;
+    double  score, best_score = 0.0;
+    int     max_slots, max_needed_entries, max_total_entries;
     int     delay;
     time_t  now = sane_time();
@@ -576,8 +573,7 @@ static QMGR_JOB *qmgr_job_preempt(QMGR_JOB *current)
 {
     const char *myname = "qmgr_job_preempt";
     QMGR_TRANSPORT *transport = current->transport;
-    QMGR_JOB *job,
-	    *prev;
+    QMGR_JOB *job, *prev;
     int     expected_slots;
     int     rcpt_slots;
 
@@ -706,6 +702,9 @@ static void qmgr_job_pop(QMGR_JOB *job)
 
     /*
      * Adjust the number of delivery slots available to preempt job's parent.
+     * Note that the -= actually adds back any unused slots, as we have
+     * already subtracted the expected amount of slots from both counters
+     * when we did the preemption.
      *
      * Note that we intentionally do not adjust slots_used of the parent. Doing
      * so would decrease the maximum per message inflation factor if the
@@ -778,9 +777,9 @@ static QMGR_PEER *qmgr_job_peer_select(QMGR_JOB *job)
      * entire message delivery, leaving lot of fast destination recipients
      * sitting idle in the queue file.
      *
-     * Ideally we would like to read in recipients whenever there is a
-     * space, but to prevent excessive I/O, we read them only when enough
-     * time has passed or we can read enough of them at once.
+     * Ideally we would like to read in recipients whenever there is a space,
+     * but to prevent excessive I/O, we read them only when enough time has
+     * passed or we can read enough of them at once.
      *
      * Note that even if we read the recipients few at a time, the message
      * loading code tries to put them to existing recipient entries whenever
@@ -799,7 +798,7 @@ static QMGR_PEER *qmgr_job_peer_select(QMGR_JOB *job)
 	&& message->refcount > 0
 	&& (message->rcpt_limit - message->rcpt_count >= job->transport->refill_limit
 	    || (message->rcpt_limit > message->rcpt_count
-	    && sane_time() - message->refill_time >= job->transport->refill_delay)))
+		&& sane_time() - message->refill_time >= job->transport->refill_delay)))
 	qmgr_message_realloc(message);
 
     /*
@@ -809,8 +808,9 @@ static QMGR_PEER *qmgr_job_peer_select(QMGR_JOB *job)
 	return (peer);
 
     /*
-     * There is no suitable peer in-core, so try reading in more recipients if possible.
-     * This is our last chance to get suitable peer before giving up on this job for now.
+     * There is no suitable peer in-core, so try reading in more recipients
+     * if possible. This is our last chance to get suitable peer before
+     * giving up on this job for now.
      *
      * XXX For message->refcount, see above.
      */
@@ -828,8 +828,7 @@ static QMGR_PEER *qmgr_job_peer_select(QMGR_JOB *job)
 
 QMGR_ENTRY *qmgr_job_entry_select(QMGR_TRANSPORT *transport)
 {
-    QMGR_JOB *job,
-	    *next;
+    QMGR_JOB *job, *next;
     QMGR_PEER *peer;
     QMGR_ENTRY *entry;
 
@@ -948,7 +947,7 @@ QMGR_ENTRY *qmgr_job_entry_select(QMGR_TRANSPORT *transport)
 
 /* qmgr_job_blocker_update - update "blocked job" status */
 
-void qmgr_job_blocker_update(QMGR_QUEUE *queue)
+void    qmgr_job_blocker_update(QMGR_QUEUE *queue)
 {
     QMGR_TRANSPORT *transport = queue->transport;
@@ -977,4 +976,3 @@ void qmgr_job_blocker_update(QMGR_QUEUE *queue)
 	    queue->blocker_tag = 0;
-
 	}
 }