mirror of https://github.com/vdukhovni/postfix synced 2025-08-29 13:18:12 +00:00

postfix-2.11-20130709

This commit is contained in:
Wietse Venema 2013-07-09 00:00:00 -05:00 committed by Viktor Dukhovni
parent 6808166cf8
commit eb5a2ae162
15 changed files with 82 additions and 52 deletions


@@ -18761,3 +18761,24 @@ Apologies for any names omitted.
mantools/postlink, proto/postconf.proto, tls/tls_mgr.c,
tls/tls_misc.c, tlsproxy/tls-proxy.c, smtp/smtp.c,
smtpd/smtpd.c.
+20130629
+Cleanup: documentation. Files: proto/CONNECTION_CACHE_README.html,
+proto/SCHEDULER_README.html.
+20130708
+Cleanup: postscreen_upstream_proxy_protocol setting. Files:
+global/mail_params.h, postscreen/postscreen_endpt.c.
+20130709
+Cleanup: qmgr documentation clarification by Patrik Rak.
+Files: proto/SCHEDULER_README.html, qmgr/qmgr_job.c.
+Cleanup: re-indented code. File: qmgr/qmgr_job.c.
+Logging: minimal DNAME support. Viktor Dukhovni. dns/dns.h,
+dns/dns_lookup.c, dns/dns_strtype.c, dns/test_dns_lookup.c.


@@ -40,7 +40,8 @@ improves performance depends on the conditions:
* SMTP Connection caching introduces some overhead: the client needs to send
an RSET command to find out if a connection is still usable, before it can
-send the next MAIL FROM command.
+send the next MAIL FROM command. This introduces one additional round-trip
+delay.
For other potential issues with SMTP connection caching, see the discussion of
limitations at the end of this document.
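As an illustration of that extra round trip, reuse of a cached connection might look like this on the wire (an invented dialogue sketched from standard SMTP commands, not a captured session):

```
S: 250 2.0.0 Ok: queued as C3A1B2       (first delivery completes)
C: RSET                                 (probe: is the cached connection still usable?)
S: 250 2.0.0 Ok                         (the one additional round trip)
C: MAIL FROM:<sender@example.com>       (next delivery begins)
S: 250 2.1.0 Ok
```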


@@ -29,7 +29,7 @@ Topics covered by this document:
Concurrency scheduling
The following sections document the Postfix 2.5 concurrency scheduler, after a
-discussion of the limitations of the existing concurrency scheduler. This is
+discussion of the limitations of the earlier concurrency scheduler. This is
followed by results of medium-concurrency experiments, and a discussion of
trade-offs between performance and robustness.
@@ -504,7 +504,8 @@ elsewhere, but it is nice to have a coherent overview in one place:
to be delivered and what transports are going to be used for the delivery.
* Each recipient entry groups a batch of recipients of one message which are
-all going to be delivered to the same destination.
+all going to be delivered to the same destination (and over the same
+transport).
* Each transport structure groups everything what is going to be delivered by
delivery agents dedicated for that transport. Each transport maintains a
@@ -710,9 +711,8 @@ accumulated, job which requires no more than that number of slots to be fully
delivered can preempt this job.
[Well, the truth is, the counter is incremented every time an entry is selected
-and it is divided by k when it is used. Or even more true, there is no
-division, the other side of the equation is multiplied by k. But for the
-understanding it's good enough to use the above approximation of the truth.]
+and it is divided by k when it is used. But for the understanding it's good
+enough to use the above approximation of the truth.]
OK, so now we know the conditions which must be satisfied so one job can
preempt another one. But what job gets preempted, how do we choose what job


@@ -71,7 +71,7 @@ waits for the TCP final handshake to complete. </p>
<li> <p> SMTP Connection caching introduces some overhead: the
client needs to send an RSET command to find out if a connection
is still usable, before it can send the next MAIL FROM command.
-</p>
+This introduces one additional round-trip delay. </p>
</ul>
@@ -200,7 +200,7 @@ lookups is ignored. </p>
/etc/postfix/<a href="postconf.5.html">main.cf</a>:
<a href="postconf.5.html#smtp_connection_cache_destinations">smtp_connection_cache_destinations</a> = $<a href="postconf.5.html#relayhost">relayhost</a>
<a href="postconf.5.html#smtp_connection_cache_destinations">smtp_connection_cache_destinations</a> = hotmail.com, ...
-<a href="postconf.5.html#smtp_connection_cache_destinations">smtp_connection_cache_destinations</a> = static:all (<i>not recommended</i>)
+<a href="postconf.5.html#smtp_connection_cache_destinations">smtp_connection_cache_destinations</a> = <a href="DATABASE_README.html#types">static</a>:all (<i>not recommended</i>)
</pre>
</blockquote>


@@ -68,7 +68,7 @@ before they can become a problem). </p>
<h2> <a name="concurrency"> Concurrency scheduling </a> </h2>
<p> The following sections document the Postfix 2.5 concurrency
-scheduler, after a discussion of the limitations of the existing
+scheduler, after a discussion of the limitations of the earlier
concurrency scheduler. This is followed by results of medium-concurrency
experiments, and a discussion of trade-offs between performance and
robustness. </p>
@@ -734,7 +734,8 @@ destinations is the message to be delivered and what transports are
going to be used for the delivery. </p>
<li> <p> Each recipient entry groups a batch of recipients of one
-message which are all going to be delivered to the same destination.
+message which are all going to be delivered to the same destination
+(and over the same transport).
</p>
<li> <p> Each transport structure groups everything what is going
@@ -1064,9 +1065,8 @@ can preempt this job.
<p>
[Well, the truth is, the counter is incremented every time an entry
-is selected and it is divided by k when it is used. Or even more
-true, there is no division, the other side of the equation is
-multiplied by k. But for the understanding it's good enough to use
+is selected and it is divided by k when it is used.
+But for the understanding it's good enough to use
the above approximation of the truth.]
</p>


@@ -71,7 +71,7 @@ waits for the TCP final handshake to complete. </p>
<li> <p> SMTP Connection caching introduces some overhead: the
client needs to send an RSET command to find out if a connection
is still usable, before it can send the next MAIL FROM command.
-</p>
+This introduces one additional round-trip delay. </p>
</ul>


@@ -68,7 +68,7 @@ before they can become a problem). </p>
<h2> <a name="concurrency"> Concurrency scheduling </a> </h2>
<p> The following sections document the Postfix 2.5 concurrency
-scheduler, after a discussion of the limitations of the existing
+scheduler, after a discussion of the limitations of the earlier
concurrency scheduler. This is followed by results of medium-concurrency
experiments, and a discussion of trade-offs between performance and
robustness. </p>
@@ -734,7 +734,8 @@ destinations is the message to be delivered and what transports are
going to be used for the delivery. </p>
<li> <p> Each recipient entry groups a batch of recipients of one
-message which are all going to be delivered to the same destination.
+message which are all going to be delivered to the same destination
+(and over the same transport).
</p>
<li> <p> Each transport structure groups everything what is going
@@ -1064,9 +1065,8 @@ can preempt this job.
<p>
[Well, the truth is, the counter is incremented every time an entry
-is selected and it is divided by k when it is used. Or even more
-true, there is no division, the other side of the equation is
-multiplied by k. But for the understanding it's good enough to use
+is selected and it is divided by k when it is used.
+But for the understanding it's good enough to use
the above approximation of the truth.]
</p>


@@ -75,6 +75,9 @@
#endif
#ifndef T_RRSIG
#define T_RRSIG 46 /* Avoid unknown RR in logs */
#endif
+#ifndef T_DNAME
+#define T_DNAME 39 /* [RFC6672] */
+#endif
/*


@@ -401,6 +401,7 @@ static int dns_get_rr(DNS_RR **list, const char *orig_name, DNS_REPLY *reply,
msg_panic("dns_get_rr: don't know how to extract resource type %s",
dns_strtype(fixed->type));
case T_CNAME:
+case T_DNAME:
case T_MB:
case T_MG:
case T_MR:


@@ -174,6 +174,9 @@ static struct dns_type_map dns_type_map[] = {
#ifdef T_RRSIG
T_RRSIG, "RRSIG",
#endif
+#ifdef T_DNAME
+T_DNAME, "DNAME",
+#endif
#ifdef T_ANY
T_ANY, "ANY",
#endif


@@ -58,6 +58,7 @@ static void print_rr(DNS_RR *rr)
printf("%s: %s\n", dns_strtype(rr->type), host.buf);
break;
case T_CNAME:
+case T_DNAME:
case T_MB:
case T_MG:
case T_MR:


@@ -3514,8 +3514,10 @@ extern char *var_psc_acl;
#define DEF_PSC_WLIST_IF "static:all"
extern char *var_psc_wlist_if;
+#define NOPROXY_PROTO_NAME ""
#define VAR_PSC_UPROXY_PROTO "postscreen_upstream_proxy_protocol"
-#define DEF_PSC_UPROXY_PROTO ""
+#define DEF_PSC_UPROXY_PROTO NOPROXY_PROTO_NAME
extern char *var_psc_uproxy_proto;
#define VAR_PSC_UPROXY_TMOUT "postscreen_upstream_proxy_timeout"
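The empty default means postscreen expects no proxy handshake. For an upstream proxy that speaks the haproxy protocol, enabling it is a main.cf setting along these lines (a sketch; the timeout value shown is only an example):

```
/etc/postfix/main.cf:
    postscreen_upstream_proxy_protocol = haproxy
    postscreen_upstream_proxy_timeout = 5s
```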


@@ -20,7 +20,7 @@
* Patches change both the patchlevel and the release date. Snapshots have no
* patchlevel; they change the release date only.
*/
-#define MAIL_RELEASE_DATE "20130623"
+#define MAIL_RELEASE_DATE "20130709"
#define MAIL_VERSION_NUMBER "2.11"
#ifdef SNAPSHOT


@@ -179,7 +179,7 @@ typedef struct {
} PSC_ENDPT_LOOKUP_INFO;
static const PSC_ENDPT_LOOKUP_INFO psc_endpt_lookup_info[] = {
-DEF_PSC_UPROXY_PROTO, psc_endpt_local_lookup,
+NOPROXY_PROTO_NAME, psc_endpt_local_lookup,
HAPROXY_PROTO_NAME, psc_endpt_haproxy_lookup,
0,
};


@@ -130,12 +130,7 @@ static void qmgr_job_link(QMGR_JOB *job)
{
QMGR_TRANSPORT *transport = job->transport;
QMGR_MESSAGE *message = job->message;
-QMGR_JOB *prev,
-*next,
-*list_prev,
-*list_next,
-*unread,
-*current;
+QMGR_JOB *prev, *next, *list_prev, *list_next, *unread, *current;
int delay;
/*
@@ -163,6 +158,13 @@ static void qmgr_job_link(QMGR_JOB *job)
* for jobs which are created long after the first chunk of recipients
* was read in-core (either of these can happen only for multi-transport
* messages).
+*
+* XXX Note that we test stack_parent rather than stack_level below. This
+* subtle difference allows us to enqueue the job in correct time order
+* with respect to orphaned children even after their original parent on
+* level zero is gone. Consequently, the early loop stop in candidate
+* selection works reliably, too. These are the reasons why we care to
+* bother with children adoption at all.
*/
current = transport->job_current;
for (next = 0, prev = transport->job_list.prev; prev;
@@ -278,8 +280,7 @@ void qmgr_job_move_limits(QMGR_JOB *job)
QMGR_TRANSPORT *transport = job->transport;
QMGR_MESSAGE *message = job->message;
QMGR_JOB *next = transport->job_next_unread;
-int rcpt_unused,
-msg_rcpt_unused;
+int rcpt_unused, msg_rcpt_unused;
/*
* Find next unread job on the job list if necessary. Cache it for later.
@@ -483,13 +484,9 @@ static void qmgr_job_count_slots(QMGR_JOB *job)
static QMGR_JOB *qmgr_job_candidate(QMGR_JOB *current)
{
QMGR_TRANSPORT *transport = current->transport;
-QMGR_JOB *job,
-*best_job = 0;
-double score,
-best_score = 0.0;
-int max_slots,
-max_needed_entries,
-max_total_entries;
+QMGR_JOB *job, *best_job = 0;
+double score, best_score = 0.0;
+int max_slots, max_needed_entries, max_total_entries;
int delay;
time_t now = sane_time();
@@ -576,8 +573,7 @@ static QMGR_JOB *qmgr_job_preempt(QMGR_JOB *current)
{
const char *myname = "qmgr_job_preempt";
QMGR_TRANSPORT *transport = current->transport;
-QMGR_JOB *job,
-*prev;
+QMGR_JOB *job, *prev;
int expected_slots;
int rcpt_slots;
@@ -706,6 +702,9 @@ static void qmgr_job_pop(QMGR_JOB *job)
/*
* Adjust the number of delivery slots available to preempt job's parent.
+* Note that the -= actually adds back any unused slots, as we have
+* already subtracted the expected amount of slots from both counters
+* when we did the preemption.
*
* Note that we intentionally do not adjust slots_used of the parent. Doing
* so would decrease the maximum per message inflation factor if the
@@ -777,16 +776,16 @@ static QMGR_PEER *qmgr_job_peer_select(QMGR_JOB *job)
* in. Otherwise single recipient for slow destination might starve the
* entire message delivery, leaving lot of fast destination recipients
* sitting idle in the queue file.
-*
-* Ideally we would like to read in recipients whenever there is a
-* space, but to prevent excessive I/O, we read them only when enough
-* time has passed or we can read enough of them at once.
-*
+*
+* Ideally we would like to read in recipients whenever there is a space,
+* but to prevent excessive I/O, we read them only when enough time has
+* passed or we can read enough of them at once.
+*
* Note that even if we read the recipients few at a time, the message
* loading code tries to put them to existing recipient entries whenever
* possible, so the per-destination recipient grouping is not grossly
* affected.
*
*
* XXX Workaround for logic mismatch. The message->refcount test needs
* explanation. If the refcount is zero, it means that qmgr_active_done()
* is being completed asynchronously. In such case, we can't read in
@@ -799,7 +798,7 @@ static QMGR_PEER *qmgr_job_peer_select(QMGR_JOB *job)
&& message->refcount > 0
&& (message->rcpt_limit - message->rcpt_count >= job->transport->refill_limit
|| (message->rcpt_limit > message->rcpt_count
-&& sane_time() - message->refill_time >= job->transport->refill_delay)))
+&& sane_time() - message->refill_time >= job->transport->refill_delay)))
qmgr_message_realloc(message);
/*
@@ -809,8 +808,9 @@ static QMGR_PEER *qmgr_job_peer_select(QMGR_JOB *job)
return (peer);
/*
-* There is no suitable peer in-core, so try reading in more recipients if possible.
-* This is our last chance to get suitable peer before giving up on this job for now.
+* There is no suitable peer in-core, so try reading in more recipients
+* if possible. This is our last chance to get suitable peer before
+* giving up on this job for now.
*
* XXX For message->refcount, see above.
*/
@@ -828,8 +828,7 @@ static QMGR_PEER *qmgr_job_peer_select(QMGR_JOB *job)
QMGR_ENTRY *qmgr_job_entry_select(QMGR_TRANSPORT *transport)
{
-QMGR_JOB *job,
-*next;
+QMGR_JOB *job, *next;
QMGR_PEER *peer;
QMGR_ENTRY *entry;
@@ -948,7 +947,7 @@ QMGR_ENTRY *qmgr_job_entry_select(QMGR_TRANSPORT *transport)
/* qmgr_job_blocker_update - update "blocked job" status */
-void qmgr_job_blocker_update(QMGR_QUEUE *queue)
+void qmgr_job_blocker_update(QMGR_QUEUE *queue)
{
QMGR_TRANSPORT *transport = queue->transport;
@@ -977,4 +976,3 @@ void qmgr_job_blocker_update(QMGR_QUEUE *queue)
queue->blocker_tag = 0;
}
}