This code and its features evolved over several releases; some, but not all, of that evolution is included below for historical completeness. This article represents the stage we had reached before the final changes that were made as we merged the code into open source BIND 9.9.8, 9.10.3, and 9.11 (a future release). Whichever edition you run, we strongly encourage those dependent upon and running the pre-release version of the Recursive Client Rate limiting functionality to upgrade to the production version.
Several new tuning options for recursive server behaviour made their
debut in BIND 9.9.6-S1 and in newer experimental versions of BIND 9.9 and 9.10 (available on request). These features are intended to optimize
recursive server behaviour in favor of good client queries, whilst at
the same time limiting the impact of bad client queries (those that cannot
be resolved, or that take too long to resolve) on local recursive server
resources.
Early-testing Experimental Features Removed
The 'hold-down' timer introduced in 9.9.6-S1b1 has been removed in favor of
rate-limiting fetches per server (described below). The associated
options, holddown-threshold and holddown-time, have also been removed.
Another option introduced in 9.9.6-S1b1, client-soft-quota,
was removed in favor of named calculating its own soft quota based on
the recursive-clients (hard quota) setting. For the changes to the
client soft quota, see below.
If any of those settings are still
in your named.conf file from 9.9.6-S1b1, you will get an error when
starting named (or from named-checkconf).
Rate-limiting Fetches Per Server
Replacing the hold-down timer feature is a dynamic limit to the number of fetches allowed per server (IP).
The fetches-per-server option
sets a hard upper limit to the number of outstanding fetches allowed
for a single server. The lower limit is 2% of fetches-per-server, but
never below 1.
Based on a moving average of the timeout ratio for
each server, the server's individual quota is periodically
adjusted up or down. The adjustments up and down are not linear;
instead they follow a curve that is initially aggressive and then becomes more gradual.
The fetch-quota-params option specifies four parameters that control how the per-server fetch limit is calculated.
fetch-quota-params 100 0.1 0.3 0.7;
The default value for fetches-per-server is 0, which disables this feature.
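As an illustration, enabling the feature might look like the following named.conf fragment (the values shown are illustrative, not tuning recommendations):

```
options {
    // Illustrative only: cap outstanding fetches to any single
    // authoritative server at 100 (0, the default, disables the limit).
    fetches-per-server 100;
    // Quota recalculation parameters, shown here at their defaults.
    fetch-quota-params 100 0.1 0.3 0.7;
};
```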
The first number in fetch-quota-params
specifies how often, in number of queries sent to the server, to recalculate
its fetch quota. The default is to recalculate every 100 queries sent. The
second number specifies the threshold timeout ratio below which the
server will be considered to be "good" and will have its fetch quota
raised if it is below the maximum. The default is 0.1, or 10%. The
third number specifies the threshold timeout ratio above which the
server will be considered to be "bad" and will have its fetch quota
lowered if it is above the minimum. The default is 0.3, or 30%. The
fourth number specifies the weight given to the most recent counting
period when averaging it with the previously held timeout ratio. The
default is 0.7, or 70%.
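To make the averaging concrete, here is a minimal Python sketch of the bookkeeping described above. The function names and the simple one-step quota adjustment are illustrative assumptions; named's actual adjustment follows a non-linear curve.

```python
# Sketch of the per-server timeout averaging and quota adjustment
# described above, using the fetch-quota-params defaults (100 0.1 0.3 0.7).
# Names and the +/-1 step are illustrative, not named's implementation.

def update_timeout_ratio(previous_ratio, timeouts, queries, weight=0.7):
    """Average the latest counting period's timeout ratio with the
    previously held ratio, giving the recent period `weight` (0.7)."""
    recent = timeouts / queries
    return weight * recent + (1 - weight) * previous_ratio

def adjust_quota(quota, ratio, maximum, low=0.1, high=0.3):
    """Raise the quota for a "good" server (ratio < low) and lower it
    for a "bad" one (ratio > high), within the hard limits."""
    minimum = max(1, maximum // 50)  # lower limit: 2% of the max, never below 1
    if ratio < low and quota < maximum:
        return quota + 1
    if ratio > high and quota > minimum:
        return quota - 1
    return quota
```

For example, a server that timed out 40 of its last 100 queries, with a previous ratio of 0.2, gets a new moving average of 0.7 * 0.4 + 0.3 * 0.2 = 0.34, which is above the 0.3 "bad" threshold, so its quota is lowered.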
By design, this per-server quota should
have little impact on lightly-used servers no matter how responsive (or
not) they are, whilst heavily-used servers will have enough traffic to
keep the moving average of their timeout ratio "fresh" even when they
are deeply penalized for not responding.
Rate-limiting Fetches Per Zone
named already has an option that limits how many identical client queries
(that cannot be answered directly from cache or authoritative zone data)
it will accept. When many clients simultaneously query for the same
name and type, the clients will all be attached to the same fetch, up to
the max-clients-per-query limit, and only one iterative query
will be sent. This doesn't help, however, in the situation where client
queries are for the same domain but the hostname portion of each query
is unique.
To help with this, we're introducing logic to rate-limit by zone
instead. This is configured using a new option, fetches-per-zone,
which defines the maximum number of simultaneous iterative queries to
any one domain that the server will permit before blocking new queries
for data in or beneath that zone. If fetches-per-zone is set to 0, there is no limit on the number of fetches per zone and no queries will be dropped.
The default is 0, which disables this feature (in earlier versions the default was 200).
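A hypothetical named.conf fragment enabling the per-zone limit (the value shown is the earlier default, not a recommendation):

```
options {
    // Illustrative: allow at most 200 simultaneous iterative queries
    // for names in or beneath any one zone; 0 (the default) disables this.
    fetches-per-zone 200;
};
```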
When a fetch context is created to carry out an iterative query, it is
initialized with the closest known zone cut, and a cap is placed on the
number of fetches allowed to be querying for that same zone cut at any one time.
The statistics maintained on fetches per zone are reset when there are
no outstanding fetches per zone. This is because the structure that was
holding them doesn't persist once there are no longer any outstanding
fetches for that zone.
FAQs on Rate-limiting Fetches Per Zone/Server
What happens when a client query is dropped as a result of fetches-per-server/zone rate-limiting?
Clients whose queries are dropped due to client rate-limiting quotas are sent a SERVFAIL response.
When are these features useful?
These options are particularly good when a large number of queries are being received:
- fetches-per-zone: for different names in the same zone
- fetches-per-server: for different names in different domains for the same server
and when the authoritative servers for these are slow to respond or are failing
to respond. These options should not impact popular domains whose servers are
responding promptly to each query received.
When are these features unlikely to be helpful?
If the authoritative servers are responding very quickly, then it is possible
that the number of outstanding queries for that server or zone will
never reach the limit, rendering this mechanism ineffectual. Care
should also be taken not to configure too low a value for these options.
Are there any edge cases where odd behavior might be observed?
- fetches-per-server might negatively impact servers which host many popular zones.
- fetches-per-zone might negatively impact some popular social media and other busy sites.
If you are restarting a server, or if the cache has just been cleared via the rndc
utility, then there may be some temporary spikes in traffic that
trigger these limits unexpectedly, but the effect should be temporary.
How can I find out how this configuration option is impacting my server?
rndc recursing now reports the list of current fetches, with statistics on how many
are active, how many have been allowed, and how many have been dropped
due to exceeding the fetches-per-server and fetches-per-zone quotas.
Client Drop Policy
This feature was introduced following the observation that the build-up of recursive clients is
very similar in behavior to a TCP SYN storm. Researchers determined
that, when the pool of connections becomes full, dropping a random connection
(in our case, a random recursive client) is a more effective strategy than always dropping the
oldest. This is because a random drop has a good chance of discarding one of the
'bad' connections rather than the 'OK' ones, and of doing so sooner
rather than later, which overall works out better. This code also works best in combination with tuning the
recursive-clients soft limit so that the recursive server is never in
the position of hitting the hard limit - we always want to accept the
new inbound query if at all possible.
The client-drop-policy option lets you set
the probabilities of dropping the newest, a random, or the oldest existing
recursive query when the recursive-clients quota is reached. (Note: soft or
hard - in either case an existing query is dropped, but when the hard limit is
reached we also drop the inbound query.) The option
takes three arguments that define percentage probabilities for "drop
newest", "drop random" and "drop oldest", in that order. All three values
must be set, and they must sum to exactly 100. By default, the
probabilities are 0% for drop newest, 50% for drop random, and
50% for drop oldest.
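Assuming the three percentages are given as space-separated arguments (in the same style as fetch-quota-params), an explicit statement of the default policy might look like:

```
options {
    // Illustrative: 0% drop newest, 50% drop random, 50% drop oldest.
    // The three values must sum to exactly 100.
    client-drop-policy 0 50 50;
};
```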
Prior to the RED implementation, and with no client-drop-policy defined,
the default would have been to drop the oldest outstanding query.
Recursive Client Contexts Soft Quota
In the traditional
recursive clients context model, we have both a soft and a hard limit
to the number of recursive clients. When reached, the soft limit acts
by dropping a pending request for each new incoming request. When named
reaches the hard limit, it drops both a pending request, and the new
inbound client query. So ideally we want named to be managing its
backlog of recursive clients before reaching the hard limit.
There is no soft limit at all in the traditional model where
recursive-clients <= 1000. For recursive-clients > 1000, the soft
quota defaults to the hard quota minus 100.
In 9.9.6-S1b1 we introduced the client-soft-quota
option to give the operator precise control over how the soft quota was
configured. In testing since the introduction of this option we have
determined that tuning this is not very useful, but that better defaults
were needed than we had before.
Now, when recursive-clients
<= 1000, the soft quota is 90% of recursive-clients. When
recursive-clients > 1000, the soft quota will be equal to the hard
quota minus either 100 or the number of worker threads, whichever is
greater.
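The rule above can be sketched as follows; the function name is illustrative, and the assumption that the larger of 100 or the thread count is subtracted reflects the description in this article, not named's source code.

```python
# Sketch of the soft-quota defaults described above (illustrative,
# not named's actual implementation).

def recursive_soft_quota(recursive_clients, worker_threads):
    """Derive the soft quota from the recursive-clients hard quota."""
    if recursive_clients <= 1000:
        # Small hard quotas: soft quota is 90% of the hard quota.
        return (recursive_clients * 90) // 100
    # Larger hard quotas: subtract a margin of at least 100.
    margin = max(100, worker_threads)
    return recursive_clients - margin
```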
Caching of SERVFAIL responses
New in 9.9.6-S1 is a feature to cache a SERVFAIL response caused by a DNSSEC
validation failure or other general server failure. This feature is
controlled by the servfail-ttl option, in global or per-view options.
The SERVFAIL cache is not consulted if a query has the CD (Checking
Disabled) bit set; this allows a query that failed due to DNSSEC
validation to be retried without waiting for the SERVFAIL TTL to expire.
The default value for servfail-ttl is 10, which causes any SERVFAIL results
to be cached for 10 seconds. The maximum value is 300 (five minutes); a
higher value will be silently reduced to 300. A value of 0 disables
SERVFAIL caching.
SERVFAIL caching addresses some of the same problems as
fetches-per-zone and fetches-per-server.
Note that there can be unexpected consequences from this caching, as
previously all SERVFAIL responses were retried immediately when the same
query was received again.
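For example, a shorter-than-default TTL could be configured like this (the value shown is illustrative):

```
options {
    // Illustrative: cache SERVFAIL results for 2 seconds instead of
    // the default 10; setting 0 disables SERVFAIL caching entirely.
    servfail-ttl 2;
};
```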
Only SERVFAIL responses due to recursion or validation failures will be
cached. SERVFAILs that occur due to expiry of, or failure to load,
authoritative zone data will not be cached. This is because it requires
little work from named, and very little delay or use of resources,
to recreate the same answer for a repeated client query. It also
ensures that when the problem is rectified, named starts responding
authoritatively again immediately instead of waiting for the SERVFAIL
TTL to expire.
Caching of SERVFAIL responses assists in limiting the impact of
repeated queries (due to client retries) for the same name for which
resolution has already failed.
Caching SERVFAIL responses has in some situations proved detrimental to the client experience, particularly when the cause of the SERVFAIL presented to the client was transient and an immediate retry of the query would have been the more appropriate action. Caching of SERVFAIL responses will be reviewed and revised before 9.11.0 is released. Production environments upgrading to 9.9.8 and 9.10.3 will find that the configuration option servfail-ttl is no longer valid.
Production environments continuing to use the older versions of BIND that include this feature are recommended to disable it by setting servfail-ttl 0; or, if they are deriving clear benefit from it, to consider setting it to a lower value than the default of 10 seconds - 1s or 2s should be sufficient.
© 2001-2016 Internet Systems Consortium