5. Advanced Configurations

5.1. Dynamic Update

Dynamic update is a method for adding, replacing, or deleting records in a primary server by sending it a special form of DNS messages. The format and meaning of these messages is specified in RFC 2136.

Dynamic update is enabled by including an allow-update or an update-policy clause in the zone statement.
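
For example, a primary zone that accepts updates signed with a TSIG key might be configured as follows (the zone name, file name, and key name here are placeholders; the key itself must be defined elsewhere in named.conf):

zone "dynamic.example.com" {
    type primary;
    file "dynamic.example.com.db";
    // accept only updates signed with this TSIG key
    allow-update { key "ddns-key"; };
};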

If the zone’s update-policy is set to local, updates to the zone are permitted for the key local-ddns, which is generated by named at startup. See Dynamic Update Policies for more details.

Dynamic updates using Kerberos-signed requests can be made using the TKEY/GSS protocol, either by setting the tkey-gssapi-keytab option or by setting both the tkey-gssapi-credential and tkey-domain options. Once enabled, Kerberos-signed requests are matched against the update policies for the zone, using the Kerberos principal as the signer for the request.
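
As a sketch, GSS-TSIG support might be enabled by pointing named at a Kerberos keytab (the path shown is a placeholder):

options {
    tkey-gssapi-keytab "/etc/named.keytab";
};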

Updating of secure zones (zones using DNSSEC) follows RFC 3007: RRSIG, NSEC, and NSEC3 records affected by updates are automatically regenerated by the server using an online zone key. Update authorization is based on transaction signatures and an explicit server policy.

5.1.1. The Journal File

All changes made to a zone using dynamic update are stored in the zone’s journal file. This file is automatically created by the server when the first dynamic update takes place. The name of the journal file is formed by appending the extension .jnl to the name of the corresponding zone file unless specifically overridden. The journal file is in a binary format and should not be edited manually.

The server also occasionally writes (“dumps”) the complete contents of the updated zone to its zone file. This is not done immediately after each dynamic update because that would be too slow when a large zone is updated frequently. Instead, the dump is delayed by up to 15 minutes, allowing additional updates to take place. During the dump process, transient files are created with the extensions .jnw and .jbk; under ordinary circumstances, these are removed when the dump is complete, and can be safely ignored.

When a server is restarted after a shutdown or crash, it replays the journal file to incorporate into the zone any updates that took place after the last zone dump.

Changes that result from incoming incremental zone transfers are also journaled in a similar way.

The zone files of dynamic zones cannot normally be edited by hand because they are not guaranteed to contain the most recent dynamic changes; those are only in the journal file. The only way to ensure that the zone file of a dynamic zone is up-to-date is to run rndc stop.

To make changes to a dynamic zone manually, follow these steps: first, disable dynamic updates to the zone using rndc freeze zone. This updates the zone file with the changes stored in its .jnl file. Then, edit the zone file. Finally, run rndc thaw zone to reload the changed zone and re-enable dynamic updates.
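
Assuming a dynamic zone named example.com, the sequence of commands might look like this:

rndc freeze example.com
# edit the zone file, increment the SOA serial number, and save
rndc thaw example.com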

rndc sync zone updates the zone file with changes from the journal file without stopping dynamic updates; this may be useful for viewing the current zone state. To remove the .jnl file after updating the zone file, use rndc sync -clean.

5.2. Incremental Zone Transfers (IXFR)

The incremental zone transfer (IXFR) protocol is a way for secondary servers to transfer only changed data, instead of having to transfer an entire zone. The IXFR protocol is specified in RFC 1995.

When acting as a primary server, BIND 9 supports IXFR for those zones where the necessary change history information is available. These include primary zones maintained by dynamic update and secondary zones whose data was obtained by IXFR. For manually maintained primary zones, and for secondary zones obtained by performing a full zone transfer (AXFR), IXFR is supported only if the option ixfr-from-differences is set to yes.
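
For example, a manually maintained primary zone might enable IXFR as follows (the zone and file names are placeholders):

zone "static.example.com" {
    type primary;
    file "static.example.com.db";
    // record differences between zone versions so IXFR can be offered
    ixfr-from-differences yes;
};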

When acting as a secondary server, BIND 9 attempts to use IXFR unless it is explicitly disabled. For more information about disabling IXFR, see the description of the request-ixfr clause of the server statement.

When a secondary server receives a zone via AXFR, it creates a new copy of the zone database and then swaps it into place; during the loading process, queries continue to be served from the old database with no interference. When receiving a zone via IXFR, however, changes are applied to the running zone, which may degrade query performance during the transfer. If a server receiving an IXFR request determines that the response size would be similar in size to an AXFR response, it may wish to send AXFR instead. The threshold at which this determination is made can be configured using the max-ixfr-ratio option.
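
For example (the percentage shown is illustrative):

options {
    // respond with AXFR if an IXFR response would reach 80% of the AXFR size
    max-ixfr-ratio 80%;
};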

5.3. Split DNS

Setting up different views of the DNS space to internal and external resolvers is usually referred to as a split DNS setup. There are several reasons an organization might want to set up its DNS this way.

One common reason to use split DNS is to hide “internal” DNS information from “external” clients on the Internet. There is some debate as to whether this is actually useful. Internal DNS information leaks out in many ways (via email headers, for example) and most savvy “attackers” can find the information they need using other means. However, since listing addresses of internal servers that external clients cannot possibly reach can result in connection delays and other annoyances, an organization may choose to use split DNS to present a consistent view of itself to the outside world.

Another common reason for setting up a split DNS system is to allow internal networks that are behind filters or in RFC 1918 (private, reserved) IP space to resolve DNS on the Internet. Split DNS can also be used to allow mail from outside back into the internal network.

5.3.1. Example Split DNS Setup

Let’s say a company named Example, Inc. (example.com) has several corporate sites that have an internal network with reserved Internet Protocol (IP) space and an external demilitarized zone (DMZ), or “outside” section of a network, that is available to the public.

Example, Inc. wants its internal clients to be able to resolve external hostnames and to exchange mail with people on the outside. The company also wants its internal resolvers to have access to certain internal-only zones that are not available at all outside of the internal network.

To accomplish this, the company sets up two sets of name servers. One set is on the inside network (in the reserved IP space) and the other set is on bastion hosts, which are “proxy” hosts in the DMZ that can talk to both sides of its network.

The internal servers are configured to forward all queries, except queries for site1.internal, site2.internal, site1.example.com, and site2.example.com, to the servers in the DMZ. These internal servers have complete sets of information for site1.example.com, site2.example.com, site1.internal, and site2.internal.

To protect the site1.internal and site2.internal domains, the internal name servers must be configured to disallow all queries to these domains from any external hosts, including the bastion hosts.

The external servers, which are on the bastion hosts, are configured to serve the “public” version of the site1.example.com and site2.example.com zones. This could include things such as the host records for public servers (www.example.com and ftp.example.com) and mail exchange (MX) records (a.mx.example.com and b.mx.example.com).

In addition, the public site1.example.com and site2.example.com zones should have special MX records that contain wildcard (*) records pointing to the bastion hosts. This is needed because external mail servers have no other way of determining how to deliver mail to those internal hosts. With the wildcard records, the mail is delivered to the bastion host, which can then forward it on to internal hosts.

Here’s an example of a wildcard MX record:

*   IN MX 10 external1.example.com.

Now that they accept mail on behalf of anything in the internal network, the bastion hosts need to know how to deliver mail to internal hosts. The resolvers on the bastion hosts need to be configured to point to the internal name servers for DNS resolution.

Queries for internal hostnames are answered by the internal servers, and queries for external hostnames are forwarded back out to the DNS servers on the bastion hosts.

For all of this to work properly, internal clients need to be configured to query only the internal name servers for DNS queries. This could also be enforced via selective filtering on the network.
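
For example, an internal client's resolv.conf (or equivalent) might point only at the internal name servers:

search example.com
nameserver 172.16.72.2
nameserver 172.16.72.3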

If everything has been set properly, Example, Inc.’s internal clients are now able to:

  • Look up any hostnames in the site1.example.com and site2.example.com zones.

  • Look up any hostnames in the site1.internal and site2.internal domains.

  • Look up any hostnames on the Internet.

  • Exchange mail with both internal and external users.

Hosts on the Internet are able to:

  • Look up any hostnames in the site1.example.com and site2.example.com zones.

  • Exchange mail with anyone in the site1.example.com and site2.example.com zones.

Here is an example configuration for the setup described above. Note that this is only configuration information; for information on how to configure the zone files, see Configurations and Zone Files.

Internal DNS server config:

acl internals { 172.16.72.0/24; 192.168.1.0/24; };

acl externals { bastion-ips-go-here; };

options {
    ...
    ...
    forward only;
    // forward to external servers
    forwarders {
        bastion-ips-go-here;
    };
    // sample allow-transfer (no one)
    allow-transfer { none; };
    // restrict query access
    allow-query { internals; externals; };
    // restrict recursion
    allow-recursion { internals; };
    ...
    ...
};

// sample primary zone
zone "site1.example.com" {
  type primary;
  file "m/site1.example.com";
  // do normal iterative resolution (do not forward)
  forwarders { };
  allow-query { internals; externals; };
  allow-transfer { internals; };
};

// sample secondary zone
zone "site2.example.com" {
  type secondary;
  file "s/site2.example.com";
  primaries { 172.16.72.3; };
  forwarders { };
  allow-query { internals; externals; };
  allow-transfer { internals; };
};

zone "site1.internal" {
  type primary;
  file "m/site1.internal";
  forwarders { };
  allow-query { internals; };
  allow-transfer { internals; };
};

zone "site2.internal" {
  type secondary;
  file "s/site2.internal";
  primaries { 172.16.72.3; };
  forwarders { };
  allow-query { internals; };
  allow-transfer { internals; };
};

External (bastion host) DNS server configuration:

acl internals { 172.16.72.0/24; 192.168.1.0/24; };

acl externals { bastion-ips-go-here; };

options {
  ...
  ...
  // sample allow-transfer (no one)
  allow-transfer { none; };
  // default query access
  allow-query { any; };
  // restrict cache access
  allow-query-cache { internals; externals; };
  // restrict recursion
  allow-recursion { internals; externals; };
  ...
  ...
};

// sample primary zone
zone "site1.example.com" {
  type primary;
  file "m/site1.example.com";
  allow-transfer { internals; externals; };
};

zone "site2.example.com" {
  type secondary;
  file "s/site2.example.com";
  primaries { another_bastion_host_maybe; };
  allow-transfer { internals; externals; };
};

In the resolv.conf (or equivalent) on the bastion host(s):

search ...
nameserver 172.16.72.2
nameserver 172.16.72.3
nameserver 172.16.72.4

5.4. IPv6 Support in BIND 9

BIND 9 fully supports all currently defined forms of IPv6 name-to-address and address-to-name lookups. It also uses IPv6 addresses to make queries when running on an IPv6-capable system.

For forward lookups, BIND 9 supports only AAAA records. RFC 3363 deprecated the use of A6 records, and client-side support for A6 records was accordingly removed from BIND 9. However, authoritative BIND 9 name servers still load zone files containing A6 records correctly, answer queries for A6 records, and accept zone transfer for a zone containing A6 records.

For IPv6 reverse lookups, BIND 9 supports the traditional “nibble” format used in the ip6.arpa domain, as well as the older, deprecated ip6.int domain. Older versions of BIND 9 supported the “binary label” (also known as “bitstring”) format, but support of binary labels has been completely removed per RFC 3363. Many applications in BIND 9 do not understand the binary label format at all anymore, and return an error if one is given. In particular, an authoritative BIND 9 name server will not load a zone file containing binary labels.

5.4.1. Address Lookups Using AAAA Records

The IPv6 AAAA record is a parallel to the IPv4 A record, and, unlike the deprecated A6 record, specifies the entire IPv6 address in a single record. For example:

$ORIGIN example.com.
host            3600    IN      AAAA    2001:db8::1

Use of IPv4-in-IPv6 mapped addresses is not recommended. If a host has an IPv4 address, use an A record, not a AAAA, with ::ffff:192.168.42.1 as the address.
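
That is, prefer the plain A record over the IPv4-mapped AAAA form:

; recommended
host            3600    IN      A       192.168.42.1
; not recommended
host            3600    IN      AAAA    ::ffff:192.168.42.1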

5.4.2. Address-to-Name Lookups Using Nibble Format

When looking up an address in nibble format, the address components are simply reversed, just as in IPv4, and ip6.arpa. is appended to the resulting name. For example, the following records provide a reverse name lookup for a host with address 2001:db8::1:

$ORIGIN 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0  14400   IN    PTR    (
                    host.example.com. )
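
In practice, the reverse name rarely needs to be constructed by hand; dig builds it automatically when given the -x option:

dig -x 2001:db8::1

This issues a PTR query for 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.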

5.5. Dynamically Loadable Zones (DLZ)

Dynamically Loadable Zones (DLZ) are an extension to BIND 9 that allows zone data to be retrieved directly from an external database. There is no required format or schema. DLZ modules exist for several different database backends, including MySQL and LDAP, and can be written for any other.

The DLZ module provides data to named in text format, which is then converted to DNS wire format by named. This conversion, and the lack of any internal caching, places significant limits on the query performance of DLZ modules. Consequently, DLZ is not recommended for use on high-volume servers. However, it can be used in a hidden primary configuration, with secondaries retrieving zone updates via AXFR. Note, however, that DLZ has no built-in support for DNS notify; secondary servers are not automatically informed of changes to the zones in the database.

5.5.1. Configuring DLZ

A DLZ database is configured with a dlz statement in named.conf:

dlz example {
       database "dlopen driver.so args";
       search yes;
};

This specifies a DLZ module to search when answering queries; the module is implemented in driver.so and is loaded at runtime by the dlopen DLZ driver. Multiple dlz statements can be specified; when answering a query, all DLZ modules with search set to yes are queried to see whether they contain an answer for the query name. The best available answer is returned to the client.

The search option in the above example can be omitted, because yes is the default value.

If search is set to no, this DLZ module is not searched for the best match when a query is received. Instead, zones in this DLZ must be separately specified in a zone statement. This allows users to configure a zone normally using standard zone-option semantics, but specify a different database backend for storage of the zone’s data. For example, to implement NXDOMAIN redirection using a DLZ module for backend storage of redirection rules:

dlz other {
       database "dlopen driver.so args";
       search no;
};

zone "." {
       type redirect;
       dlz other;
};

5.5.2. Sample DLZ Module

For guidance in the implementation of DLZ modules, the directory contrib/dlz/example contains a basic dynamically linkable DLZ module - i.e., one which can be loaded at runtime by the “dlopen” DLZ driver. The example sets up a single zone, whose name is passed to the module as an argument in the dlz statement:

dlz other {
       database "dlopen driver.so example.nil";
};

In the above example, the module is configured to create a zone “example.nil”, which can answer queries and AXFR requests and accept DDNS updates. At runtime, prior to any updates, the zone contains an SOA, NS, and a single A record at the apex:

example.nil.  3600    IN      SOA     example.nil. hostmaster.example.nil. (
                          123 900 600 86400 3600
                      )
example.nil.  3600    IN      NS      example.nil.
example.nil.  1800    IN      A       10.53.0.1

The sample driver can retrieve information about the querying client and alter its response on the basis of this information. To demonstrate this feature, the example driver responds to queries for “source-addr.<zonename>/TXT” with the source address of the query. Note, however, that this record will not be included in AXFR or ANY responses. Normally, this feature is used to alter responses in some other fashion, e.g., by providing different address records for a particular name depending on the network from which the query arrived.

Documentation of the DLZ module API can be found in contrib/dlz/example/README. This directory also contains the header file dlz_minimal.h, which defines the API and should be included by any dynamically linkable DLZ module.

5.6. Dynamic Database (DynDB)

Dynamic Database, or DynDB, is an extension to BIND 9 which, like DLZ (see Dynamically Loadable Zones (DLZ)), allows zone data to be retrieved from an external database. Unlike DLZ, a DynDB module provides a full-featured BIND zone database interface. Where DLZ translates DNS queries into real-time database lookups, resulting in relatively poor query performance, and is unable to handle DNSSEC-signed data due to its limited API, a DynDB module can pre-load an in-memory database from the external data source, providing the same performance and functionality as zones served natively by BIND.

A DynDB module supporting LDAP has been created by Red Hat and is available from https://pagure.io/bind-dyndb-ldap.

A sample DynDB module for testing and developer guidance is included with the BIND source code, in the directory bin/tests/system/dyndb/driver.

5.6.1. Configuring DynDB

A DynDB database is configured with a dyndb statement in named.conf:

dyndb example "driver.so" {
    parameters
};

The file driver.so is a DynDB module which implements the full DNS database API. Multiple dyndb statements can be specified, to load different drivers or multiple instances of the same driver. Zones provided by a DynDB module are added to the view’s zone table, and are treated as normal authoritative zones when BIND responds to queries. Zone configuration is handled internally by the DynDB module.

The parameters are passed as an opaque string to the DynDB module’s initialization routine. Configuration syntax differs depending on the driver.

5.6.2. Sample DynDB Module

For guidance in the implementation of DynDB modules, the directory bin/tests/system/dyndb/driver contains a basic DynDB module. The example sets up two zones, whose names are passed to the module as arguments in the dyndb statement:

dyndb sample "sample.so" { example.nil. arpa. };

In the above example, the module is configured to create a zone, “example.nil”, which can answer queries and AXFR requests and accept DDNS updates. At runtime, prior to any updates, the zone contains an SOA, NS, and a single A record at the apex:

example.nil.  86400    IN      SOA     example.nil. example.nil. (
                                              0 28800 7200 604800 86400
                                      )
example.nil.  86400    IN      NS      example.nil.
example.nil.  86400    IN      A       127.0.0.1

When the zone is updated dynamically, the DynDB module determines whether the updated RR is an address (i.e., type A or AAAA); if so, it automatically updates the corresponding PTR record in a reverse zone. Note that updates are not stored permanently; all updates are lost when the server is restarted.

5.7. Catalog Zones

A “catalog zone” is a special DNS zone that contains a list of other zones to be served, along with their configuration parameters. Zones listed in a catalog zone are called “member zones.” When a catalog zone is loaded or transferred to a secondary server which supports this functionality, the secondary server creates the member zones automatically. When the catalog zone is updated (for example, to add or delete member zones, or change their configuration parameters), those changes are immediately put into effect. Because the catalog zone is a normal DNS zone, these configuration changes can be propagated using the standard AXFR/IXFR zone transfer mechanism.

Catalog zones’ format and behavior are specified as an Internet draft for interoperability among DNS implementations. The latest revision of the DNS catalog zones draft can be found here: https://datatracker.ietf.org/doc/draft-toorop-dnsop-dns-catalog-zones/.

5.7.1. Principle of Operation

Normally, if a zone is to be served by a secondary server, the named.conf file on the server must list the zone, or the zone must be added using rndc addzone. In environments with a large number of secondary servers, and/or where the zones being served are changing frequently, the overhead involved in maintaining consistent zone configuration on all the secondary servers can be significant.

A catalog zone is a way to ease this administrative burden: it is a DNS zone that lists member zones that should be served by secondary servers. When a secondary server receives an update to the catalog zone, it adds, removes, or reconfigures member zones based on the data received.

To use a catalog zone, it must first be set up as a normal zone on both the primary and secondary servers that are configured to use it. It must also be added to a catalog-zones list in the options or view statement in named.conf. This is comparable to the way a policy zone is configured as a normal zone and also listed in a response-policy statement.
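
On a secondary server, for instance, the catalog zone itself might first be set up as an ordinary secondary zone (the names and addresses follow the examples in this section):

zone "catalog.example" {
    type secondary;
    file "catalog.example.db";
    primaries { 10.53.0.1; };
};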

To use the catalog zone feature to serve a new member zone:

  • Set up the member zone to be served on the primary as normal. This can be done by editing named.conf or by running rndc addzone.

  • Add an entry to the catalog zone for the new member zone. This can be done by editing the catalog zone’s zone file and running rndc reload, or by updating the zone using nsupdate.
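
For example, the second step might be performed with nsupdate, using names from the examples in this section (authentication details are omitted):

$ nsupdate
> server 10.53.0.1
> update add uniquelabel.zones.catalog.example. 3600 IN PTR domain2.example.
> send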

The change to the catalog zone is propagated from the primary to all secondaries using the normal AXFR/IXFR mechanism. When the secondary receives the update to the catalog zone, it detects the entry for the new member zone, creates an instance of that zone on the secondary server, and points that instance to the primaries specified in the catalog zone data. The newly created member zone is a normal secondary zone, so BIND immediately initiates a transfer of zone contents from the primary. Once complete, the secondary starts serving the member zone.

Removing a member zone from a secondary server requires only deleting the member zone’s entry in the catalog zone; the change to the catalog zone is propagated to the secondary server using the normal AXFR/IXFR transfer mechanism. The secondary server, on processing the update, notices that the member zone has been removed, stops serving the zone, and removes it from its list of configured zones. However, removing the member zone from the primary server must be done by editing the configuration file or running rndc delzone.
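
Continuing the example above, the member zone's entry could be deleted from the catalog with nsupdate:

$ nsupdate
> server 10.53.0.1
> update delete uniquelabel.zones.catalog.example. PTR
> send

On the primary itself, the member zone would then be removed separately, by editing named.conf or running rndc delzone.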

5.7.2. Configuring Catalog Zones

Catalog zones are configured with a catalog-zones statement in the options or view section of named.conf. For example:

catalog-zones {
    zone "catalog.example"
         default-primaries { 10.53.0.1; }
         in-memory no
         zone-directory "catzones"
         min-update-interval 10;
};

This statement specifies that the zone catalog.example is a catalog zone. This zone must be properly configured in the same view. In most configurations, it would be a secondary zone.

The options following the zone name are not required, and may be specified in any order.

default-masters

Synonym for default-primaries.

default-primaries

This option defines the default primaries for member zones listed in a catalog zone, and can be overridden by options within a catalog zone. If no such options are included, then member zones transfer their contents from the servers listed in this option.

in-memory

This option, if set to yes, causes member zones to be stored only in memory. This is functionally equivalent to configuring a secondary zone without a file option. The default is no; member zones’ content is stored locally in a file whose name is automatically generated from the view name, catalog zone name, and member zone name.

zone-directory

This option causes local copies of member zones’ zone files to be stored in the specified directory, if in-memory is not set to yes. The default is to store zone files in the server’s working directory. A non-absolute pathname in zone-directory is assumed to be relative to the working directory.

min-update-interval

This option sets the minimum interval between updates to catalog zones, in seconds. If an update to a catalog zone (for example, via IXFR) happens less than min-update-interval seconds after the most recent update, the changes are not carried out until this interval has elapsed. The default is 5 seconds.

Catalog zones are defined on a per-view basis. Configuring a non-empty catalog-zones statement in a view automatically turns on allow-new-zones for that view. This means that rndc addzone and rndc delzone also work in any view that supports catalog zones.

5.7.3. Catalog Zone Format

A catalog zone is a regular DNS zone; therefore, it must have a single SOA and at least one NS record.

A record stating the version of the catalog zone format is also required. If the version number listed is not supported by the server, the catalog zone cannot be used by that server.

catalog.example.    IN SOA . . 2016022901 900 600 86400 1
catalog.example.    IN NS invalid.
version.catalog.example.    IN TXT "2"

Note that this record must have the domain name version.<catalog-zone-name>. The data stored in a catalog zone is indicated by the label immediately before the catalog zone domain. Currently BIND supports catalog zone schema versions “1” and “2”.

Also note that the catalog zone must have an NS record in order to be a valid DNS zone, and using the value “invalid.” for NS is recommended.

A member zone is added by including a PTR resource record in the zones sub-domain of the catalog zone. The record label can be any unique label. The target of the PTR record is the member zone name. For example, to add member zones domain.example and domain2.example:

5960775ba382e7a4e09263fc06e7c00569b6a05c.zones.catalog.example. IN PTR domain.example.
uniquelabel.zones.catalog.example. IN PTR domain2.example.

The label is necessary to identify custom properties (see below) for a specific member zone. Also, the zone state can be reset by changing its label, in which case BIND will remove the member zone and add it back.

5.7.4. Catalog Zone Custom Properties

BIND uses catalog zone custom properties to define different properties which can be set either globally for the whole catalog zone or for a single member zone. Global custom properties override the settings in the configuration file, and member zone custom properties override global custom properties.

In version “1” of the schema, custom properties are placed without a special suffix.

In version “2” of the schema, custom properties must be placed under the “ext” suffix.

Global custom properties are set at the apex of the catalog zone, e.g.:

primaries.ext.catalog.example.    IN AAAA 2001:db8::1

BIND currently supports the following custom properties:

  • A simple primaries definition:

    primaries.ext.catalog.example.    IN A 192.0.2.1
    

    This custom property defines a primary server for the member zones, which can be either an A or AAAA record. If multiple primaries are set, the order in which they are used is random.

    Note: masters can be used as a synonym for primaries.

  • A primaries with a TSIG key defined:

    label.primaries.ext.catalog.example.     IN A 192.0.2.2
    label.primaries.ext.catalog.example.     IN TXT "tsig_key_name"
    

    This custom property defines a primary server for the member zone with a TSIG key set. The TSIG key must be configured in the configuration file. label can be any valid DNS label.

    Note: masters can be used as a synonym for primaries.

  • allow-query and allow-transfer ACLs:

    allow-query.ext.catalog.example.   IN APL 1:10.0.0.1/24
    allow-transfer.ext.catalog.example.    IN APL !1:10.0.0.1/32 1:10.0.0.0/24
    

    These custom properties are the equivalents of allow-query and allow-transfer options in a zone declaration in the named.conf configuration file. The ACL is processed in order; if there is no match to any rule, the default policy is to deny access. For the syntax of the APL RR, see RFC 3123.

The member zone-specific custom properties are defined the same way as global custom properties, but in the member zone subdomain:

primaries.ext.5960775ba382e7a4e09263fc06e7c00569b6a05c.zones.catalog.example. IN A 192.0.2.2
label.primaries.ext.5960775ba382e7a4e09263fc06e7c00569b6a05c.zones.catalog.example. IN AAAA 2001:db8::2
label.primaries.ext.5960775ba382e7a4e09263fc06e7c00569b6a05c.zones.catalog.example. IN TXT "tsig_key_name"
allow-query.ext.5960775ba382e7a4e09263fc06e7c00569b6a05c.zones.catalog.example. IN APL 1:10.0.0.0/24
primaries.ext.uniquelabel.zones.catalog.example. IN A 192.0.2.3

Custom properties defined for a specific zone override the global custom properties defined in the catalog zone. These in turn override the global options defined in the catalog-zones statement in the configuration file.

Note that none of the global records for a custom property are inherited if any records are defined for that custom property for the specific zone. For example, if the zone had a primaries record of type A but not AAAA, it would not inherit the type AAAA record from the global custom property or from the global option in the configuration file.

5.7.5. Change of Ownership (coo)

BIND supports the catalog zones “Change of Ownership” (coo) property. When an entry that already exists in one catalog zone is added to another catalog zone, BIND’s default behavior is to ignore the new entry and continue serving the zone from the catalog zone in which it originally existed; only after the entry is removed from the original catalog zone can it be added to the new one.

Using the coo property, it is possible to gracefully move a zone from one catalog zone to another by letting catalog consumers know that the move is permitted. To do this, the original catalog zone should be updated with a record carrying the coo custom property:

uniquelabel.zones.catalog.example. IN PTR domain2.example.
coo.uniquelabel.zones.catalog.example. IN PTR catalog2.example.

Here, the catalog.example catalog zone gives permission for the member zone with label “uniquelabel” to be transferred into the catalog2.example catalog zone. Catalog consumers which support the coo property take note, and when the zone is finally added to the catalog2.example catalog zone, they change the ownership of the zone from catalog.example to catalog2.example. BIND’s implementation simply deletes the zone from the old catalog zone and adds it back into the new one, which means that all associated state for the migrated zone is reset, even when the unique label remains the same.

The record carrying the coo custom property can later be deleted by the catalog zone operator, after confirming that all consumers have received it and have successfully changed the ownership of the zone.

5.8. DNS Firewalls and Response Policy Zones

A DNS firewall examines DNS traffic and allows some responses to pass through while blocking others. This examination can be based on several criteria, including the name requested, the data (such as an IP address) associated with that name, or the name or IP address of the name server that is authoritative for the requested name. Based on these criteria, a DNS firewall can be configured to discard, modify, or replace the original response, allowing administrators more control over what systems can access or be accessed from their networks.

DNS Response Policy Zones (RPZ) are a form of DNS firewall in which the firewall rules are expressed within the DNS itself - encoded in an open, vendor-neutral format as records in specially constructed DNS zones.

Using DNS zones to configure policy allows policy to be shared from one server to another using the standard DNS zone transfer mechanism. This allows a DNS operator to maintain their own firewall policies and share them easily amongst their internal name servers, or to subscribe to external firewall policies such as commercial or cooperative “threat feeds,” or both.

named can subscribe to up to 64 Response Policy Zones, each of which encodes a separate policy rule set. Each rule is stored in a DNS resource record set (RRset) within the RPZ, and consists of a trigger and an action. There are four types of triggers and four types of actions.

A response policy rule in a DNS RPZ can be triggered as follows:

  • by the query name

  • by an address which would be present in a truthful response

  • by the name or address of an authoritative name server responsible for publishing the original response

A response policy action can be one of the following:

  • to synthesize a “domain does not exist” (NXDOMAIN) response

  • to synthesize a “name exists but there are no records of the requested type” (NODATA) response

  • to replace/override the response’s data with specific data (provided within the response policy zone)

  • to exempt the response from further policy processing

The most common use of a DNS firewall is to “poison” a domain name, IP address, name server name, or name server IP address. Poisoning is usually done by forcing a synthetic “domain does not exist” (NXDOMAIN) response. This means that if an administrator maintains a list of known “phishing” domains, these names can be made unreachable by customers or end users just by adding a firewall policy into the recursive DNS server, with a trigger for each known “phishing” domain, and an action in every case forcing a synthetic NXDOMAIN response. It is also possible to use a data-replacement action such as answering for these known “phishing” domains with the name of a local web server that can display a warning page. Such a web server would be called a “walled garden.”

Note

Authoritative name servers can be responsible for many different domains. If DNS RPZ is used to poison all domains served by some authoritative name server name or address, the effects can be quite far-reaching. Users are advised to ensure that such authoritative name servers do not also serve domains that should not be poisoned.

5.8.1. Why Use a DNS Firewall?

Criminal and network abuse traffic on the Internet often uses the Domain Name System (DNS), so protection against these threats should include DNS firewalling. A DNS firewall can selectively intercept DNS queries for known network assets including domain names, IP addresses, and name servers. Interception can mean rewriting a DNS response to direct a web browser to a “walled garden,” or simply making any malicious network assets invisible and unreachable.

5.8.2. What Can a DNS Firewall Do?

Firewalls work by applying a set of rules to a traffic flow, where each rule consists of a trigger and an action. Triggers determine which messages within the traffic flow are handled specially, and actions determine what that special handling is. For a DNS firewall, the traffic flow to be controlled consists of responses sent by a recursive DNS server to its end-user clients. Some true responses are not safe for all clients, so the policy rules in a DNS firewall allow some responses to be intercepted and replaced with safer content.

5.8.3. Creating and Maintaining RPZ Rule Sets

In DNS RPZ, the DNS firewall policy rule set is stored in a DNS zone, which is maintained and synchronized using the same tools and methods as for any other DNS zone. The primary name server for a DNS RPZ may be an internal server, if an administrator is creating and maintaining their own DNS policy zone, or it may be an external name server (such as a security vendor’s server), if importing a policy zone published externally. The primary copy of the DNS firewall policy can be a DNS “zone file” which is either edited by hand or generated from a database. A DNS zone can also be edited indirectly using DNS dynamic updates (for example, using the “nsupdate” shell level utility).

DNS RPZ allows firewall rules to be expressed in a DNS zone format and then carried to subscribers as DNS data. A recursive DNS server which is capable of processing DNS RPZ synchronizes these DNS firewall rules using the same standard DNS tools and protocols used for secondary name service. The DNS policy information is then promoted to the DNS control plane inside the customer’s DNS resolver, making that server into a DNS firewall.

A security company whose products include threat intelligence feeds can use a DNS Response Policy Zone (RPZ) as a delivery channel to customers. Threats can be expressed as known-malicious IP addresses and subnets, known-malicious domain names, and known-malicious domain name servers. By feeding this threat information directly into customers’ local DNS resolvers, providers can transform these DNS servers into a distributed DNS firewall.

When a customer’s DNS resolver is connected by a realtime subscription to a threat intelligence feed, the provider can protect the customer’s end users from malicious network elements (including IP addresses and subnets, domain names, and name servers) immediately as they are discovered. While it may take days or weeks to “take down” criminal and abusive infrastructure once reported, a distributed DNS firewall can respond instantly.

Other distributed TCP/IP firewalls have been on the market for many years, and enterprise users are now comfortable importing real-time threat intelligence from their security vendors directly into their firewalls. This intelligence can take the form of known-malicious IP addresses or subnets, or of patterns which identify known-malicious email attachments, file downloads, or web addresses (URLs). In some products it is also possible to block DNS packets based on the names or addresses they carry.

5.8.4. Limitations of DNS RPZ

We’re often asked if DNS RPZ could be used to set up redirection to a CDN. For example, if “mydomain.com” is a normal domain with SOA, NS, MX, TXT records etc., then if someone sends an A or AAAA query for “mydomain.com”, can we use DNS RPZ on an authoritative nameserver to return “CNAME mydomain.com.my-cdn-provider.net”?

The problem with this suggestion is that there is no way to CNAME only A and AAAA queries, not even with RPZ.

The underlying reason is that if the authoritative server answers with a CNAME, the recursive server making that query will cache the response. Thereafter (while the CNAME is still in cache), it assumes that there are no records of any non-CNAME type for the name that was being queried, and directs subsequent queries for all other types directly to the target name of the CNAME record.

To be clear, this is not a limitation of RPZ; it is a function of the way the DNS protocol works. It is simply not possible to use “partial” CNAMEs when setting up CDNs, because doing so breaks other functionality such as email routing.

Similarly, following the DNS protocol definition, wildcards in the form of *.example records might behave in unintuitive ways. For a detailed definition of wildcards in DNS, please see RFC 4592, especially section 2.

5.8.5. DNS Firewall Usage Examples

Here are some scenarios in which a DNS firewall might be useful.

Some known threats are based on an IP address or subnet (IP address range). For example, an analysis may show that all addresses in a “class C” network are used by a criminal gang for “phishing” web servers. With a DNS firewall based on DNS RPZ, a firewall policy can be created such as “if a DNS lookup would result in an address from this class C network, then answer instead with an NXDOMAIN indication.” That simple rule would prevent any end users inside customers’ networks from being able to look up any domain name used in this phishing attack – without having to know in advance what those names might be.

Other known threats are based on domain names. An analysis may determine that a certain domain name or set of domain names is being or will shortly be used for spamming, phishing, or other Internet-based attacks which all require working domain names. By adding name-triggered rules to a distributed DNS firewall, providers can protect customers’ end users from any attacks which require them to be able to look up any of these malicious names. The names can be wildcards (for example, *.evil.com), and these wildcards can have exceptions if some domains are not as malicious as others (if *.evil.com is bad, then not.evil.com might be an exception).
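Expressed as RPZ records, a wildcard rule with a pass-through exception might look like the following sketch (the policy zone name rpz.example.com is illustrative; an exact-name rule takes precedence over a wildcard that would otherwise cover it):

```
$ORIGIN rpz.example.com.
evil.com            CNAME .
*.evil.com          CNAME .
not.evil.com        CNAME rpz-passthru.
```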

Alongside growth in electronic crime has come growth of electronic criminal expertise. Many criminal gangs now maintain their own extensive DNS infrastructure to support a large number of domain names and a diverse set of IP addressing resources. Analyses show in many cases that the only truly fixed assets criminal organizations have are their name servers, which are by nature slightly less mobile than other network assets. In such cases, DNS administrators can anchor their DNS firewall policies in the abusive name server names or name server addresses, and thus protect their customers’ end users from threats where neither the domain name nor the IP address of that threat is known in advance.

Electronic criminals rely on the full resiliency of DNS just as the rest of digital society does. By targeting criminal assets at the DNS level we can deny these criminals the resilience they need. A distributed DNS firewall can leverage the high skills of a security company to protect a large number of end users. DNS RPZ, as the first open and vendor-neutral distributed DNS firewall, can be an effective way to deliver threat intelligence to customers.

5.8.5.1. A Real-World Example of DNS RPZ’s Value

The Conficker malware worm (https://en.wikipedia.org/wiki/Conficker) was first detected in 2008. Although it is no longer an active threat, the techniques described here can be applied to other DNS threats.

Conficker used a domain generation algorithm (DGA) to choose up to 50,000 command and control domains per day. It would be impractical to create an RPZ that contains so many domain names and changes so much on a daily basis. Instead, we can trigger RPZ rules based on the names of the name servers that are authoritative for the command and control domains, rather than trying to trigger on each of 50,000 different (daily) query names. Since the well-known name server names for Conficker’s domain names are never used by nonmalicious domains, it is safe to poison all lookups that rely on these name servers. Here is an example that achieves this result:

$ORIGIN rpz.example.com.
ns.0xc0f1c3a5.com.rpz-nsdname  CNAME  *.walled-garden.example.com.
ns.0xc0f1c3a5.net.rpz-nsdname  CNAME  *.walled-garden.example.com.
ns.0xc0f1c3a5.org.rpz-nsdname  CNAME  *.walled-garden.example.com.

The * at the beginning of these CNAME target names is special, and it causes the original query name to be prepended to the CNAME target. So if a user tries to visit the Conficker command and control domain http://racaldftn.com.ai/ (which was a valid Conficker command and control domain name on 19-October-2011), the RPZ-configured recursive name server will send back this answer:

racaldftn.com.ai.     CNAME     racaldftn.com.ai.walled-garden.example.com.
racaldftn.com.ai.walled-garden.example.com.     A      192.168.50.3

This example presumes that the following DNS content has also been created, which is not part of the RPZ zone itself but is in another domain:

$ORIGIN walled-garden.example.com.
*     A     192.168.50.3

Assuming that we’re running a web server listening on 192.168.50.3 that always displays a warning message no matter what uniform resource identifier (URI) is used, the above RPZ configuration will instruct the web browser of any infected end user to connect to a “server name” consisting of their original lookup name (racaldftn.com.ai) prepended to the walled garden domain name (walled-garden.example.com). This is the name that will appear in the web server’s log file, and having the full name in that log file will facilitate an analysis as to which users are infected with what virus.
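The wildcard-prepending rewrite described above can be modeled in a few lines of Python; this is an illustrative sketch of the naming rule, not BIND's implementation:

```python
def walled_garden_target(qname: str, cname_target: str) -> str:
    """Compute the synthetic CNAME target for an RPZ rule.

    If the rule's CNAME target begins with "*.", RPZ prepends the
    original query name to the remainder of the target; otherwise
    the target is used as-is.
    """
    if cname_target.startswith("*."):
        return qname.rstrip(".") + "." + cname_target[2:]
    return cname_target

print(walled_garden_target("racaldftn.com.ai.",
                           "*.walled-garden.example.com."))
# racaldftn.com.ai.walled-garden.example.com.
```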

5.8.6. Keeping Firewall Policies Updated

It is vital for overall system performance that incremental zone transfers (see RFC 1995) and real-time change notification (see RFC 1996) be used to synchronize DNS firewall rule sets between the publisher’s primary copy of the rule set and the subscribers’ working copies of the rule set.

If DNS dynamic updates are used to maintain a DNS RPZ rule set, the name server automatically calculates a stream of deltas for use when sending incremental zone transfers to the subscribing name servers. Sending a stream of deltas – known as an “incremental zone transfer” or IXFR – is usually much faster than sending the full zone every time it changes, so it’s worth the effort to use an editing method that makes such incremental transfers possible.
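For example, a single rule can be added to the policy zone with the standard nsupdate utility; in this sketch the server address, TSIG key file, and rule name are all hypothetical:

```
$ nsupdate -k /etc/bind/rpz-update.key
> server 192.0.2.1
> zone rpz.example.com
> update add newly-malicious.example.rpz.example.com. 300 IN CNAME .
> send
> quit
```

Because the change arrives as a dynamic update, the server records it in the zone's journal and can serve it to subscribers as an incremental transfer.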

Administrators who edit or periodically regenerate a DNS RPZ rule set and whose primary name server uses BIND can enable the ixfr-from-differences option, which tells the primary name server to calculate the differences between each new zone and the preceding version, and to make these differences available as a stream of deltas for use in incremental zone transfers to the subscribing name servers. This will look something like the following:

options {
          // ...
          ixfr-from-differences yes;
          // ...
};

As mentioned above, the simplest and most common use of a DNS firewall is to poison domain names known to be purely malicious, by simply making them disappear. All DNS RPZ rules are expressed as resource record sets (RRsets), and the way to express a “force a name-does-not-exist condition” is by adding a CNAME pointing to the root domain (.). In practice this looks like:

$ORIGIN rpz.example.com.
malicious1.org          CNAME .
*.malicious1.org        CNAME .
malicious2.org          CNAME .
*.malicious2.org        CNAME .

Two things are noteworthy in this example. First, the malicious names are made relative within the response policy zone. Since there is no trailing dot following “.org” in the above example, the actual RRsets created within this response policy zone are, after expansion:

malicious1.org.rpz.example.com.         CNAME .
*.malicious1.org.rpz.example.com.       CNAME .
malicious2.org.rpz.example.com.         CNAME .
*.malicious2.org.rpz.example.com.       CNAME .

Second, for each name being poisoned, a wildcard name is also listed. This is because a malicious domain name may have, or may later acquire, malicious subdomains.

In the above example, the relative domain names malicious1.org and malicious2.org will match only the real domain names malicious1.org and malicious2.org, respectively. The relative domain names *.malicious1.org and *.malicious2.org will match any subdomain.of.malicious1.org or subdomain.of.malicious2.org, respectively.
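The matching behavior of these exact and wildcard QNAME rules can be sketched in Python (an illustration of the semantics described above, not BIND's data structures):

```python
def qname_rule_matches(qname: str, rule: str) -> bool:
    """Return True if an RPZ QNAME rule matches a query name.

    `rule` is relative to the policy zone, e.g. "malicious1.org"
    (exact match only) or "*.malicious1.org" (any subdomain, but
    not the bare name itself).
    """
    qname = qname.rstrip(".")
    if rule.startswith("*."):
        return qname.endswith(rule[1:])  # rule[1:] keeps the leading dot
    return qname == rule

print(qname_rule_matches("malicious1.org.", "malicious1.org"))           # True
print(qname_rule_matches("sub.of.malicious1.org.", "*.malicious1.org"))  # True
print(qname_rule_matches("malicious1.org.", "*.malicious1.org"))         # False
```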

This example forces an NXDOMAIN condition as its policy action, but other policy actions are also possible.

5.8.7. Performance and Scalability When Using Multiple RPZs

Since version 9.10, BIND can be configured to have different response policies depending on the identity of the querying client and the nature of the query. To configure BIND response policy, the information is placed into a zone file whose only purpose is conveying the policy information to BIND. A zone file containing response policy information is called a Response Policy Zone, or RPZ, and the mechanism in BIND that uses the information in those zones is called DNS RPZ.

It is possible to use as many as 64 separate RPZ files in a single instance of BIND, and BIND is not significantly slowed by such heavy use of RPZ.

(Note: by default, BIND 9.11 only supports up to 32 RPZ files, but this can be increased to 64 at compile time. All other supported versions of BIND support 64 by default.)

Each one of the policy zone files can specify policy for as many different domains as necessary. The limit of 64 is on the number of independently-specified policy collections and not the number of zones for which they specify policy.

Policy information from all of the policy zones together is stored in a special data structure that allows simultaneous lookups across all policy zones to be performed very rapidly. The time needed to look up a policy rule is proportional to the logarithm of the number of rules in the largest single policy zone.

5.8.8. Practical Tips for DNS Firewalls and DNS RPZ

Administrators who subscribe to an externally published DNS policy zone and who have a large number of internal recursive name servers should create an internal name server called a “distribution master” (DM). The DM is a secondary (stealth secondary) name server from the publisher’s point of view; that is, the DM is fetching zone content from the external server. The DM is also a primary name server from the internal recursive name servers’ point of view: they fetch zone content from the DM. In this configuration the DM is acting as a gateway between the external publisher and the internal subscribers.

The primary server must know the unicast listener address of every subscribing recursive server, and must enumerate all of these addresses as destinations for real time zone change notification (as described in RFC 1996). So if an enterprise-wide RPZ is called “rpz.example.com” and if the unicast listener addresses of four of the subscribing recursive name servers are 192.0.200.1, 192.0.201.1, 192.0.202.1, and 192.0.203.1, the primary server’s configuration looks like this:

zone "rpz.example.com" {
     type primary;
     file "primary/rpz.example.com";
     notify explicit;
     also-notify { 192.0.200.1;
                   192.0.201.1;
                   192.0.202.1;
                   192.0.203.1; };
     allow-transfer { 192.0.200.1;
                      192.0.201.1;
                      192.0.202.1;
                      192.0.203.1; };
     allow-query { localhost; };
};

Each recursive DNS server that subscribes to the policy zone must be configured as a secondary server for the zone, and must also be configured to use the policy zone for local response policy. To subscribe a recursive name server to a response policy zone where the unicast listener address of the primary server is 192.0.220.2, the server’s configuration should look like this:

options {
     // ...
     response-policy {
          zone "rpz.example.com";
     };
     // ...
};

zone "rpz.example.com" {
     type secondary;
     primaries { 192.0.220.2; };
     file "secondary/rpz.example.com";
     allow-query { localhost; };
     allow-transfer { none; };
};

Note that queries are restricted to “localhost,” since query access is never used by DNS RPZ itself, but may be useful to DNS operators for use in debugging. Transfers should be disallowed to prevent policy information leaks.

If an organization’s business continuity depends on full connectivity with another company whose ISP also serves some criminal or abusive customers, it’s possible that one or more external RPZ providers – that is, security feed vendors – may eventually add some RPZ rules that could hurt a company’s connectivity to its business partner. Users can protect themselves from this risk by using an internal RPZ in addition to any external RPZs, and by putting into their internal RPZ some “pass-through” rules to prevent any policy action from affecting a DNS response that involves a business partner.

A recursive DNS server can be connected to more than one RPZ, and these are searched in order. Therefore, to protect a network from dangerous policies which may someday appear in external RPZ zones, administrators should list the internal RPZ zones first.

options {
     // ...
     response-policy {
          zone "rpz.example.com";
          zone "rpz.security-vendor-1.com";
          zone "rpz.security-vendor-2.com";
     };
     // ...
};

Within an internal RPZ, there need to be rules describing the network assets of business partners whose communications need to be protected. Although it is not generally possible to know what domain names they use, administrators will be aware of what address space they have and perhaps what name server names they use.

$ORIGIN rpz.example.com.
8.0.0.0.10.rpz-ip                CNAME   rpz-passthru.
16.0.0.45.128.rpz-nsip           CNAME   rpz-passthru.
ns.partner1.com.rpz-nsdname      CNAME   rpz-passthru.
ns.partner2.com.rpz-nsdname      CNAME   rpz-passthru.

Here, we know that answers in address block 10.0.0.0/8 indicate a business partner, as well as answers involving any name server whose address is in the 128.45.0.0/16 address block, and answers involving the name servers whose names are ns.partner1.com or ns.partner2.com.

The above example demonstrates that when matching by answer IP address (the .rpz-ip owner), or by name server IP address (the .rpz-nsip owner) or by name server domain name (the .rpz-nsdname owner), the special RPZ marker (.rpz-ip, .rpz-nsip, or .rpz-nsdname) does not appear as part of the CNAME target name.

By triggering these rules using the known network assets of a partner, and using the “pass-through” policy action, no later RPZ processing (which in the above example refers to the “rpz.security-vendor-1.com” and “rpz.security-vendor-2.com” policy zones) will have any effect on DNS responses for partner assets.

5.8.9. Creating a Simple Walled Garden Triggered by IP Address

It may be the case that the only thing known about an attacker is the IP address block they are using for their “phishing” web servers. If the domain names and name servers they use are unknown, but it is known that every one of their “phishing” web servers is within a small block of IP addresses, a response can be triggered on all answers that would include records in this address range, using RPZ rules that look like the following example:

$ORIGIN rpz.example.com.
22.0.212.94.109.rpz-ip          CNAME   drop.garden.example.com.
*.212.94.109.in-addr.arpa       CNAME   .
*.213.94.109.in-addr.arpa       CNAME   .
*.214.94.109.in-addr.arpa       CNAME   .
*.215.94.109.in-addr.arpa       CNAME   .

Here, if a truthful answer would include an A (address) RR (resource record) whose value were within the 109.94.212.0/22 address block, then a synthetic answer is sent instead of the truthful answer. Assuming the query is for www.malicious.net, the synthetic answer is:

www.malicious.net.              CNAME   drop.garden.example.com.
drop.garden.example.com.        A       192.168.7.89

This assumes that drop.garden.example.com has been created as real DNS content, outside of the RPZ:

$ORIGIN example.com.
drop.garden                     A       192.168.7.89

In this example, there is no “*” in the CNAME target name, so the original query name will not be present in the walled garden web server’s log file. This is an undesirable loss of information, and is shown here for example purposes only.

The above example RPZ rules would also affect address-to-name (also known as “reverse DNS”) lookups for the unwanted addresses. If a mail or web server receives a connection from an address in the example’s 109.94.212.0/22 address block, it will perform a PTR record lookup to find the domain name associated with that IP address.

This kind of address-to-name translation is usually used for diagnostic or logging purposes, but it is also common for email servers to reject any email from IP addresses which have no address-to-name translation. Most mail from such IP addresses is spam, so the lack of a PTR record here has some predictive value. By using the “force name-does-not-exist” policy trigger on all lookups in the PTR name space associated with an address block, DNS administrators can give their servers a hint that these IP addresses are probably sending junk.
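The owner name of the PTR record follows mechanically from the IP address. For example, using only the Python standard library (the address is taken from the example block above):

```python
import ipaddress

def ptr_owner_name(address: str) -> str:
    """Return the reverse-DNS owner name for an IPv4 or IPv6 address."""
    return ipaddress.ip_address(address).reverse_pointer + "."

print(ptr_owner_name("109.94.212.5"))
# 5.212.94.109.in-addr.arpa.
```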

5.8.10. A Known Inconsistency in DNS RPZ’s NSDNAME and NSIP Rules

Response Policy Zones define several possible triggers for each rule, and among these two are known to produce inconsistent results. This is not a bug; rather, it relates to inconsistencies in the DNS delegation model.

5.8.10.1. DNS Delegation

In DNS authority data, an NS RRset that is not at the apex of a DNS zone creates a sub-zone. That sub-zone’s data is separate from the current (or “parent”) zone, and it can have different authoritative name servers than the current zone. In this way, the root zone leads to COM, NET, ORG, and so on, each of which have their own name servers and their own way of managing their authoritative data. Similarly, ORG has delegations to ISC.ORG and to millions of other “dot-ORG” zones, each of which can have its own set of authoritative name servers. In the parlance of the protocol, these NS RRsets below the apex of a zone are called “delegation points.” An NS RRset at a delegation point contains a list of authoritative servers to which the parent zone is delegating authority for all names at or below the delegation point.

At the apex of every zone there is also an NS RRset. Ideally, this so-called “apex NS RRset” should be identical to the “delegation point NS RRset” in the parent zone, but this ideal is not always achieved. In the real DNS, it is almost always easier for a zone administrator to update one of these NS RRsets than the other, so one is correct and the other is out of date. This inconsistency is so common that the protocol has, of necessity, rendered it harmless: domains that are inconsistent in this way are less reliable and perhaps slower, but they still function as long as there is some overlap between each of the NS RRsets and the truth. (“Truth” in this case refers to the actual set of name servers that are authoritative for the zone.)

5.8.10.2. A Quick Review of DNS Iteration

In DNS recursive name servers, an incoming query that cannot be answered from the local cache is sent to the closest known delegation point for the query name. For example, if a server is looking up XYZZY.ISC.ORG and it already knows the name servers for ISC.ORG, then it sends the query to those servers directly; however, if it has never heard of ISC.ORG before, it must first send the query to the name servers for ORG (or perhaps even to the root zone that is the parent of ORG).

When it asks one of the parent name servers, that server will not have an answer, so it sends a “referral” consisting only of the “delegation point NS RRset.” Once the server receives this referral, it “iterates” by sending the same query again, but this time to name servers for a more specific part of the query name. Eventually this iteration terminates, usually by getting an answer or a “name error” (NXDOMAIN) from the query name’s authoritative server, or by encountering some type of server failure.

When an authoritative server for the query name sends an answer, it has the option of including a copy of the zone’s apex NS RRset. If this occurs, the recursive name server caches this NS RRset, replacing the delegation point NS RRset that it had received during the iteration process. In the parlance of the DNS, the delegation point NS RRset is “glue,” meaning non-authoritative data, or more of a hint than a real truth. On the other hand, the apex NS RRset is authoritative data, coming as it does from the zone itself, and it is considered more credible than the “glue.” For this reason, it’s a little bit more important that the apex NS RRset be correct than that the delegation point NS RRset be correct, since the former will quickly replace the latter, and will be used more often for a longer total period of time.

Importantly, the authoritative name server need not include its apex NS RRset in any answers, and recursive name servers do not ordinarily query directly for this RRset. Therefore it is possible for the apex NS RRset to be completely wrong without any operational ill-effects, since the wrong data need not be exposed. Of course, if a query comes in for this NS RRset, most recursive name servers will forward the query to the zone’s authority servers, since it’s bad form to return “glue” data when asked a specific question. In these corner cases, bad apex NS RRset data can cause a zone to become unreachable unpredictably, according to what other queries the recursive name server has processed.

There is another kind of “glue,” for name servers whose names are below delegation points. If ORG delegates ISC.ORG to NS-EXT.ISC.ORG, the ORG server needs to know an address for NS-EXT.ISC.ORG and return this address as part of the delegation response. However, the name-to-address binding for this name server is only authoritative inside the ISC.ORG zone; therefore, the A or AAAA RRset given out with the delegation is non-authoritative “glue,” which is replaced by an authoritative RRset if one is seen. As with apex NS RRsets, the real A or AAAA RRset is not automatically queried for by the recursive name server, but is queried for if an incoming query asks for this RRset.

5.8.10.3. Enter RPZ

RPZ has two trigger types that are intended to allow policy zone authors to target entire groups of domains based on those domains all being served by the same DNS servers: NSDNAME and NSIP. The NSDNAME and NSIP rules are matched against the name and IP address (respectively) of the nameservers of the zone the answer is in, and all of its parent zones. In its default configuration, BIND actively fetches any missing NS RRsets and address records. If, in the process of attempting to resolve the names of all of these delegated server names, BIND receives a SERVFAIL response for any of the queries, then it aborts the policy rule evaluation and returns SERVFAIL for the query. This is technically neither a match nor a non-match of the rule.

Every “.” in a fully qualified domain name (FQDN) represents a potential delegation point. When BIND goes searching for parent zone NS RRsets (and, in the case of NSIP, their accompanying address records), it has to check every possible delegation point. This can become a problem for some specialized pseudo-domains, such as some domain name and network reputation systems, that have many “.” characters in the names. It is further complicated if that system also has non-compliant DNS servers that silently drop queries for NS and SOA records. This forces BIND to wait for those queries to time out before it can finish evaluating the policy rule, even if this takes longer than a reasonable client typically waits for an answer (delays of over 60 seconds have been observed).

While both of these cases do involve configurations and/or servers that are technically “broken,” they may still “work” outside of RPZ NSIP and NSDNAME rules because of redundancy and iteration optimizations.

There are two RPZ options, nsip-wait-recurse and nsdname-wait-recurse, that alter BIND’s behavior by allowing it to use only those records that already exist in the cache when evaluating NSIP and NSDNAME rules, respectively.
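For example, a configuration that evaluates both rule types against cached data only might look like this (the policy zone name is illustrative):

```
options {
     // ...
     response-policy {
          zone "rpz.example.com";
     } nsip-wait-recurse no nsdname-wait-recurse no;
     // ...
};
```

With these set to no, missing NS RRsets and address records are not actively fetched before a rule is evaluated, avoiding the timeouts described above at the cost of occasionally missing a match.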

Therefore, NSDNAME and NSIP rules are unreliable. The rules may be matched against either the apex NS RRset or the “glue” NS RRset, each with its associated addresses (which also might or might not be “glue”). It is in the administrator’s interest to discover both the delegation name server names and addresses, and the apex name server names and authoritative address records, to ensure correct use of NSDNAME and NSIP triggers in RPZ. Even then, there may be collateral damage to completely unrelated domains that otherwise “work,” just from having NSIP and NSDNAME rules.

5.8.11. Example: Using RPZ to Disable Mozilla DoH-by-Default

Mozilla announced in September 2019 that it would enable DNS-over-HTTPS (DoH) by default for all US-based users of the Firefox browser, sending their DNS queries to predefined DoH providers (Cloudflare’s 1.1.1.1 service in particular). This is a concern for network administrators who do not want their users’ DNS queries to be rerouted unexpectedly. However, Mozilla provides a mechanism to disable the DoH-by-default setting: if the Mozilla-owned domain use-application-dns.net returns an NXDOMAIN response code, Firefox does not use DoH.

To accomplish this using RPZ:

  1. Create a policy zone file called mozilla.rpz.db configured so that NXDOMAIN is returned for any query for use-application-dns.net:

$TTL  604800
$ORIGIN       mozilla.rpz.
@     IN      SOA     localhost. root.localhost. 1 604800 86400 2419200 604800
@     IN      NS      localhost.
use-application-dns.net CNAME .
  2. Add the zone to the BIND configuration (usually named.conf):

zone mozilla.rpz {
    type primary;
    file "/<PATH_TO>/mozilla.rpz.db";
    allow-query { localhost; };
};
  3. Enable use of the Response Policy Zone for all incoming queries by adding the response-policy directive into the options {} section:

options {
      response-policy { zone mozilla.rpz; } break-dnssec yes;
};
  4. Reload the configuration and test whether the Response Policy Zone that was just added is in effect:

# rndc reload
# dig IN A use-application-dns.net @<IP_ADDRESS_OF_YOUR_RESOLVER>
# dig IN AAAA use-application-dns.net @<IP_ADDRESS_OF_YOUR_RESOLVER>

Both responses should have status NXDOMAIN instead of returning a list of IP addresses, and the BIND 9 log should contain lines like this:

09-Sep-2019 18:50:49.439 client @0x7faf8e004a00 ::1#54175 (use-application-dns.net): rpz QNAME NXDOMAIN rewrite use-application-dns.net/AAAA/IN via use-application-dns.net.mozilla.rpz
09-Sep-2019 18:50:49.439 client @0x7faf8e007800 127.0.0.1#62915 (use-application-dns.net): rpz QNAME NXDOMAIN rewrite use-application-dns.net/AAAA/IN via use-application-dns.net.mozilla.rpz

Note that this is the simplest possible configuration; specific configurations may be different, especially for administrators who are already using other response policy zones, or whose servers are configured with multiple views.
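In a configuration with multiple views, for instance, the policy zone and the response-policy statement are declared inside the views where the policy should apply. A sketch of this arrangement (the view names and client address range are illustrative):

```
view "internal" {
    match-clients { 192.0.2.0/24; };
    // Apply the policy only to internal clients.
    response-policy { zone "mozilla.rpz"; } break-dnssec yes;
    zone "mozilla.rpz" {
        type primary;
        file "/<PATH_TO>/mozilla.rpz.db";
        allow-query { localhost; };
    };
};

view "external" {
    match-clients { any; };
    // No response-policy here: these clients receive unmodified answers.
};
```

Because each view has its own zone table, the policy zone must be declared in every view whose response-policy statement references it.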