Crouching Slave, Hidden Master

Introduction

For the past 15 years or so I have run my own authoritative DNS server for all of my domain names.  It started back when I was 18 or 19 and ran a small web design and hosting business that was also a reseller for a registrar.  At the time it made sense to run my own name servers.  After that business was dissolved, I consolidated my server farm down to one small unmanaged VPS (proudly hosted by Linode!) for my own personal projects, and just kept running my own DNS.  There was no real reason to keep it, but it worked, and I like hosting my own stuff.

Recently I started toying with the idea of offloading my DNS somewhere else.  There are a number of free options available to me, but the more I thought about it, the more I liked having complete control over my DNS zones.  I was also planning to enable DNSSEC on my zones, which made maintaining my local DNS server seem even more attractive; for DNSSEC it was effectively a requirement.

A couple things I knew I wanted to do:

  • Increase the redundancy of my name servers as I only had one
  • Offload the DNS queries and close a few more ports on my VPS for security purposes

Originally I was thinking about getting a second really small VPS to act solely as a slave DNS server and provide the redundancy; then I learned about the concept of a hidden master.  I am by no means a networking guru (in fact, my networking skill set is quite weak), so this concept might be a no-brainer for most people, but it was new to me and quite interesting.  Essentially it works like this:

Configuration Overview

My VPS is the master authoritative DNS server for all of my domain names/zones.  This server is not directly queried by or accessible to the public, and it is not listed among the name servers in the zones' NS records.  Upon changes, the public-facing slave DNS servers get notified and perform a zone transfer.  This transfer can only happen in one direction.  Effectively this means that all of my DNS queries are served by the public-facing slaves, but controlled by my private master.  My VPS provider, Linode, has five public name servers that I can use in this manner.  There are also a number of other free services that allow you to do this, along with (most likely) your registrar.
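Once everything is in place, you can see this property from the outside: the zone's NS set lists only the public slaves, and the hidden master never appears in it (example.com as a stand-in):

dig +short NS example.com.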

So to set this up, the first thing I did was switch from BIND to NSD.  This is NOT a requirement; BIND will do this just fine, and the configuration is almost the same.  I made the decision to switch because I was overhauling my DNS setup completely anyway and wanted a more lightweight solution on my end, as I did not need a local caching resolver.

Switching from BIND to NSD was quite easy and I won't go into all of the details here.  The zone files use the same format, so it was pretty much a straight swap.  After I was on NSD and had everything tested and working, with my server continuing to be the single master authoritative server, I tied it into Linode's slaves.  This consisted of creating stub slave zones on their DNS servers, defining the address of the master, and pointing the zone transfer ACLs in those stub zones at my VPS.  This process will vary based on the tools provided by your upstream DNS choice; see the sketch below for what the receiving side looks like.
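Linode's side is configured through their web interface, so purely for illustration: if the slave were another NSD instance under your control, the receiving end of this arrangement would look roughly like the following, with 203.0.113.5 standing in for the hidden master's address:

zone:
    name: "example.com"
    # accept NOTIFY messages from the hidden master (placeholder address)
    allow-notify: 203.0.113.5 NOKEY
    # pull the zone from the master via AXFR when notified
    request-xfr: AXFR 203.0.113.5 NOKEY
    zonefile: "example.com.zone"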

In my local NSD configuration, I then had to allow the Linode slave servers to perform zone transfers and configure them to be notified of zone changes.  The NSD configuration for this looks like the following (in /etc/nsd.conf):

...

pattern:
    name: "linode-slaves-v6"
    provide-xfr: 2600:3c00::a NOKEY
    provide-xfr: 2600:3c01::a NOKEY
    provide-xfr: 2600:3c02::a NOKEY
    provide-xfr: 2600:3c03::a NOKEY
    provide-xfr: 2a01:7e00::a NOKEY

pattern:
    name: "linode-slaves-v4"
    provide-xfr: 65.19.178.10 NOKEY
    provide-xfr: 69.93.127.10 NOKEY
    provide-xfr: 75.127.96.10 NOKEY
    provide-xfr: 109.74.194.10 NOKEY
    provide-xfr: 207.192.70.10 NOKEY

pattern:
    name: "linode-notify"
    notify: 65.19.178.10 NOKEY
    notify: 69.93.127.10 NOKEY
    notify: 75.127.96.10 NOKEY
    notify: 109.74.194.10 NOKEY
    notify: 207.192.70.10 NOKEY
    notify-retry: 5

...
zone:
    name: "example.com"
    include-pattern: "linode-slaves-v6"
    include-pattern: "linode-slaves-v4"
    include-pattern: "linode-notify"
    zonefile: "example.com.zone.signed"


The breakdown of this is that I defined three patterns.  The first two define the ACLs for Linode's slave servers' IPv4 and IPv6 addresses and say that those addresses are allowed to transfer zones.  The third pattern is the list of addresses to be notified on changes.  Within the zone configuration itself, include those patterns for any zone you want transferred to the slaves and for which they should receive notifies.
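With the patterns in place, it is worth sanity-checking the file and pushing a manual reload and notify.  These are the standard NSD tools (nsd-control assumes the remote-control interface has been enabled, e.g. with nsd-control-setup):

# validate the configuration file
nsd-checkconf /etc/nsd.conf

# re-read the zone, then send NOTIFY to the addresses in the notify pattern
nsd-control reload example.com
nsd-control notify example.com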

Next, I verified that this configuration worked and that my zones were transferred to Linode's slaves.  In the NSD logs you will see lines like the following, indicating successful zone transfers:

[2016-03-21 00:00:28.653] nsd[21823]: info: new control connection from ::1
[2016-03-21 00:00:28.704] nsd[21823]: info: control cmd:  reload example.com
[2016-03-21 00:00:28.705] nsd[21827]: info: zone example.com read with success
[2016-03-21 00:00:28.705] nsd[21827]: info: rehash of zone example.com. with parameters 1 0 1 a103b3080b7bd61f
[2016-03-21 00:00:28.958] nsd[5357]: info: axfr for example.com. from 2600:3c03::a
[2016-03-21 00:00:29.016] nsd[5356]: info: axfr for example.com. from 2600:3c02::a
[2016-03-21 00:00:29.184] nsd[5357]: info: axfr for example.com. from 2600:3c00::a
[2016-03-21 00:00:29.321] nsd[5356]: info: axfr for example.com. from 2600:3c01::a
[2016-03-21 00:00:29.504] nsd[5356]: info: axfr for example.com. from 2a01:7e00::a

I also queried the Linode servers directly to verify the zone contents; an example command might look like this:

dig example.com. @65.19.178.10 +multiline +norec
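To check all five slaves at once, a small loop comparing SOA serials does the trick (the serial is the third field of dig's short-form SOA answer):

for ns in 65.19.178.10 69.93.127.10 75.127.96.10 109.74.194.10 207.192.70.10; do
    echo -n "$ns: "
    dig +short +norec SOA example.com. @$ns | awk '{print $3}'
done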

It is important to note that at this point the world was still querying my master for DNS, as I had not yet updated my glue records, nor had I updated my zones to reflect the existence of the new name servers.

At this point, since my domains don't really do anything important, I pretty much threw caution to the wind and swapped everything over to the new setup.  Be careful here, as I cannot be sure this is the most graceful, outage-free way to make this transition.

First I updated the glue records with my registrar so that all of my NS records pointed at Linode's five DNS servers.  I could have just pointed to ns[1-5].linode.com, but I wanted my vanity names to still exist, so I updated my existing ns[1-5].werkkrew.com records to point at the Linode addresses.  My registrar (Gandi) allows IPv4 and IPv6 addresses in the glue records at the same time, so I also took this opportunity to make my entire DNS setup IPv6 capable by adding both the IPv4 and IPv6 name server addresses to the glue records.
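Glue records live at the parent zone, so rather than waiting on caches I could verify the change by asking one of the .com GTLD servers directly (a.gtld-servers.net is one of them; +norec keeps the query non-recursive, and the glue shows up in the additional section):

dig +norec +multiline werkkrew.com. NS @a.gtld-servers.net.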

While this change was propagating, I updated my zones to reflect the new name servers.  Here is an example of my werkkrew.com zone (with some sections removed):

$ORIGIN werkkrew.com.
$TTL    3600
@       IN              SOA         thorim.werkkrew.com. admin.werkkrew.com. (
                                    _SERIAL_    ; serial: today's date + today's serial #
                                    28800       ; refresh, seconds
                                    7200        ; retry, seconds
                                    2419200     ; expire, seconds
                                    3600        ; minimum, seconds
                                    )

; NS Records
                        IN NS       ns1.werkkrew.com.
                        IN NS       ns2.werkkrew.com.
                        IN NS       ns3.werkkrew.com.
                        IN NS       ns4.werkkrew.com.
                        IN NS       ns5.werkkrew.com.


; A Records for Nameservers
ns1.werkkrew.com.       IN A        69.93.127.10        ; ns1.linode.com
ns2.werkkrew.com.       IN A        65.19.178.10        ; ns2.linode.com
ns3.werkkrew.com.       IN A        75.127.96.10        ; ns3.linode.com
ns4.werkkrew.com.       IN A        207.192.70.10       ; ns4.linode.com
ns5.werkkrew.com.       IN A        109.74.194.10       ; ns5.linode.com

; AAAA Records for Nameservers
ns1.werkkrew.com.       IN AAAA     2600:3c00::a        ; ns1.linode.com
ns2.werkkrew.com.       IN AAAA     2600:3c01::a        ; ns2.linode.com
ns3.werkkrew.com.       IN AAAA     2600:3c02::a        ; ns3.linode.com
ns4.werkkrew.com.       IN AAAA     2600:3c03::a        ; ns4.linode.com
ns5.werkkrew.com.       IN AAAA     2a01:7e00::a        ; ns5.linode.com

Of note here is that the new A/AAAA records for the name servers all point to the slaves, while the SOA record names the master, which can no longer be queried from the outside world.  Also note that the serial is the placeholder _SERIAL_; my DNSSEC signing scripts key off this string to generate a new serial number programmatically when they sign the zone.  An SOA record pointing at an inaccessible master, or exposing the master's name at all, might be undesirable to some people, so you can technically list one of the slaves there as well.
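The signing scripts themselves are a topic for the DNSSEC post, but the serial substitution is trivial.  A minimal sketch, assuming a date-plus-counter scheme and an unsigned template called werkkrew.com.zone.in (both the scheme and the filename are hypothetical):

# stamp a YYYYMMDDnn serial over the _SERIAL_ placeholder before signing
SERIAL="$(date +%Y%m%d)01"
sed "s/_SERIAL_/${SERIAL}/" werkkrew.com.zone.in > werkkrew.com.zone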

The final step was to lock down the DNS ports on my VPS firewall to only allow queries from the Linode slaves.
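A sketch of what that might look like with iptables (repeat with ip6tables for the IPv6 slave addresses).  The slaves need both UDP and TCP on port 53: the periodic SOA refresh checks use UDP, while the AXFR itself runs over TCP:

# allow DNS from the Linode slaves only, drop everything else
for ip in 65.19.178.10 69.93.127.10 75.127.96.10 109.74.194.10 207.192.70.10; do
    iptables -A INPUT -p udp --dport 53 -s "$ip" -j ACCEPT
    iptables -A INPUT -p tcp --dport 53 -s "$ip" -j ACCEPT
done
iptables -A INPUT -p udp --dport 53 -j DROP
iptables -A INPUT -p tcp --dport 53 -j DROP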

Once these changes had fully propagated through the DNS system, I was able to verify that all DNS queries work exactly as desired via the slaves, and that my VPS no longer serves any DNS traffic at all.
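A +trace query makes a nice end-to-end check here, since it walks the delegation down from the root and shows exactly which servers answer at each step:

dig +trace werkkrew.com.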

In summary, my VPS is the only place where I can modify my zones, but Linode's public name servers are what actually serve the DNS queries to the public, which turned out to be the perfect solution for me.

The next thing I did was enable DNSSEC on all of my zones, which will be covered in another post.
