None of the biggest internet services are DNSSEC-enabled
Breakthrough needed to increase actual usage levels
Over the last 2 years, there has been considerable criticism of DNSSEC from within the DNS world itself. The main focuses of discontent have been the complexity of the protocol, poor market adoption and aging of the design. The causes of sluggish adoption should be sought in DNSSEC's 'invisibility' at the application level, and historical design decisions that have since been overtaken by developments.
In this article, we explore the main criticisms and explain how DNSSEC can nevertheless be deployed in large-scale critical applications without causing problems. Simplifications are still being made and innovations being devised, enabling DNSSEC to meet current expectations despite its age.
Not only is DNSSEC important for securing the existing DNS system, but it also provides a basis for new and future security technologies. However, a breakthrough is required to get it adopted by the biggest internet service providers, none of whom currently use it for their primary domains.
Over the last 2 years, there has been significant criticism of DNSSEC from within the DNS world itself. The most vocal critic has been Geoff Huston, Chief Scientist at APNIC, who has published several articles expressing his somewhat negative view of the security mechanism. At its worst, Huston says, DNSSEC is a half-cooked, awkward addition to the DNS system, and a poorly adopted innovation [which is not the same as a poor innovation] that refuses to disappear, but has not been succeeded by anything better and more modern. With adoption remaining sluggish even after several decades, he argues, the problems to be addressed and the solutions to them need redefining, just as with IPv6.
Notwithstanding Huston's own acknowledgement that he is "a bit too enthusiastic" and "overstating [his] case", we disagree with him on numerous points. In the following paragraphs, we consider the main criticisms of DNSSEC: the protocol's complexity, poor market adoption and aging of the design.
Let's start with the complexity of the DNSSEC protocol. DNSSEC does indeed make the modern DNS significantly more complex than the DNS used to be. Whereas the traditional DNS was a largely administrative system, DNSSEC adds an entire new layer to the old DNS infrastructure, featuring a security mechanism based on public-key cryptography.
However, public-key cryptography is nowadays a standard element of many internet applications – including TLS, RPKI, SSH, S/MIME, OpenPGP and blockchains – meaning that the underlying concepts are familiar to many people in the ICT world.
Aspects of DNSSEC that require particular attention are the delegations, where various layers of the distributed DNS infrastructure have to be linked together using DS records, and the signing of DNS responses for non-existent names (authenticated denials of existence), for which NSEC(3) was devised. However, all non-trivial applications of public-key cryptography have similar complexities.
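To make that more concrete: the links in the chain of trust can be inspected with a few ordinary DNS queries. The sketch below is a minimal illustration using the dnspython library and an illustrative domain name; it checks whether a key-signing key (KSK) published in the child zone matches a DS record published in the parent zone.

```python
# Minimal sketch using dnspython; the domain name is illustrative.
import dns.resolver
import dns.dnssec

DOMAIN = "example.nl"  # substitute any signed domain name

# The DS records live in the parent zone (here .nl) ...
ds_answer = dns.resolver.resolve(DOMAIN, "DS")
# ... and must correspond to a DNSKEY published in the child zone itself.
dnskey_answer = dns.resolver.resolve(DOMAIN, "DNSKEY")

for key in dnskey_answer:
    if key.flags & 0x0001:  # SEP bit set: this is a key-signing key (KSK)
        # Recompute the DS from the DNSKEY and compare it with the parent's
        # copy (assuming a SHA-256 digest, the most common choice).
        ds = dns.dnssec.make_ds(DOMAIN, key, "SHA256")
        match = any(ds == published for published in ds_answer)
        print(f"KSK {dns.dnssec.key_id(key)}: matching DS in parent = {match}")
```

A validating resolver performs essentially this comparison (plus verification of the RRSIG signatures) at every delegation, all the way down from the root.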
Considered in the round, the DNSSEC protocol is not fundamentally more complex than, say, a blockchain protocol. We therefore prefer to invert the complexity criticism; we would say that the old DNS protocol was much too simple for the modern internet.
What's more, the .nl zone serves to demonstrate that DNSSEC can be deployed without causing problems. At present, 62 per cent of .nl domain names are signed, and nearly 60 per cent of inbound DNS traffic comes from validating resolvers.
Although the great majority of top-level domains support DNSSEC, the Netherlands is unfortunately one of just a handful of countries with such a high level of DNSSEC adoption. Only about 5 per cent of domain names under .com (by far the biggest top-level domain) are signed, for example.
Figure 3: By 2024, approximately 5% of .com domain names were signed with DNSSEC.
In 2023, Huston also commented on how much DNSSEC is actually used (i.e. how often validating resolvers query signed domain names). Globally, he suggested that the figure was only about 1 per cent. The basis for that number was APNIC data showing that 35 per cent of the world's internet users have validation enabled, coupled with Cloudflare's disclosure that only 3 per cent of its inbound traffic is heading for DNSSEC-enabled domains. Huston multiplied the one figure by the other (0.35 × 0.03 ≈ 0.01) to arrive at his 1 per cent figure.
Figure 5: Number of validating resolvers worldwide. [Source: APNIC]
Figure 6: Proportion of inbound DNS queries handled by Cloudflare that relate to signed domain names. [Source: APNIC]
The reason for the disappointing level of actual use is that none of the most frequently visited second-level domains is DNSSEC-enabled. So, for example, google.com, youtube.com, facebook.com, instagram.com, x.com, whatsapp.com, wikipedia.org, yahoo.com, reddit.com, amazon.com, chatgpt.com, tiktok.com, netflix.com, linkedin.com and microsoft.com are all unsigned.
Evidently, those domains' operators see the network overhead, and the potential impact and consequent reputational risk associated with a DNSSEC outage as outweighing the added value of enhanced security.
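That claim is easy to spot-check yourself: a signed delegation is marked by a DS record in the parent zone, so a domain without a DS record is not covered by the chain of trust. A minimal sketch using the dnspython library, with an illustrative sample of domains:

```python
# Minimal sketch using dnspython; the list of domains is illustrative.
import dns.resolver

domains = ["google.com", "wikipedia.org", "example.nl"]

for name in domains:
    try:
        dns.resolver.resolve(name, "DS")
        print(f"{name}: DS record present, delegation is signed")
    except dns.resolver.NoAnswer:
        print(f"{name}: no DS record, delegation is insecure (unsigned)")
    except dns.resolver.NXDOMAIN:
        print(f"{name}: domain does not exist")
```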
According to Huston, the reason why DNSSEC has not been more widely adopted is primarily economic. DNSSEC is an infrastructure-level security mechanism. For it to succeed, the benefits would need to be felt at the application level. The impact and value of DNSSEC security are not currently apparent at that level, he argues. If a signed domain name fails validation, the resolver simply refuses to resolve it, without ceremony or explanation. Conversely, an application that does establish a connection has no idea whether DNSSEC validation actually took place, so it has no choice but to perform its own authentication, regardless of the DNSSEC security status.
Writing in response to Huston's blog, Edward Lewis, who was involved in DNSSEC's development, suggested that the obstacles standing in the way of adoption aren't exclusively economic. He believes that some of the design choices made during the protocol's development have since been overtaken by events.
At that time, it was pointed out that securing the mapping of a domain name to an IP address was only one part of the puzzle. While it assures the authenticity and integrity of DNS data, DNSSEC does not prevent snooping (confidentiality). Furthermore, without RPKI, routing to the correct host is not assured either.
Another design choice made during development was that all the records in a zone should be pre-signed. At that time, server systems were not yet secure enough for storing cryptographic keys. It was therefore decided that all records should be signed in advance, so that the private keys could be kept offline.
The downside of that decision is that it precludes dynamic (on-the-fly) signing of DNS records. Consequently, the potential benefits of dynamic signing – more convenient and efficient zone updating, plus the possibility of generating negative responses and signing wildcard records and CNAME/DNAME references – cannot be realised. Dynamic signing would also remove the need for the complex (and partially outmoded) NSEC(3) mechanism. However, that would depend on a zone's authoritative name servers all having the same owner, or all having access to the private key(s) by some other means.
A final criticism of DNSSEC is that the chain of trust does not generally extend all the way to the end user's system: the so-called 'last mile' in a set-up with a stub resolver is not secured. Most end users therefore have to trust that the AD flag was correctly set by their caching DNS server, and has not been compromised in transit. That problem and the lack of confidentiality can be partially resolved by encrypting DNS traffic, but ideally every client should perform its own DNSSEC validation.
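The sketch below (dnspython, with an illustrative resolver address and query name) shows the stub resolver's predicament: it can request DNSSEC processing and read back the AD flag, but it has no way of verifying that flag itself.

```python
# Minimal sketch using dnspython; resolver address and query name are illustrative.
import dns.message
import dns.query
import dns.flags

RESOLVER = "9.9.9.9"   # a validating caching resolver
QNAME = "example.nl"   # any signed domain name

# Ask for DNSSEC processing (sets the EDNS DO bit), then inspect the AD flag.
query = dns.message.make_query(QNAME, "A", want_dnssec=True)
response = dns.query.udp(query, RESOLVER, timeout=5)

if response.flags & dns.flags.AD:
    print("Resolver reports the answer as validated (AD flag set).")
else:
    print("No AD flag: the answer was not validated, or the zone is unsigned.")
# Either way, the stub has to trust the resolver and the path to it.
```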
Automation of DNSSEC management tasks is the best way to reduce the complexity, errors and risks associated with manual procedures. Indeed, support for automation is the main thing that distinguishes PowerDNS software from other authoritative name servers.
Re-signing and key management were fully automated some years ago. More recently, mechanisms have become available that also automate the exchange of cryptographic records between the various links in an established chain of trust.
Having previously required manual work, DS record updating can now be fully automated as well (using CDNSKEY and/or CDS records). That removes the need to divide the cryptographic key pair into a cascade of KSK and ZSK pairs. Cascading was devised to limit the exchange of key material between child zone and parent zone, because updating the DS record in the parent zone typically involves interaction with another operator. By default, PowerDNS already uses a single CSK pair (with a long validity period).
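In practice, the parent operator (or registry) periodically scans the child zone for the CDS and CDNSKEY records in which the child publishes its desired DS content; when that content differs from the DS record currently in the parent zone, the parent updates the delegation accordingly (RFC 7344). A minimal dnspython sketch of that child-side lookup, with an illustrative zone name:

```python
# Minimal sketch using dnspython; the child zone name is illustrative.
import dns.resolver

CHILD_ZONE = "example.nl"

for rdtype in ("CDS", "CDNSKEY"):
    try:
        for rdata in dns.resolver.resolve(CHILD_ZONE, rdtype):
            print(f"{CHILD_ZONE} {rdtype}: {rdata}")
    except dns.resolver.NoAnswer:
        print(f"{CHILD_ZONE}: no {rdtype} record published")
```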
The CDNSKEY/CDS mechanism complements the mechanism described in RFC 5011. Validating resolvers can use the latter mechanism to migrate to a new trust anchor without involving an operator. A resolver can therefore perform the entire process of rolling over the root KSK pair, whose current trust anchor must be present on each validating resolver, on an automated basis. Since the (first) root KSK rollover in 2017/2018, RFC 5011 has been adopted by almost all validating resolvers.
The simplification of NSEC(3) and the KSK/ZSK cascade, and the automation of cryptographic record exchange between the various levels in the DNS hierarchy, have done a great deal to reduce the complexity, inefficiencies and risks associated with DNSSEC. More fundamentally, DNSSEC was originally designed on the assumption that its introduction would be a bottom-up process. It was envisaged that individual islands of trust, having initially been grouped together under the trust anchor of the DLV service, would gradually be moved to the regular DNS namespace as the top-level domains and ultimately the root zone were signed.
However, the adoption of DNSSEC has been a top-down process: the root was signed in 2010 and more than 90 per cent of top-level domains now support DNSSEC. Globally speaking, the rest of the DNS hierarchy is slowly following suit. That is why both Huston and Lewis attach great importance to development of an entire new delegation method for DNS.
The new DELEG record type [1] is therefore being developed to modernise the DNS delegation mechanism, which is now more than 40 years old. First of all, it will replace the NS and glue records in the parent zone. It will also enable reference to an encrypted DNS service based on DoT, DoH or DoQ, and specification of an alternative port number. Finally, it will be possible to use a DELEG record as the starting point for a series of SVCB/CNAME references, so that delegations for multiple subdomains can be shared and hosted at a central location. That allows service providers and DNS operators to make bulk changes without modifying the parent zone. As a result, it is easier for registrars/registrants to contract out DNS server operation to DNS service providers.
If introduced, the new DELEG record type will make delegation to subdomains explicitly top-down and authoritative. The old soft delegation method will be replaced by a method aligned with the hard, authoritative DS linkage of the DNSSEC hierarchy.
Furthermore, signing of the delegation prevents substitution attacks, and the ability to stipulate encrypted transport addresses the 'last mile' problem with stub resolvers. The use of a TCP connection also opens the way for a further innovation: a resolver could ask the caching DNS server to send the entire chain of DNS(SEC) records needed for validation at the same time (as with the CHAIN queries in RFC 7901).
As well as being of direct importance for the security of the DNS, DNSSEC provides an infrastructure that supports a variety of new security applications. The most important applications that make use of DNSSEC are the e-mail security mechanisms SPF, DKIM, DMARC and DANE, although DNSSEC is strictly mandatory only for the latter. What's more, in much the same way as DANE/TLSA does with TLS certificates, SSHFP and OPENPGPKEY build on the DNSSEC infrastructure to cryptographically anchor the public keys for SSH and OpenPGP respectively. Indeed, OPENPGPKEY (DANE for OpenPGP) resolves OpenPGP's key distribution and key authentication problem, which has long been seen as an obstacle to the adoption of OpenPGP.
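As an illustration of how such applications build on the DNS(SEC) infrastructure, the sketch below (dnspython, with illustrative host names) performs the TLSA lookup that a DANE-aware sending mail server would do for a receiving mail server; without DNSSEC, the answer could not be relied upon.

```python
# Minimal sketch using dnspython; the mail server name is illustrative.
import dns.resolver

MX_HOST = "mail.example.nl"
TLSA_NAME = f"_25._tcp.{MX_HOST}"   # DANE name for SMTP on port 25

try:
    for rdata in dns.resolver.resolve(TLSA_NAME, "TLSA"):
        print(f"usage={rdata.usage} selector={rdata.selector} "
              f"matching_type={rdata.mtype} data={rdata.cert.hex()}")
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    print(f"No TLSA record for {TLSA_NAME}: DANE is not deployed here")
```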
Another possible application of DNSSEC security is the issue of domain-validated TLS certificates, which – contrary to what the name suggests – does not currently require DNSSEC security. At present, all you have to do to get such a certificate is demonstrate once that you have control of a domain (by e-mail or via the web). The risk of an attacker circumventing that relatively basic domain verification process to obtain a valid TLS certificate is explicitly highlighted in RFC 5452.
The same is true of a CAA record, which specifies which CAs are allowed to issue TLS certificates for a domain. RFC 8659 strongly recommends using DNSSEC in that context, but stops short of mandating it.
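A CAA lookup is itself an ordinary DNS query, which is precisely why the record only gives firm guarantees when the zone is signed. An illustrative dnspython sketch:

```python
# Minimal sketch using dnspython; the domain name is illustrative.
import dns.resolver

DOMAIN = "example.nl"

try:
    for rdata in dns.resolver.resolve(DOMAIN, "CAA"):
        print(f'{DOMAIN} CAA: flags={rdata.flags} '
              f'{rdata.tag.decode()} "{rdata.value.decode()}"')
except dns.resolver.NoAnswer:
    print(f"{DOMAIN}: no CAA record published (any CA may issue)")
```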
As the examples given above illustrate, not only is DNSSEC important for securing the existing DNS system, but it also provides a basis for new and future security technologies. Moreover, the various simplifications and innovations described in this blog post mean that DNSSEC can meet modern expectations despite its age.
However, a real breakthrough in the take-up of DNSSEC depends on the major service providers changing their policies. So far, all we have seen is the creation of new DNSSEC-secured MX hosts by Google (mx1/2/3/4.smtp.goog) and Microsoft (mx.microsoft). We therefore warmly endorse Lewis's plea for clarification of the reasons why all the big operators are unwilling to enable DNSSEC on their primary domains.