Quad9 Network Problem 2022-12-12

Issue Report: A Routing Leak (and how to prevent them)

A routing leak occurred earlier this week that impacted Quad9 and some of our users. Here are the details on the issue, and how, as a community, we can prevent this from happening in the future by taking two actions today.

The Issue

This week, a routing leak by an unrelated network in Kinshasa, DRC, caused sub-optimal paths to Quad9 for a few European Union and North American operators from 12:15 to 13:40 UTC on Monday.

Liquid Telecom became the preferred path toward Quad9 for Verizon, BT, and some Orange France networks, overloading our site in Kinshasa.

The Response

Liquid Telecom fixed the issue quickly once informed, but responses were slow or lost for about 90 minutes for a small percentage of our users.

These types of leaks are infrequent, but as things stand today, there isn't much we or anyone else can do to prevent them. Typically, they stem from configuration errors in filters, either outbound or inbound, on BGP routing devices. A network receives a list of IP networks (or "prefixes") from its transit providers or peers via BGP and uses that list internally to decide where to send packets. A BGP-speaking device may also assert that it is the ultimate destination for a particular IP network. In rare conditions, a BGP-speaking device may "transit" those prefixes, saying, in essence: I heard about these IP networks from one provider to whom I am connected, and I claim to be a path to them; furthermore, I will tell the other networks to which I am connected that they should use me as a path for those networks.
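The propagation behavior described above can be sketched in a few lines. This is a toy model, not real BGP: the ASNs and prefixes below are illustrative documentation values, not the parties involved in this incident.

```python
# Toy model of BGP route propagation: an AS that re-announces routes
# learned from one provider to another inserts itself into the AS path
# and becomes an (unintended) transit path for those prefixes.

def announce(learned_routes, my_asn):
    """Re-announce routes, prepending our own ASN to each AS path."""
    return [(prefix, [my_asn] + as_path) for prefix, as_path in learned_routes]

# Routes a small AS hears from provider A: (prefix, AS path to origin).
from_provider_a = [
    ("192.0.2.0/24", [64500, 64496]),     # origin AS 64496, heard via AS 64500
    ("198.51.100.0/24", [64500, 64497]),
]

# Correct behavior would be to announce only your own prefixes to
# provider B. The leak: passing along everything heard from provider A.
leaked = announce(from_provider_a, my_asn=64511)
for prefix, as_path in leaked:
    print(prefix, as_path)   # e.g. 192.0.2.0/24 [64511, 64500, 64496]
```

Any network that accepts these announcements now sees AS 64511 as a path to prefixes it never had authority over.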

When a network claims to be in the path but does not have the authorization to make that claim, that is a "routing leak." Leaks are much less common than they were in the past, but they still happen. Typically, the error is made by a smaller organization connected to two or three different ISPs: it accidentally "announces" to provider B (and C and D, etc.) all the IP networks it heard from provider A. Larger, more experienced ISPs usually have inbound filters so that even if such a misconfiguration occurs, the only IP networks that make it past the filters are those the smaller organization is authorized to originate.
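The inbound filtering described here amounts to checking each received prefix against an allowlist of what the customer is authorized to originate (in practice, often built from IRR data). A minimal sketch, with illustrative prefixes:

```python
# Inbound prefix filter: accept from a customer only announcements
# covered by that customer's authorized prefixes; reject the rest.
import ipaddress

def filter_inbound(announcements, allowed_prefixes):
    """Split announcements into accepted and rejected against an allowlist."""
    allowed = [ipaddress.ip_network(p) for p in allowed_prefixes]
    accepted, rejected = [], []
    for prefix in announcements:
        net = ipaddress.ip_network(prefix)
        if any(net.subnet_of(a) for a in allowed):
            accepted.append(prefix)
        else:
            rejected.append(prefix)   # a leaked prefix stops here
    return accepted, rejected

# The customer is authorized for 203.0.113.0/24 but leaks 9.9.9.0/24.
accepted, rejected = filter_inbound(
    ["203.0.113.0/24", "9.9.9.0/24"], ["203.0.113.0/24"])
print(accepted, rejected)
```

With a filter like this in place upstream, a customer's leak never propagates beyond its immediate provider.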

This week, three conditions occurred in combination, causing the issue:

  1. A single small organization (STE MDICIS SAS - AS328552) leaked the network table it heard from Packet Clearing House (AS42 - one of our providers) to their transit ISP connection, which was Liquid Telecom (AS30844).
  2. Liquid Telecom's filters did not catch those announcements and block them. So they, in turn, re-announced our IP networks to their peers and transit providers, causing a limited but still significant number of users in an unexpected set of geographies to start using that path to reach Quad9's network node in Kinshasa.
  3. Quad9's capacity in Kinshasa was unable to handle the volume of queries, so we saw slow replies or lost packets for many users who ended up at our Kinshasa anycast location.

Liquid Telecom was easy to reach, quick to understand the problem, and rapidly updated its filters, which we greatly appreciate. These errors are infrequent, and the speed of diagnosis and repair in this case was excellent. Most routing problems resolve quickly. The original network that leaked our prefixes, ultimately the root cause, had no standardized contact data and a blank web page, so we still lack insight into what happened there.

There's a cost equation here that we weighed. The smaller the network operator, the harder it is to get results, for any of several reasons: no easily discoverable contact data, nobody on duty, language barriers, or no fundamental understanding of what they've done (otherwise, they wouldn't have the problem in the first place). Moving one layer "upstream" is often far more successful at getting repairs implemented quickly.

The Bigger Fix

There is a better solution to this type of problem if we each act to implement better operational standards. There are several things you can do today to help.

Action 1: Join MANRS

The global initiative around Mutually Agreed Norms for Routing Security (MANRS), supported by the Internet Society, provides crucial fixes to reduce the most common routing threats. MANRS offers specific actions you can take based on your role as a Network Operator, Internet Exchange Point, CDN or Cloud Provider, or Equipment Vendor.

Think of it this way: insecure routing is one of the most common vectors for both malicious and accidental network threats. Inadvertent errors can take entire countries offline, while attackers can steal an individual's data or hold an organization's network hostage. We need an internet that pressures out bad actors and enforces operational techniques that prevent accidental misconfigurations.

When you participate in MANRS, you'll take simple, concrete actions tailored to your function, including:

  1. Filtering (ensure the correctness of your announcements and those of your customers)
  2. Anti-spoofing (enable source address validation)
  3. Coordination (maintain globally accessible, up-to-date contact info)
  4. Global validation (publish your data so others can validate routing information)
  5. Tools (provide monitoring and debugging tools)
  6. Promotion (encourage the adoption of MANRS among your peers)
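The anti-spoofing action, for instance, means validating that packets arriving from a customer port actually carry source addresses the customer holds, similar in spirit to strict uRPF. A rough sketch with an illustrative customer prefix:

```python
# Source address validation (anti-spoofing): drop packets whose source
# address falls outside the prefixes assigned to the customer interface
# they arrived on.
import ipaddress

CUSTOMER_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]

def source_valid(src_ip):
    """Accept a packet only if its source lies within the customer's prefixes."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES)

print(source_valid("203.0.113.7"))   # legitimate customer source: accepted
print(source_valid("9.9.9.9"))       # source outside the prefix: dropped
```

Applied at the network edge, this check prevents a customer from emitting traffic with forged source addresses, one of the building blocks of reflection attacks.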

Get more details on MANRS and join today.

Action 2: Adopt RPKI-Signed Prefixes

IP address blocks can be redirected, either maliciously or by mistake. Both cases stem from a weakness in inter-domain BGP routing on the internet. Let me explain:

We know that the Border Gateway Protocol (BGP) builds a map of interconnections on the internet so that packets can be sent across different networks to reach their final destination. It provides internet resiliency by offering multiple paths in case of a failure along the route. A misconfiguration (a leak) or a hijack (intentional misuse) of an IP prefix causes problems; as noted, both happen. Today, leaks and hijacks can be detected and corrected, but the process can take anywhere from a few minutes to a few days. In the meantime, the redirected traffic can be intercepted and used for malicious purposes.

The weakness in BGP is this: there is no built-in validation of "ownership" of an IP prefix announced in BGP. Only recently has there been a push toward a common tool that allows anyone to ask the regional routing authorities: "Is this BGP announcement I am receiving a valid one from this autonomous system (aka network operator)?"

RPKI-Signed Prefixes help a lot. Resource Public Key Infrastructure (RPKI) establishes a semi-public database of registered internet resources in which, and here is the vital part, each resource owner and entry is authenticated with cryptography. RPKI lets you trust each entry, check the routing you receive against the database, and make adjustments confident that you have accurate, valid information on the owner and the owner-to-resource relationship.
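The check a router performs against this database is route origin validation: a route is "valid" if a ROA covers the prefix, the prefix length is within the ROA's maxLength, and the origin AS matches; "invalid" if a covering ROA exists but those checks fail; "not-found" otherwise. A simplified sketch (using Quad9's well-known 9.9.9.0/24 and origin AS19281 as the example ROA; this is an illustration of the logic, not an authoritative validator):

```python
# Simplified RPKI route origin validation (in the spirit of RFC 6811).
import ipaddress

# ROA entries: (covered prefix, maxLength, authorized origin ASN)
ROAS = [(ipaddress.ip_network("9.9.9.0/24"), 24, 19281)]

def validate(prefix, origin_asn):
    """Classify an announcement as valid, invalid, or not-found."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_asn in ROAS:
        if net.subnet_of(roa_net):
            covered = True
            if net.prefixlen <= max_len and origin_asn == roa_asn:
                return "valid"
    return "invalid" if covered else "not-found"

print(validate("9.9.9.0/24", 19281))       # valid: ROA matches origin
print(validate("9.9.9.0/24", 64511))       # invalid: wrong origin AS
print(validate("198.51.100.0/24", 64511))  # not-found: no covering ROA
```

Routers configured for route origin validation can then drop "invalid" announcements automatically, stopping a mis-originated hijack at the first validating network.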

How can you implement RPKI? There are two models for RPKI - hosted and delegated.

Hosted is relatively easy, and if you have IP networks that you announce via BGP, you should sign your prefixes; doing so has no significant downsides. Then look at how you can validate the prefixes you receive from your peers or customers. That is more complex, but it is an excellent step towards better routing stability. Get a full explanation of RPKI, the models, and implementation details from your Regional Internet Registry.

Would RPKI have prevented our problem this week? Not necessarily. This problem was not a 'hijack', where a different ASN announces Quad9's prefixes (which has happened in the past, accidentally). This week's problem was a routing leak: the announcements still carried the legitimate origin, but paths were added and traffic forked in a direction that was never intended. RPKI validation is, however, fundamental to getting more sophisticated path validation in place; it's the first of several steps needed to create a more secure and stable global routing table.
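To make concrete why origin validation alone would not have caught this leak: the leaked announcements still ended at the legitimate origin AS, and origin validation inspects only that last hop, not the path in between. A tiny illustration (ASNs other than Quad9's 19281 are made up for the example):

```python
# Why a routing leak passes origin validation: only the origin AS is
# checked, and a leak leaves the origin intact; it corrupts the path.

ROA = ("9.9.9.0/24", 19281)   # (prefix, authorized origin ASN)

def origin_valid(prefix, as_path):
    """Origin validation looks only at the last ASN in the path."""
    return (prefix, as_path[-1]) == ROA

# A normal path and a leaked path both terminate at the true origin,
# so both pass, even though the leaked path should never have existed.
print(origin_valid("9.9.9.0/24", [64500, 19281]))          # normal path
print(origin_valid("9.9.9.0/24", [64510, 64511, 19281]))   # leaked path, still passes
```

Catching the leaked path itself requires path-aware mechanisms built on top of RPKI, which is why origin validation is a first step rather than a complete fix.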

Final Thoughts

This week's routing leak would have had near-zero impact outside the immediate region around the DRC if large telcos peered in-country at internet exchange locations. Intuitively, it's strange that North American traffic would go to Africa as part of the leak, yet during the event we saw users from Verizon and British Telecom heading toward Kinshasa. We understand the technical details of how this happens, but we believe there can be a different way.

A solution exists where telecoms work with internet peering exchange points to link directly to selected external networks, effectively bypassing broader internet routing. Then, when an error happens (and it will), the "splash" from the issue is far more constrained instead of spreading widely. The overall solution is a much larger topic that we'll discuss in the future.

At Quad9, we're working diligently to address this by adding more partners to our network worldwide; South Africa, for instance, will shortly receive additional capacity from one of our partners. It shouldn't have to be this difficult, though. We'd like to see a world where settlement-free peering is even more pervasive, creating a more stable, highly interconnected internet at lower cost.