Withstanding a (D)DoS Attack

In light of recent Internet attacks, whether a sound HA architecture should withstand a massive (distributed) denial-of-service ([D]DoS) attack or be able to mitigate its effects has become a legitimate question. From my point of view, a state-of-the-art HA architecture should have some inherent self-healing capabilities; HA architects should also add another line of defense to assist in at least crippling or weakening (D)DoS attacks and their progeny. Several, sometimes complementary and orthogonal, lines of defense are crucial to prevent (D)DoS attacks, as they are to overall security architectures.
HA in terms of almost 100 percent service availability within strict service level agreements (SLAs) and monitored key performance indicators (KPIs) represents a significant challenge for today's finest engineers and designers. The problem with any (D)DoS defense is that every system's strength defines its weaknesses, too. For example, handing over control of a firewall ruleset to a network intrusion detection system (NIDS) means that any successful trigger of this defense mechanism (for example, via spoofed source addresses) effectively locks out legitimate networks from crucial services. Therefore, a system designed to protect or prevent might become the perfect DoS trap.
NOTE
Recent hostile activities on the Internet have proven to me that, in general, operational staff are overwhelmed by and overburdened with reactive actions because of weak underlying network design and planning.

 

Network HA Approaches

The fundamental principle, and the foundation of network HA, is network link redundancy and redundant hardware (network elements).

Redundant Paths

The underlying design principle is that, for a critical service, at least two equivalent systems should be provided, and topologies should be chosen so that at least two redundant paths to the next device always exist. This is why for so many years many robust and scalable photonic network approaches have been based successfully on protected ring topologies (for example, Synchronous Digital Hierarchy/Synchronous Optical Network [SDH/SONET]) and Resilient Packet Ring (RPR). Just because a lot of folks disliked Token Ring technology for no apparent reason does not mean that ring topologies per se are inferior to bus architectures or star topologies; quite the contrary. With a small number of network elements, point-to-point links will suffice. Usually a collapsed network core consists of three or four network elements (as shown in Figure 12-1).

 

Simple but Effective Approaches to Server HA

Let us consider the network vicinity of a server in the context of its connected network interface cards (NICs), its LAN switch environment, VLAN membership, and exit gateways. Note that two or more NICs attached to redundant switch access ports provide sufficient redundancy, and channel bonding or interface teaming adds link aggregation with the further benefit of redundancy.
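As an illustration, a minimal channel-bonding sketch using the Linux bonding driver and iproute2 might look as follows; the interface names and the address are placeholders, and older distributions accomplish the same with modprobe bonding and ifenslave:

# create an active-backup bond with 100-ms MII link monitoring
ip link add bond0 type bond mode active-backup miimon 100
# enslave two NICs attached to different access switches
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip addr add 192.168.1.10/24 dev bond0
ip link set bond0 up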
Route equalizing per destination or per packet can be configured to exit the two VLAN broadcast domains to which a server is usually hooked up. Beyond VLANs, dynamic routing protocols provide sufficiently fast rerouting around failures. It is fairly straightforward to provide redundant VLAN trunks and trunk termination (redundant routers on a stick) via Virtual Router Redundancy Protocol (VRRP) or Hot Standby Router Protocol (HSRP). This can be combined with equalized default routes (Linux) or floating static route concepts. Load distribution can also be influenced manually via tuned more-specific prefix routes. Return-packet load balancing originating from distant sites is an entirely different story; Domain Name System round-robin (DNS RR), Border Gateway Protocol (BGP) approaches (MED or path prepending), or dedicated load-balancing devices represent possible solutions.
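On Linux, for example, an equalized (multipath) default route across the two exit gateways could be set up roughly like this; the gateway addresses and interface names are placeholders:

# one default route with two next hops; the kernel distributes
# traffic between the gateways of the two server VLANs
ip route add default scope global \
      nexthop via 192.168.1.254 dev eth0 weight 1 \
      nexthop via 192.168.2.254 dev eth1 weight 1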
When experimenting with special load-distribution approaches, keep in mind that Internet Control Message Protocol (ICMP) redirects might affect what you try to accomplish. sysctl provides a hook to disable dissemination of ICMP redirect messages (as shown in Example 12-1).
Example 12-1. Disabling ICMP Redirects for Special Cases
[root@ganymed:~#] sysctl -a | grep redirect
net.inet.ip.redirect = 1
net.inet6.ip6.redirect = 1

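To actually turn these knobs off, something along the following lines should do; the BSD-style sysctl names are the ones from Example 12-1, whereas on Linux the equivalent switches live under net.ipv4.conf.*.send_redirects:

[root@ganymed:~#] sysctl -w net.inet.ip.redirect=0
[root@ganymed:~#] sysctl -w net.inet6.ip6.redirect=0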

Address Resolution Protocol (ARP) cache latency is another issue that considerably affects certain setups. How long an ARP entry remains in a cache until it is removed is implementation-specific and might require manual intervention. To compensate for long timeouts, failover concepts such as VRRP/HSRP use gratuitous (unsolicited) ARP updates.
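For manual intervention, the standard tools suffice; the following quick sketch assumes placeholder addresses and interface names and the Linux iputils arping for the gratuitous update:

arp -an | grep 192.168.1.254      # inspect the cached gateway entry
arp -d 192.168.1.254              # flush a stale entry by hand
arping -U -I eth0 192.168.1.10    # send an unsolicited (gratuitous) ARP update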
Split-view DNS setups are popular, especially in enterprise networks where Network Address Translation (NAT) is used. Split-view DNS essentially means that an internal name server responds to queries for names associated with corporate RFC 1918 addresses and consults the external name server if it fails to resolve global records. Therefore, for true redundancy, two internal and two external DNS servers are advisable.
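With BIND 9, such a split view can be sketched with the view statement; the zone names and match lists below are purely illustrative:

view "internal" {
    match-clients { 10.0.0.0/8; 172.16.0.0/12; 192.168.0.0/16; };
    recursion yes;
    zone "iktech.net" { type master; file "internal/iktech.net.zone"; };
};
view "external" {
    match-clients { any; };
    recursion no;
    zone "iktech.net" { type master; file "external/iktech.net.zone"; };
};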

 

DNS Shuffle Records and Round-Robin (DNS RR)

DNS round-robin (DNS RR), as shown in Example 12-2, is the concept of entering multiple IP addresses for one fully qualified domain name (FQDN). It is qualitatively described in RFC 1794, "DNS Support for Load Balancing." When a DNS resolver (client) request reaches the server, it answers in an unweighted round-robin fashion. Although the server answers with the complete round-robin set, most clients consider only the first entry, which works as long as the server cycles the entries. This results in almost equal but crude and inefficient (unweighted) load distribution to resources of equal content or services. Nevertheless, this approach has several drawbacks, such as DNS caching problems and a considerable percentage of requests being lost when just one constituent of the DNS RR group becomes unavailable.
DNS RR is essentially deployed for migration scenarios, load balancing, and poor man's redundancy architectures. For the Internet Systems Consortium's (ISC) point of view regarding the implications for Berkeley Internet Name Domain (BIND), read the excellent BIND load-balancing comment at http://www.isc.org/products/BIND/docs/bind-load-bal.html. For BIND-specific configuration options, consult the documentation that comes with your version of BIND.
Example 12-2. DNS RR Server Setup
www.iktech.net   300    IN  A   192.168.1.1
www.iktech.net   300    IN  A   192.168.2.1
www.iktech.net   300    IN  A   192.168.3.1


To my knowledge, DNS servers support the following approaches to round-robin-like regimes:[1]
  • Shuffle— Only one address at any given time from a list of address candidates is presented to the resolver (not possible with BIND, but possible with commercial load balancers).
  • SRV records— An added weight integer specifically describes the ordering (weighted DNS RR). This requires application support, however (see the record sketch following this list).
  • Sortlists (Example 12-3)— This refers to sorting of all address pools according to the source address of the querying resolver. For a detailed discussion, consult the BIND documentation at http://www.isc.org/products/BIND/.
    Example 12-3. BIND Sortlist
    sortlist {
               { localhost;
                 { localnets;
                   192.168.1/24;
                   { 192.168.2/24; 192.168.3/24; }; }; };
               { 192.168.1/24;
                 { 192.168.1/24;
                   { 192.168.2/24; 192.168.3/24; }; }; };
    };
    
    

  • Rrset order (Example 12-4)— When a DNS response contains multiple records, it might be useful to configure the order in which the records are placed into the response (shuffle, cyclic round-robin, user-defined).
Example 12-4. BIND Rrset Order
rrset-order {
    class IN type A name "www.iktech.net" order random;
    order cyclic;
};


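To illustrate the SRV bullet above, a weighted record pair might look as follows; the service labels and host names are illustrative, and the second integer is the weight, so SRV-aware clients would direct roughly 60 percent of their requests to www1:

_http._tcp.iktech.net.  300  IN  SRV  10  60  80  www1.iktech.net.
_http._tcp.iktech.net.  300  IN  SRV  10  40  80  www2.iktech.net.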
An alternative to these server-side approaches is to put the intelligence into the resolver/client application. However, this is difficult to predict and to deploy because resolvers are often part of an application.
If you want to manipulate the amount of traffic a specific round-robin participant receives, you can add alias addresses to the server and add additional entries to the DNS configuration. That's pretty much all you can do to alter the unweighted behavior. Be aware of possible caching issues and nondeterministic behavior and have a client-side sniffer ready to debug the queries and responses.
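For instance, giving the first server a second (alias) address and listing both entries skews the distribution toward it; the addresses below are hypothetical:

www.iktech.net   300    IN  A   192.168.1.1      ; first server, primary address
www.iktech.net   300    IN  A   192.168.1.2      ; first server, alias address
www.iktech.net   300    IN  A   192.168.2.1      ; second server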
In closing, note that going one step further to ensure that the receiving server is up and available requires commercial-grade load-balancing solutions such as the Cisco server load balancing (SLB) IOS feature or the Cisco LocalDirector. Consult Cisco.com for a feature overview.
NOTE
For a flexible load-balancing name server written in Perl, by Roland Schemers, see the resources at http://www.stanford.edu/~riepel/lbnamed/.

 

Dynamic Routing Protocols

Dynamic routing is the most flexible and effective approach to provide redundancy for alternative paths and the only way to detect network node, port, or link failures reliably. Routing and standby protocols rely on the simple principle that if a speaker hasn't heard from a neighbor in a certain time, something must be wrong. Load balancing over multiple links can be accomplished in several ways: BGP "pseudo" load balancing in dual-homed Internet service provider (ISP) architectures, Multilink PPP, and link-state Equal-Cost Multi-Path (ECMP) for interior gateway protocol (IGP) paths. It is a good idea to fine-tune protocol parameters for fast-converging resilient architectures or deploy incremental SPF (iSPF). Routing provides the signaling protocols to detect and route around failures within highly meshed nondeterministic IP networks.
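As a sketch of such parameter tuning, aggressive OSPF hello/dead timers on a Quagga/Zebra ospfd could look like the following; the interface name is a placeholder, and the same knobs exist on Cisco IOS:

interface eth0
 ip ospf hello-interval 1
 ip ospf dead-interval 3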

Firewall Failover

Firewall failover is the art of exchanging state information for stateful inspection gateways and NAT tables, at the least, for the purpose of taking over for an equivalent resource. Such a takeover also involves stateful IP Security (IPSec) failover concepts for IPSec tunnel termination and security associations. This usually requires a heartbeat protocol between the master and slave firewall(s) on a dedicated crossover link. Most of the failover devices run in hot-standby mode or in expensive commercial (per-flow) load-balancing clusters.
The OpenBSD packet filter team is already working on a stateful failover concept using the newly introduced pfsync pseudo-device and a crude multicast pfsyncd, together with the OpenBSD firewall (pf) and the shiny new Common Address Redundancy Protocol (CARP). The first integrated release of OpenBSD with all these features available is 3.5. I am sorry that the discussion did not make it into this first edition.
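A rough sketch of what such a CARP/pfsync pairing looks like on the master node follows; addresses, vhid, password, and interface names are placeholders, and the exact ifconfig options have shifted between releases, so consult the carp(4) and pfsync(4) man pages of your version:

sysctl -w net.inet.carp.allow=1                  # make sure CARP is enabled
ifconfig carp0 create
ifconfig carp0 vhid 1 pass s3cr3t carpdev fxp0 192.168.1.1 netmask 255.255.255.0
ifconfig pfsync0 syncdev fxp1                    # dedicated crossover link for state sync
ifconfig pfsync0 up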
