Detecting, Preventing, and Mitigating DoS or Distributed DoS Attacks Research Paper
- Length: 6 pages
- Sources: 6
- Subject: Education - Computers
- Type: Research Paper
- Paper: #82938417
Detecting, Preventing, and Mitigating DoS or DDoS Attacks
Distributed Denial of Service (DDoS) attacks are constantly evolving, growing from small floods of a few megabits to massive streams of data. Many Internet service providers lack the capacity and the ability to mitigate this problem. Most of these attacks are run from one master station that takes control of many stations, sometimes millions, and uses them as zombies to launch the attack. This paper uses ideas from peer-reviewed articles to summarize aspects of the detection, prevention, and mitigation of DoS attacks.
Rationale for Selecting the Papers
The first research paper selected, by Kompella, Singh, and Varghese (2007), is titled "On Scalable Attack Detection in the Network," from the IEEE/ACM Transactions on Networking journal. I selected this research paper because it presents significant research on the current issue of denial of service. The paper also contains knowledge that draws researchers' attention to this topic.
The second research paper of interest is Wang and Shin's (2003) "Transport-Aware IP Routers: A Built-in Protection Mechanism to Counter DDoS Attacks," because it examines IP routers, which are essential to the mitigation of DDoS.
The third article, by Chen, Park, and Marchany (2007), is titled "A Divide-and-Conquer Strategy for Thwarting Distributed Denial-of-Service Attacks." This research paper is useful to my study because of the mitigation technique it identifies: the 'divide and conquer' strategy. The concept is used in other professional spheres, but the authors have identified its significance and application here.
The Techniques and Summary
Kompella et al.'s (2007) "On Scalable Attack Detection in the Network" sought to find out whether aggregation could be used to scale attack detection to very high speeds. Aggregation has been used with other network functions to attain higher speeds in IP lookup and network Quality of Service. Some attacks, such as evasion and TCP hijacking, are difficult to detect in a scalable fashion. The paper focuses on scalable DDoS and scan detection methods, proposing a new scalable technique called partial completion filters (PCFs). The three types of attacks discussed by Kompella et al. (2007) are partial completion attacks, scanning attacks, and bandwidth attacks.
Bandwidth attacks are detectable using MULTOPS, sketches, and multistage filter techniques, and with tools such as AutoFocus. The best method to detect DDoS attacks such as TCP scans and partial completion attacks is the PCF data structure. PCFs differ from multistage filters in their non-monotonicity, their possibility of false negatives, and their analysis: multistage filter counters only increase and have one-sided errors, while PCF counters can also decrease and are analyzed using the central limit theorem (Kompella, Singh, & Varghese, 2007).
The proposed partial completion filter technique for detecting TCP scans and partial completion attacks is promising. However, it requires more research, because one has to choose between performance and completeness once intrusion detection in the network is done scalably; it cannot be implemented in a network without affecting performance. In general, it can be implemented on the line card, in an ASIC that has access to packet information such as port, destination, and source. The PCF uses that information to update its counters. Once this information has been obtained, access control rules use the PCF output to trigger the forwarding or blocking of any suspicious packets (Chen, Park, & Marchany, 2007).
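A minimal sketch can illustrate the partial completion idea described above. This is not the authors' implementation; the hashing scheme, bucket count, and alarm threshold here are illustrative assumptions. Counters keyed by a hash of the destination rise on TCP SYNs and fall on matching FINs, so a large positive residue suggests many partially completed flows:

```python
import hashlib
from collections import defaultdict

class PartialCompletionFilter:
    """Illustrative partial completion filter: hashed counters rise on
    TCP SYNs and fall on FINs, so a large residue flags half-open flows."""

    def __init__(self, num_buckets=1024, threshold=5):
        self.num_buckets = num_buckets
        self.threshold = threshold           # residue that raises an alarm
        self.counters = defaultdict(int)

    def _bucket(self, dst, dport):
        digest = hashlib.sha256(f"{dst}:{dport}".encode()).hexdigest()
        return int(digest, 16) % self.num_buckets

    def observe(self, dst, dport, kind):
        b = self._bucket(dst, dport)
        if kind == "SYN":        # connection attempt: count up
            self.counters[b] += 1
        elif kind == "FIN":      # completion: count down (non-monotonic)
            self.counters[b] -= 1

    def suspicious(self):
        # buckets with many unmatched SYNs point at partial completion attacks
        return [b for b, c in self.counters.items() if c > self.threshold]
```

Note the non-monotonicity mentioned earlier: unlike multistage filter counters, these counters also decrease, which is what lets balanced SYN/FIN traffic cancel out while attack traffic accumulates.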
The major weaknesses of partial completion filters are that they produce high false positives and degrade the performance of any device when fully implemented. The limited research in this area also restricts the amount of useful information available. On the other hand, partial completion filters have the strength of detecting attacks that other detection mechanisms cannot (Wang & Shin, 2003). The technique also shows that behavioral aliasing and spoofing need to be addressed in any scalable solution, because if they are not addressed properly they eventually cause the technique to fail.
A mitigation technique discussed in the second research paper is the transport-aware IP (tIP) method, which provides the router with an architecture that classifies its services and manages its resources. The tIP router has a fine-grained QoS classifier and an adaptive, weight-based resource manager. It classifies packets in two stages, which decouples the fine-grained Quality of Service lookup from the common routing lookup at core routers. Service differentiation and resource isolation provide a strong built-in protective mechanism against DDoS attacks (Wang & Shin, 2003).
The technique is implemented by a transport-aware IP (tIP) router architecture that is expected to provide the desired differentiation of services and isolation of resources among thinner aggregates without compromising scalability. It uses layer-4 packet headers to perform packet classification and resource management in IP routers. Implementing this ability requires a fine-grained Quality of Service classifier and an adaptive resource manager, which give the router its tIP infrastructure. The fine-grained classifier divides each behavior aggregate (BA) into thinner aggregates, and the resource manager provides the required differentiation of services and isolation of resources among those thinner aggregates. The router employs a two-stage packet classification mechanism, which decouples the QoS lookup from the routing lookup: the first stage performs the routing lookup at the input port, and the second stage performs the fine-grained QoS lookup at the output port after the packet has been routed (Wang & Shin, 2003).
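The two-stage classification described above can be sketched as follows. This is only an assumed toy model of the idea, not the authors' router design: the routing table, QoS table, port names, and aggregate labels are all invented for illustration. Stage one does the coarse routing lookup at the input port; stage two does the fine-grained QoS lookup at the output port using layer-4 fields:

```python
import ipaddress

# Illustrative tables (all entries assumed for the example)
ROUTING_TABLE = {                # stage 1: routing lookup at the input port
    "192.168.1.0/24": "port2",
    "10.0.0.0/8": "port3",
}
QOS_TABLE = {                    # stage 2: fine-grained lookup at output port
    ("port2", "TCP", 80): "web-aggregate",
    ("port2", "UDP", 53): "dns-aggregate",
}

def stage1_route(dst_ip):
    """Coarse routing: longest-prefix match on the destination address."""
    addr = ipaddress.ip_address(dst_ip)
    best, out = -1, None
    for prefix, port in ROUTING_TABLE.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and net.prefixlen > best:
            best, out = net.prefixlen, port
    return out

def stage2_classify(out_port, proto, dport):
    """Fine-grained QoS lookup on layer-4 headers, after routing."""
    return QOS_TABLE.get((out_port, proto, dport), "best-effort")

def forward(dst_ip, proto, dport):
    out_port = stage1_route(dst_ip)                      # stage 1
    aggregate = stage2_classify(out_port, proto, dport)  # stage 2
    return out_port, aggregate
```

The decoupling matters for scalability: the core routing lookup stays unchanged, and only the traffic that has already been routed to an output port pays for the finer per-aggregate classification.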
Simulation reveals the scalability and robustness of the tIP technique. It shows that isolating resources has a significant throttling effect on flooding traffic directed at the victim server, and that the technique protects the network and confines flooding traffic by dropping attack packets destined for the victim close to the attacker. Flooding traffic has insignificant or no impact on normal traffic belonging to different transport protocols; for example, UDP and ICMP flooding traffic does not interfere with the transport of normal TCP traffic (Yu, Zhou, Doss, & Weijia, 2011). Thus, this is a highly promising technique for mitigating DDoS. Apart from these strengths, the tIP router provides better performance for session-based applications, which achieve higher effective throughput. It also does not require the support of a DiffServ infrastructure, because it works with the best-effort model (Francois, Aib, & Boutaba, 2012). The weakness of this technique is that when network resources are under-provisioned, good service differentiation and isolation are not possible, which leaves the network vulnerable to ACK attacks (Khattab, Melhem, Mosse, & Znati, 2006).
Chen et al. (2007) discussed another technique for thwarting distributed denial-of-service attacks. The technique offers attack diagnosis (AD) using a 'divide and conquer' strategy. Attack diagnosis uses the concepts of pushback and packet marking, and its architecture mirrors that of a DDoS attack: it uses the victim host to detect the attack and executes packet filtering near the attack sources (Chen, Park, & Marchany, 2007). It is a reactive technique, in that the defense is only activated when an attack is detected near the victim. The technique instructs the upstream routers to mark packets deterministically. Once the packets have been marked, traceback is possible, allowing the victim to issue an attack diagnosis command to an AD-enabled router to filter out the attack packets (Yu, Zhou, Doss, & Weijia, 2011).
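The marking-and-traceback flow can be sketched in a few lines. This is an assumed simplification, not the paper's marking scheme (real schemes must fit marks into scarce IP header bits); router identifiers, the packet dictionary, and the filter lists are all illustrative:

```python
def mark(packet, router_id):
    """Deterministic marking: each AD-enabled router on the path appends
    its identifier, so the victim can later reconstruct the path."""
    packet.setdefault("marks", []).append(router_id)
    return packet

def traceback(packet):
    """Victim side: the first mark came from the router nearest the source."""
    marks = packet.get("marks", [])
    return marks[0] if marks else None

def issue_ad_command(routers, router_id, signature):
    """Victim instructs the traced AD-enabled router to filter packets
    matching the attack signature (recorded here in a filter list)."""
    routers[router_id].append(signature)
```

The point of the reactive design is visible here: nothing is filtered until the victim detects an attack, traces it back via the marks, and pushes a filter command to the router closest to the attacker.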
The technique is composed of two modules: attack detection and packet filtering. The attack detection module identifies attack signatures in packets; these are characteristics of the packets, such as the destination and source IP header values. The packet filtering module uses the information summarized by the attack detection module to drop suspicious attack packets. The two modules are usually placed near the victim; however, having them that close limits their effectiveness. It is therefore worthwhile to place them as near the attack source as possible, so that attack packets do not become deeply entangled in the core network (Chen, Park, & Marchany, 2007).
This technique is quite promising because it identifies one attacker, isolates them, and throttles them, repeating the process until all attackers are neutralized. The technique can be further enhanced using its more advanced form, Parallel Attack Diagnosis (PAD), which is capable of throttling traffic coming from many attackers simultaneously.
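The contrast between the one-at-a-time AD loop and the parallel PAD variant can be made concrete with a toy model. The per-source rates, capacity, and limit below are invented for illustration; real AD/PAD operate on live traffic and upstream filter commands, not an in-memory dictionary:

```python
def attack_diagnosis(rates, capacity):
    """AD sketch: while the victim is overloaded, throttle one attacker
    at a time (the heaviest sender), mimicking the iterative loop."""
    throttled = []
    while rates and sum(rates.values()) > capacity:
        heaviest = max(rates, key=rates.get)
        throttled.append(heaviest)
        rates[heaviest] = 0          # filter applied upstream
    return throttled

def parallel_attack_diagnosis(rates, per_source_limit):
    """PAD sketch: throttle every source over the limit simultaneously."""
    throttled = [s for s, r in rates.items() if r > per_source_limit]
    for s in throttled:
        rates[s] = 0
    return throttled
```

The sketch shows why PAD is the stronger form: AD needs one detect-trace-filter round per attacker, while PAD neutralizes all sources above the limit in a single round.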
The technique has two key drawbacks that make it almost impossible to implement. The first is the need to pass attack signatures upstream to routers securely, which requires a global verification and authentication structure, such as a key infrastructure for attack signatures; this is costly and hard to maintain. The other significant drawback is that the technique depends largely on attack signatures to distinguish between legitimate and illegitimate traffic. This is a challenge because the attack detection module only examines the existing attack in order to develop the attack signatures; it lacks the ability to formulate a signature from the attacked traffic. If the signature lies above the network layer, it will not be…