Evolution Over Time of Network
- Length: 20 pages
- Sources: 20
- Subject: Education - Computers
- Type: Multiple chapters
- Paper: #93470686
Excerpt:
In fact, because of STCP's multiplicative-increase policy, STCP in steady state must induce congestion events approximately every 13.4 round-trip times, regardless of the connection speed. HSTCP induces packet losses at a slower rate than STCP, but still much faster than TCP Reno.
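The 13.4-RTT figure follows directly from STCP's multiplicative-increase/multiplicative-decrease behavior. A minimal Python sketch, assuming the commonly cited STCP constants a = 0.01 (per-ACK increase, roughly a factor of 1.01 per RTT) and b = 0.125 (multiplicative decrease), which are not stated in the text above:

```python
import math

# Assumed STCP parameters: a = 0.01 per-ACK increase, b = 0.125 decrease.
# In steady state the window grows by a factor of (1 + a) per RTT, and a
# loss cuts it by (1 - b), so the loss epoch is the number of RTTs needed
# to regrow from (1 - b) * W back to W -- independent of W itself.
a, b = 0.01, 0.125
rtts_per_loss_epoch = math.log(1 / (1 - b)) / math.log(1 + a)
print(round(rtts_per_loss_epoch, 1))  # ≈ 13.4
```

Because the window W cancels out of the epoch length, the loss rate per unit time grows with the connection speed, which is the behavior the text describes.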
3. Problems of the Existing Delay-based TCP Versions
In contrast, TCP Vegas, Enhanced TCP Vegas, and FAST TCP are delay-based protocols. By relying on changes in queuing-delay measurements to detect changes in available bandwidth, these delay-based protocols achieve higher average throughput with good intra-protocol RTT fairness (C. Jin, 2004). However, they have several deficiencies. For instance, both Vegas and FAST suffer from the reverse-path congestion problem, in which simultaneous forward- and reverse-path traffic on a single bidirectional bottleneck link cannot attain full link utilization. In addition, both Vegas and Enhanced Vegas employ a conservative window-increase strategy of at most one packet per RTT, leading to slow convergence to equilibrium when ample bandwidth is available. Although FAST possesses an aggressive window-increase strategy that yields faster convergence in high-speed networks, we shall see that it has trouble coping with uncertainty in the networking infrastructure.
Similar to Vegas and Enhanced Vegas, FAST TCP attempts to buffer a fixed number, α, of packets in the router queues on the network loop path. In high-speed networks, α must be sufficiently large for a delay-based protocol to measure the queuing delay. But with large values of α, the delay-based protocol imposes additional buffering requirements on the network routers as the number of flows increases; the router queues may not be able to meet the demand. If the buffering requirements are not met, the delay-based protocols suffer losses, which degrade their performance. Conversely, if α is too small, the queuing delay may not be detectable, and convergence to high throughput may be slow.
Ideally, in delay-based schemes a source's value of the set-point α should be dynamically adjusted according to the link capacities, queuing resources, and the number of simultaneous connections sharing the queues. Determining a sensible and effective technique for dynamically setting a possibly time-varying set-point α(t) has remained an open problem. Examples of delay-based schemes include TCP Vegas, Enhanced TCP Vegas, and FAST TCP (C. Jin, 2004). While providing higher throughput than Reno and exhibiting good intra-RTT fairness, the delay-based schemes still have shortcomings in terms of throughput and the selection of a suitable α. In contrast to the marking/loss-based schemes, delay-based schemes primarily do not use marking or loss within their control strategies, often choosing to fall back on the tactics of TCP Reno when marking or loss is detected.
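The window rules described above can be sketched as follows. This is an illustrative Python rendering only; the parameter values (α, β, γ) are assumed for the example and are not taken from the text:

```python
def vegas_update(w, base_rtt, rtt, alpha=2, beta=4):
    # Vegas estimates how many packets the flow keeps queued in the path:
    # diff = (expected throughput - actual throughput) * base_rtt
    diff = (w / base_rtt - w / rtt) * base_rtt
    if diff < alpha:        # too few packets buffered: grow by 1 per RTT
        return w + 1
    elif diff > beta:       # too many packets buffered: shrink by 1 per RTT
        return w - 1
    return w                # within [alpha, beta]: hold steady

def fast_update(w, base_rtt, rtt, alpha=20, gamma=0.5):
    # FAST moves a fraction gamma toward the equilibrium window at which
    # exactly alpha packets sit in the queues, capped at doubling per step.
    return min(2 * w, (1 - gamma) * w + gamma * (base_rtt / rtt * w + alpha))
```

The contrast is visible in the update forms: Vegas changes the window by at most one packet per RTT regardless of how far it is from equilibrium, while FAST's multiplicative step closes a fixed fraction of the gap each update, which is why it converges faster on high-bandwidth paths but is more sensitive to errors in the delay measurements.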
4. Analytical Approaches
In terms of characterizing and providing an analytical understanding of TCP congestion avoidance and control, several approaches based on stochastic modeling, control theory, game theory, and optimization theory have been presented (S. Kunniyur, 2003).
In particular, Frank Kelly gave a general analytical framework based on distributed optimization theory. In terms of providing analytical guidance to TCP congestion-avoidance methods utilizing delay-based feedback, Low (S. H. Low, 2002) developed a duality model of TCP Vegas, interpreting TCP congestion control as a distributed algorithm that solves a global optimization problem, with the round-trip delays acting as pricing information. Through this framework, the resulting performance improvements of TCP Vegas and FAST TCP are better understood. Nonetheless, the development of additional analytical frameworks for TCP congestion avoidance is necessary (S. Moscolo, 2006).
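The duality view can be illustrated with a toy primal-dual iteration in the Kelly style: each source maximizes a logarithmic utility minus a bandwidth cost priced by delay, and the link adjusts its price until demand matches capacity. All numbers here (capacity, utility weights, step size) are invented for the sketch:

```python
def solve(capacity=100.0, alpha=(10.0, 30.0), steps=5000, step_link=0.001):
    """Two sources sharing one link; alpha_i weights source i's utility
    U_i(x) = alpha_i * log(x). The link price plays the role of queuing
    delay in Low's model of Vegas/FAST."""
    price = 0.1
    rates = [1.0, 1.0]
    for _ in range(steps):
        # each source's optimum of alpha_i*log(x) - price*x is x = alpha_i/price
        rates = [a / price for a in alpha]
        # the link raises its price when oversubscribed, lowers it otherwise
        price = max(1e-6, price + step_link * (sum(rates) - capacity))
    return rates, price

rates, price = solve()
# equilibrium: rates split capacity in proportion to alpha (25 and 75)
```

At the fixed point the price equals sum(alpha)/capacity and each source's rate is proportional to its utility weight, which is the weighted proportional fairness that the Kelly framework predicts.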
Network calculus (NC) offers a mathematically rigorous approach to analyzing network performance, permitting a system-theoretic method of decomposing network demands into impulse responses and service curves by using the notion of convolution developed within the context of a certain min-plus algebra. Earlier, in (R. Agrawal, 1999), a window flow-control strategy based on NC with a feedback mechanism was developed, providing results concerning the impact of the window size on the performance of the session. In terms of determining the most advantageous window size, the work by R. Agrawal (1999) merely recognizes that the window size ought to be reduced when the network is congested and increased when extra resources are available. In (C. S. Chang, 2002), the authors extend NC analysis to time-varying settings, providing a framework useful for window flow control; however, they do not develop an optimal controller. In (F. Baccelli, 2000), a (max, +) approach similar to NC-based techniques is used to describe the packet-level dynamics of the loss-based TCP Reno and Tahoe and to calculate the TCP throughput. The work in (H. Kim, 2004) utilizes NC to model and bound the throughput of Reno-type TCP flows in order to speed up simulations (S. Moscolo, 2006).
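The min-plus convolution at the heart of NC is simple to state in discrete time: (f ⊗ g)(t) = min over 0 ≤ s ≤ t of f(s) + g(t − s). A small sketch with assumed example curves, a token-bucket arrival curve and a rate-latency service curve (none of these values come from the cited works):

```python
def min_plus_conv(f, g):
    # Discrete-time min-plus convolution: (f ⊗ g)(t) = min_{0<=s<=t} f(s) + g(t-s).
    # Applying a service curve g to an arrival curve f lower-bounds the output.
    n = min(len(f), len(g))
    return [min(f[s] + g[t - s] for s in range(t + 1)) for t in range(n)]

# assumed example: token-bucket arrivals (burst 5, rate 2) through a
# rate-latency server (rate 4, latency 3)
f = [5 + 2 * t for t in range(10)]
g = [max(0, 4 * (t - 3)) for t in range(10)]
out = min_plus_conv(f, g)
```

In this example the output curve simply reproduces the arrival curve delayed by the server's latency, since the service rate exceeds the arrival rate; the vertical gap between `f` and `out` bounds the backlog, and the horizontal gap bounds the delay.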
In (J. Zhang, 2002), several NC-based analytical tools useful for general resource allocation and congestion control in time-varying networks are developed. In particular, the concept of an impulse response in a certain min-plus algebra is used and extended to characterize each network element, and the methods are applied within a distributed sensor-network scenario.
In a study of Internet traffic published in 1998, the dominant processes transmitting over TCP were file transfer, web, remote login, email, and network news. The applications associated with these processes were the File Transfer Protocol (FTP), the Hypertext Transfer Protocol (HTTP), and TELNET (Willinger, Paxson, and Taqqu, 1998). The study focused on arrival patterns, data load, and duration with respect to packet transfer. The most frequent flow size for HTTP was about 1 KB or less; at the same frequency, FTP flow sizes were about 10 times larger than HTTP's.
Six years later, a flow-based traffic study of Internet applications on a university campus found that the bulk of the data was transferred over TCP. Two sets of data were collected during a year. In each set, TCP dominated the byte and packet counts over the other observed protocols by about 90%. In terms of flows, however, UDP nearly doubled the flow count of TCP in each set, although TCP flows were on average over five times larger than UDP flows. More than 50% of the collected flows lasted less than 1 second. In addition to FTP, new file-transfer-type applications had emerged: peer-to-peer (P2P) and instant messaging (IM) applications had taken over as the most popular applications in terms of flows, packets, and bytes. HTTP remained one of the most popular applications in terms of bytes transmitted, and IM applications dominated in terms of flow duration (Kim, 2004).
In a similar study conducted in 2006, some of the authors of the previous article found that TCP was still the dominant protocol in terms of bytes and packet count. UDP was still the dominant protocol in terms of flows, exceeding the TCP flow count by a factor of two. At the application level, the applications transmitting over TCP had changed only slightly. HTTP was the dominant application, but abnormal traffic over port 80 might have inflated its byte count. One of the most popular P2P applications was eDonkey. They also found that 50% of the traffic flows consisted of 3 packets, carried 500 bytes or less, and lasted 1 second or less (Kim, Won, and Hong, 2006).
One year later, an hourly analysis of user-based network utilization from two Internet providers found that Internet applications transmitting over TCP were dominant. File-sharing applications over TCP dominated in terms of flow frequency and duration, displacing HTTP to second place (De Oliveira, 2007). The same year, a 3-year study of inbound and outbound network flows showed that the overall network traffic was dominated by HTTP flows. This study was done at a university campus where students were discouraged from using file-sharing applications such as P2P. Data were collected in 2000, 2003, and 2006. In every year of collected data, the TCP packet count significantly dominated those of UDP and the Internet Control Message Protocol (ICMP). The authors found that flow bytes and packets were highly correlated, while flow size and duration were independent of each other (Lee and Brownlee, 2007).
In 2006, a study conducted on a campus-wide wireless network showed that the dominant applications were web and P2P, with web applications contributing over 40% more of the total bytes than P2P applications. The study does not mention whether P2P applications were blocked by campus network administrators. The study also categorizes other types of network processes and finds that although many applications do not contribute a significant percentage of the total bytes transferred, their contribution to the total flow count has an impact on network performance (Ploumindis, Papadopouli, and Karagiannis, 2006).
These studies examined the behavior of Internet protocols and popular applications in terms of flows, bytes, packets, and duration. The datasets collected included data from Internet providers and university campus networks.
A weakness of the current TCP slow-start mechanism becomes apparent when there is a large delay-bandwidth product (delay × bandwidth) path. In a network…
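The scale of the problem is easy to quantify. A sketch with assumed example values (a 1 Gbps link with 100 ms RTT and 1500-byte packets; none of these figures appear in the text):

```python
import math

# Assumed example path: 1 Gbps, 100 ms RTT, 1500-byte packets.
bdp_packets = int(1e9 / 8 * 0.1 / 1500)   # delay-bandwidth product in packets

# Slow start doubles cwnd each RTT from an initial window of 1 packet,
# so filling the pipe takes roughly log2(BDP) round trips.
rtts_to_fill = math.ceil(math.log2(bdp_packets))
print(bdp_packets, rtts_to_fill)
```

Even though the growth is exponential, each doubling still costs a full 100 ms round trip on this path, so a short transfer can finish before the window ever reaches the pipe's capacity.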