Congestion is a well-known problem in packet-switched networks like the Internet, where it customarily occurs as packets traverse shared links. It is not uncommon for packets to arrive in bursts from sources in a network. Typically, congestion occurs when packets arrive in bursts or when sources send more data than the network can accommodate. Switching devices have buffers, and buffers help absorb bursty traffic; however, if the bursts persist, the buffers fill up and incoming packets are dropped. Increasing the buffer size may compound the problem, because very large buffers can result in long delays and an eventual congestion collapse. This problem is common to many kinds of networks. The proposed system sets out to prevent congestion in the network by combining traffic shaping with network border monitoring and control mechanisms. It uses the exchange of feedback among edge routers to control unresponsive traffic tending to enter the network, thereby preventing congestion in the network.
1.1 Background to the Study
According to Netcraft Research (2017), by some estimates the number of internet users worldwide is about 3.2 billion, and there are about 971 million websites currently online. The rise of the internet has significantly affected the way we live. New technologies have brought an invasion of smart devices around us. These days we have smart TVs, smart cars, smart billboards, etc. More so, many individuals own more than one smartphone, tablet and so on. Over time, however, there has not been a proportional increase in investment in network infrastructure to sustain this growth of the internet of things. What this means is great pressure on available network infrastructure, which has never really been sufficient. The rise of the internet, as well as the continued growth of access around the world through different technologies, has continued to pose quality of service (QoS) challenges on available network resources. Quality of Service (QoS) for networks is an industry-wide set of standards and mechanisms for ensuring high-quality network performance for critical applications. By using QoS techniques, network administrators can use existing resources efficiently and ensure the required level of service quality (What is QoS, 2009).
Common Quality of Service mechanisms include classification, traffic shaping, link efficiency, traffic policing, admission control, etc. QoS mechanisms are measured based on elements of network performance. The elements of network performance within the scope of QoS include uptime, bandwidth, throughput, delay and packet loss (Bradley, 2018). The TCP host-based congestion control mechanisms have been key to the robustness of the Internet. However, the Internet is no longer a small user community; it has grown enormously, and it is no longer practical to rely on hosts using end-to-end congestion control for best-effort traffic. Unresponsive applications have been identified as a culprit in the challenge of fair utilization of limited network resources and, worse still, developers have not demonstrated readiness to integrate end-to-end congestion control into the development of their Internet applications. The new argument is that the network must now contribute to controlling its own resources and handling congestion, as the end-to-end approach has left the Internet with many pitfalls.
A number of traffic shaping techniques exist which attempt to improve packet transmission. Traffic shaping involves buffering traffic temporarily and dropping packets when suitable to avoid congestion. It also involves enforcing a limit on the bandwidth a connection uses. Although these mechanisms attempt to prioritize network traffic for improvement, they are basically host-to-host network control mechanisms and, alone, are not able to prevent network congestion. It is to address this limitation that an optimized network-based congestion control system is proposed, one which advances QoS beyond the end-to-end devices to the network core and ensures that unresponsive traffic flows are prevented from entering the network in the first place.
The proposed system uses a traffic shaping mechanism as an additional congestion control layer to regulate the flow of potential burst traffic arriving at the network. It also involves the exchange of feedback between routers at the edges of the network in order to detect and control unresponsive traffic flows attempting to enter the network.
1.2 Statement of the Problem
Congestion collapse, a dreaded locked-down condition resulting from network congestion, has been a serious challenge facing network administrators in the management of organisation networks, especially since the upsurge in internet multimedia traffic demand. There is a need for an efficient and reliable system that will combat this menace and ensure continual quality of service. Most congestion control mechanisms which try to prevent network congestion are basically host-based and have not been able to avert the congestion collapse that results from dropped packets. To solve this problem, an improved congestion control system needs to be designed that detects and prevents congestion at the edges of a network, in a time-efficient manner, before it can occur.
1.3 Aim and Objectives of the Study
The aim of this study is to develop a system that can detect and prevent congestion in an internet protocol based network.
The specific objectives are to:
1. Design a system that can detect and prevent congestion collapse in an IP based network.
2. Develop a system that can detect and prevent congestion collapse in a network using a combination of the leaky bucket algorithm, rate control algorithm and time sliding window algorithm for improved QoS.
3. Implement the system using Java programming language.
1.4 Significance of the Study
This study is important in promoting good network quality of service (QoS) in the organisation where it is implemented. It will also be useful to the network administrator whose job has been plagued with incessant complaints of network disruptions due to congestion collapse. The users and administrators of the network will benefit from this study when they employ the optimized QoS mechanism presented by this study in a competitive or limited bandwidth environment. The study will be useful to network professionals in their strategy to be, essentially, more preventive and proactive rather than reactive in achieving their objectives. It will also serve as a future reference for researchers on the subject of network congestion control.
1.5 Scope of the Study
This study centres on network congestion, the state of congestion collapse, consideration of time-efficient network QoS mechanisms, and the use of flow control algorithms for controlled packet delivery. Congestion control is examined in the light of traffic shaping techniques and border-router based control. Other relevant issues related to network congestion control are also discussed, such as out-of-order packet delivery, broadcast storms, transport protocols, and re-transmission (automatic repeat request). Light is also shed on buffer and queue management.
1.6 Limitation of the Study
Although this research was carefully prepared, it is mostly simulated and there was no opportunity to deploy it in a large, multi-node corporate network. Hence, it cannot be guaranteed to be fail-safe.
1.7 Definition of Terms
Network Congestion: The state in which throughput becomes compromised and stifled due to an upsurge in data transmission (Network Congestion Management, 2015).
Congestion Collapse: The state of deteriorated network communication involving dropped packets usually as a result of incoming packets exceeding the available bandwidth.
Quality of Service (QoS): This refers to the overall quality of performance of a network, considering bandwidth, delay, error rate and availability.
Latency: This is the total time it takes a data packet to traverse a network.
Bandwidth management: This is the process of measuring and controlling the communications (traffic, packets) on a network link, to avoid filling the link to capacity or overfilling the link, which would result in network congestion and poor performance of the network (“Bandwidth Management” n.d).
Flow Control: Flow control ensures that the activity of a fast user does not overwhelm the activity of a slow user. Though it is useful in regulating traffic and helpful in reducing congestion, it is not, by itself, an answer to congestion.
Congestion recovery: The restoration of the network to its normal operating state after a congestion.
Congestion avoidance: Detecting imminent congestion and preventing it from happening.
Traffic Management: In computer networking, network traffic control is the process of managing, controlling or reducing the network traffic, particularly Internet bandwidth, e.g. by the network scheduler.
Packet Shaping: This is a computer network traffic management technique which regulates incoming variable or burst datagrams into a desired constant rate.
2.1 Concept of Network Congestion
Network congestion is an issue that has continued to plague computer networks today, whether wired or wireless. Gaya et al. (2015) opined that the advent of super-speed networks has brought about an increased number of network users and applications. This also means increased pressure on existing network infrastructure and, inevitably, an invitation to the problem of network congestion. Network congestion is the state in which throughput is compromised and stifled due to an upsurge in data transmission (Network Congestion Management, 2015). In other words, when there is network congestion, the more data hosts attempt to send, the less data is eventually delivered.
The issue of network congestion has long been identified, though it has never really been successfully defined quantitatively. As more and more smart devices emerge and the internet of things grows, so also does the challenge of network congestion. Network congestion affects the quality of service (QoS) experienced by the user in the form of slow browsing, poor audio and video over IP quality, etc. Service providers are not spared either, as furious subscribers engage customer care and many are likely to dump the service altogether, which is bad business for the provider.
Network congestion occurs when more packets are entering a network than are leaving it. This is a situation that has to be managed to avoid congestion collapse. The service provider needs to achieve a balance between subscriber satisfaction and cost.
When a network becomes congested, the operator can expect numerous support calls or high subscriber loss. Often the operator tries to respond by increasing capacity but, where congestion is not a capacity problem, the damage has already been done. Network access resources are limited, not infinite, and it is not uncommon for demand to exceed capacity. Today, the popular choice of service providers is to over-provision, or “throw bandwidth” at the problem, in order to avoid congestion within their network. The reasons for this are purely financial:
1. Bandwidth has become less expensive. Service providers prefer to overprovision a network rather than risk losing a customer who complains.
2. It is easier to manage an overprovisioned network than one with just enough bandwidth, which demands more of network administrators' time.
As users come and go, so do the packets they send. The performance of the internet is largely governed by these usual activities. By implication, the network will sometimes see traffic spikes that exceed the capacity limit, as a certain number of customers maximize their rates concurrently. Since the surplus cannot be pushed through the link, there are only two things the routing device can do: buffer or drop the packets. Because such traffic bursts are usually limited in time, routers typically place excess packets in a buffer and only drop packets if the buffer is full. The basic notion is that a subsequent traffic reduction will eventually free the queue (Weilz, 2005).
According to Ignaciuk and Bartoszewicz (2013), the absence of standardization in the early days of communication protocol development meant that the initial flow control approaches were built mainly around the particular transmission technology for which they were designed, so research concentrated on the proprietary solutions of different interest groups. One of the earliest congestion control mechanisms was developed for TYMNET. At setup time, a throughput limit is calculated for each virtual circuit and applied at points along the established route. Ignaciuk and Bartoszewicz opined that:
Flow control is obtained by assigning memory quota at each intermediate node and sending transfer permits based on the quota exhaustion between the neighbouring switches. A transmitter sets a counter equal to maximum buffer size (quota), which is decremented with each data piece (character in TYMNET) relayed on the transmission path. Periodically, each node sends a backpressure vector to its neighbours, containing a binary flag for each virtual circuit passing through the node. The flag is set to zero if the assigned buffer is entirely filled with data (the maximum permitted allocation is reached). The transmitter stops data transfer when the counter reaches zero and resumes it once the received backpressure bit for the corresponding virtual circuit is equal to one. Backpressure propagates from node to node back to the source and finally slows it down or turns off (Ignaciuk and Bartoszewicz, 2013, p. 14)
While this flow control approach may be inexpensive and modest, it does not handle larger volumes of data transmission well and may introduce a communication overhead.
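The TYMNET-style quota scheme described in the quotation above can be sketched in a few lines. This is a hedged illustration, not the original TYMNET code: the class name, the per-unit credit counter and the single-bit backpressure handling are assumptions made for clarity.

```java
// Hypothetical sketch of TYMNET-style quota flow control: the transmitter holds
// a counter equal to the next node's buffer quota, decrements it per data unit
// relayed, stops at zero, and resumes when the periodic backpressure bit is one.
class QuotaTransmitter {
    private final int quota; // maximum buffer size at the neighbouring switch
    private int credit;      // remaining permitted data units

    QuotaTransmitter(int quota) {
        this.quota = quota;
        this.credit = quota;
    }

    // Returns true if one data unit (a character in TYMNET) may be relayed now.
    boolean trySend() {
        if (credit == 0) return false; // blocked until backpressure reports space
        credit--;
        return true;
    }

    // Periodic backpressure vector entry for this circuit: 1 means buffer space free.
    void onBackpressureBit(int flag) {
        if (flag == 1) credit = quota; // resume with a fresh quota
    }
}
```

The same blocking-and-resuming behaviour propagates hop by hop back to the source, which is how backpressure eventually slows the sender down.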
Houmkozlis and Rovithakis (2013) opined that “managing Internet traffic and resources to enforce the necessary network characteristics to permit the deployment of the Future Internet is recognized as a crucial task, whose solution (congestion control algorithm) is imperative”. Houmkozlis and Rovithakis believe that congestion control presents some new challenges that have to be confronted on the way to the future internet. Some of these are reviewed below:
Heterogeneity: According to Houmkozlis and Rovithakis (2012), the internet today is an aggregation of highly heterogeneous networks created by diverse technologies that bring about high variations in link and path characteristics. For example, “link capacities may range from several Kbps in low speed radio links, to several Gbps in high-speed optical links”. Also, delays range from about 1 ms on Ethernet connections to several seconds on satellite links. As such, the transmission times of hosts and the available bandwidth may differ remarkably. Future work on congestion control must effectively tackle the heterogeneity problem without losing compatibility (Houmkozlis and Rovithakis, 2012).
Stability: Houmkozlis et al. (2012) have stated that stability implies that “introducing a bounded input, the network’s output should also remain bounded”. From the network standpoint, though, that is not entirely correct, as the presence of strong fluctuations in the link buffers is not factored in. They believe that “asymptotic stability”, which suggests “convergence to equilibrium”, offers the proper perception for congestion management. According to Houmkozlis et al. (2012), the major criterion for achieving the goal of stability is an effective “mathematical representation of the internet procedure we want to control”, such that there is a relation between the control and controlled variables. Houmkozlis et al. (2012) stated that:
Clearly, mathematical modeling of the internet is a tedious task owing to its inherent complexity, as well as its time-varying and dynamic character. Furthermore, extreme modeling complexity raises significantly the difficulty of designing a controller following formal control-system methods. It is therefore common practice to propose control schemes based on simplifications in both the operating assumptions and in the actual mathematical representation of the internet, hoping that the produced discrepancies will be taken care of by the robustness inherited when closing the loop.
Network models are therefore simulated extensively, under appropriate operating setups, as a way of verification.
Fairness: One challenge for internet solutions is fairness. In network designs, fairness measurements aid in determining whether a fair share of network resources is delivered to users or applications. Attaining fairness among network applications and users while at the same time guaranteeing efficient network utilization has always been a big challenge.
Houmkozlis and Rovithakis (2012) opined that network congestion control mechanisms should ensure fair distribution of resources among contending nodes, and this should be in addition to ensuring stable equilibrium.
Robustness: Much as high link utilization should be ensured in congestion control, as well as fair network resource sharing, it is important to maintain robust operation under variations in traffic and critical network variables, e.g., buffer capacity, link capacity, etc. Nonetheless, complication in the network should be avoided as much as possible, and performance should not be compromised.
2.2 Congestion Control Misconceptions
Clearly, congestion follows when data traffic exceeds available network resources. It is believed, hence, that as resources become more affordable congestion will be handled automatically. According to Jain (1990), this has led to the misconception that:
1. As memory becomes more affordable congestion will be resolved automatically since it occurs because of a limited buffer capacity.
2. Congestion will be resolved as high-speed connections become available because it results from slow links.
3. Congestion will be resolved automatically as faster processors emerge and are deployed because it is caused by low processing speed.
Contrary to these beliefs, more congestion and poorer performance may result if concerted efforts are not made towards an appropriate protocol design. High-memory-capacity devices and low-memory-capacity devices are both vulnerable to network congestion. In high-memory devices, packets are buffered and delayed in long queues until they time out and have to be retransmitted, whereas in low-memory devices excess traffic overflows the buffers and is discarded (Jain, 1990).
2.3 Related Works
Floyd et al. (2000) proposed an “equation-based congestion control for unicast applications”. According to them, the Transmission Control Protocol (TCP) has been doing well in managing most “best-effort traffic” in today’s internet. Nevertheless, a congestion control mechanism that is compatible with TCP but avoids reacting to a packet drop by halving the transmission rate would be a good way of handling best-effort unicast streaming multimedia traffic (Floyd et al., 2000). With this mechanism, the source regulates its sending rate based on the measured loss event rate, where a loss event is the situation in which some packets are dropped within one round-trip time (Floyd et al., 2000).
Yang (2011) presented a paper on “Network Congestion Control”. He shared the opinion that in shared networks (e.g. the Internet), the nodes ought to react to congestion by adjusting their transmission rates, thereby preventing congestion collapse and maximizing the available network resources. Yang (2011) believed that the internet today is robust partly due to TCP’s end-to-end congestion control mechanisms. Much as TCP congestion control performs well for large data transfer applications, it does not work well for newer applications, which find TCP’s behaviour of halving the transmission rate in response to congestion too harsh. As such, since internet traffic is mainly TCP based, it is pertinent that emerging congestion control mechanisms still be TCP compatible.
Yang (2011) then examined the fairness, aggressiveness and responsiveness of TCP, General Additive Increase and Multiplicative Decrease (GAIMD) and two other typical TCP-compatible congestion control protocols. These protocols were analysed and simulated and their responses to network changes closely observed. Yang (2011) considered the inherent instabilities in a static network environment and studied “protocol responsiveness and aggressiveness by evaluating their responses to a step increase of network congestion and a step increase of available bandwidth”. Another problem Yang (2011) examined was congestion control in multicast environments. Multicast sessions may have a large number of receivers with various or heterogeneous receiving capacities, determined by the fairness requirement and by device capacity. Different multi-rate schemes based on layering or replication were used to absorb this heterogeneity.
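The additive-increase, multiplicative-decrease (AIMD) behaviour that TCP and GAIMD share, including the "halving on congestion" that newer applications find harsh, can be sketched as follows. This is an illustrative model, not code from any of the cited works; the class name and the alpha/beta parameters are assumptions (TCP corresponds roughly to alpha = 1 segment per RTT and beta = 0.5).

```java
// A minimal AIMD sketch: increase the rate linearly while no loss is seen,
// and cut it multiplicatively when a loss (congestion signal) occurs.
class Aimd {
    private double rate;        // current sending rate, e.g. packets per RTT
    private final double alpha; // additive increase step per loss-free RTT
    private final double beta;  // multiplicative decrease factor on loss

    Aimd(double initialRate, double alpha, double beta) {
        this.rate = initialRate;
        this.alpha = alpha;
        this.beta = beta;
    }

    // No loss this round-trip: probe for extra bandwidth linearly.
    double onLossFreeRtt() {
        rate += alpha;
        return rate;
    }

    // Loss detected: back off multiplicatively (TCP uses beta = 0.5,
    // i.e. the rate-halving discussed above).
    double onLoss() {
        rate *= beta;
        return rate;
    }
}
```

GAIMD generalizes this by choosing alpha and beta other than (1, 0.5) while remaining TCP-compatible in long-run throughput.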
Sharadchandra et al. (2017) presented a congestion controlling system that uses the network border patrol (NBP). NBP is a congestion control mechanism implemented at the network layer that controls congestion collapse by “a combination of per flow rate monitoring at an out-router and per flow rate control at an in-router”. This is done using forward and backward feedback instructions from a feedback controller in the out-router. The out-router sends backward feedback to the in-router to report the rate at which each flow’s packets are exiting the network, and the in-router accordingly regulates the rate at which packets enter the network. According to Sharadchandra et al. (2017), two unique types of routers, called edge routers, are introduced to the network. An edge router may be viewed as an in-router or as an out-router depending on which flow it is operating on. For example, an edge router operating on a flow entering a network is termed an in-router, whereas an edge router operating on a flow leaving a network is termed an out-router. The NBP framework uses the exchange of forward and backward feedback between routers to detect and limit unresponsive packets entering the network, thereby preventing congestion from occurring in the first place.
Mawhinney et al. (2004) designed a network congestion control mechanism that smooths the transfer of data packets in a network by monitoring the data packets leaving the network for congestion indications, and rate-controlling the host application sessions based on such congestion indications or notifications to relieve the congestion. This design is quite similar to the congestion control using network border patrol designed by Sharadchandra et al. (2017), except that, in the former, the rate controlling is implemented at the host end and not in the network. According to Mawhinney et al. (2004), the system can prevent congestion from reaching the point where data has to be discarded or packets dropped. The host sessions are divided into mission-critical and non-mission-critical sessions. The session types are prioritized in such a way that, during periods of congestion, the mission-critical sessions remain unaffected while the non-mission-critical sessions are rate controlled. The mission-critical sessions basically enjoy a reserved portion of the available bandwidth, even up to the point of congestion.
Firoiu et al. (2001) examined internet congestion control schemes generally, and random early detection (RED) in particular. They first examined the current proposals for RED implementation, and then identified other fundamental problems such as the generation of bursty traffic fluctuations and redundant overhead in the fast-path forwarding. Firoiu et al. (2001) modelled RED as a feedback control system and determined essential laws governing TCP/IP traffic networks. Based on this understanding, they derived a set of guidelines for the design and implementation of congestion control modules, such as RED, in routers.
Kloth (2008) developed techniques for congestion control in IP networks such as a Fibre Channel network. Methods are provided to detect congestion at a host. As a controller sends packets through a link, the time elapsed between sending and receiving is measured. If the elapsed time is large, the path towards the destination is assumed to be congested and the port is blocked from receiving subsequent data.
In the Fibre Channel case, a data sequence is received at a buffer controller, with a source matching one of the many ports coupled to the buffer controller and a destination accessible via a link also coupled to the buffer controller. Traffic from the numerous ports attached to the buffer controller shares the link to reach different destinations. The received packets are forwarded to the link and a transmission acknowledgment is received. Depending on when the acknowledgement is provided, the port associated with the received packets is blocked (Kloth, 2008).
In another example, a congestion control setup in a Fibre Channel network is provided. The setup involves a buffer controller and various input ports. The input ports are designed to receive packets with destinations accessible via a shared resource. The buffer controller is designed to receive packets from the various input ports, send the packets through the link and receive acknowledgements from the link. If an acknowledgement is not provided within a specified time, the input port matching the packets is blocked.
Rejaie et al. (2002) opined that the stability and robustness of today’s internet is mostly a function of end-to-end congestion control schemes, and that internet traffic today is mostly TCP and will remain so for quite a while. Therefore, it is important for new applications to be “TCP friendly”. Rejaie et al. (2002) presented a rate adaptation protocol (RAP) which is TCP friendly and employs TCP’s additive-increase, multiplicative-decrease (AIMD) algorithm. RAP is well suited to unicast real-time playback streams, and the key goal was to separate network congestion control from application-level reliability while maintaining fairness and TCP compatibility.
After evaluating RAP through thorough simulation, it was established that bandwidth is usually evenly shared between RAP traffic and TCP traffic. Rejaie et al. (2002) stated that “unfairness to TCP traffic is directly determined by how TCP diverges from the AIMD algorithm”. Though RAP performs similarly to TCP in a number of possible situations, a simple rate control scheme was devised to widen the scope.
Floyd et al. (1999) presented a paper which considered the importance of sustaining the deployment of congestion-controlled best-effort traffic on the internet, highlighting the bad effects of adopting non-congestion-controlled best-effort traffic. According to Floyd et al. (1999), these effects range from severe unfairness against other TCP traffic to the possibility of congestion collapse. They believe that router mechanisms are necessary to spot and limit the bandwidth of some high-bandwidth best-effort traffic in a congestion situation, and they consequently promote the inclusion of end-to-end congestion control in the development of tomorrow’s protocols. Various approaches for determining which flows are suitable for bandwidth regulation were examined in the paper. A flow is regarded as not “TCP-friendly” if, over a long period, its arrival rate is higher than that of any compliant TCP flow under the same conditions. A flow is regarded as unresponsive if it does not slow down its traffic in reaction to an increased packet drop rate, and a flow is regarded as using a disproportionate share of bandwidth if it uses significantly more bandwidth than other flows in a time of congestion.
MATERIALS AND METHODS
3.1 Analysis of the Existing System
Sharadchandra et al. (2017) attempted to address the integral issues associated with end-to-end congestion control schemes by taking congestion control away from the host systems (end-to-end) into the network, using a system they called network border patrol (NBP). Network border patrol involves the use of entry and exit routers (in-routers and out-routers respectively) which compare, at the border of the network, the rates at which packets are entering and leaving the network. A rate control algorithm is responsible for monitoring and regulating the flows entering and leaving the network. The rate control algorithm, much like TCP congestion control, operates in two phases: the slow-start phase and the congestion-avoidance phase. Flows entering the network typically start in the slow-start phase and proceed to the congestion-avoidance phase only when the flow has signalled imminent congestion. The rate control algorithm is an effective but relatively complex algorithm and introduces a communication overhead in the network.
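The two-phase rate control just described can be sketched as follows. This is an illustrative model only: the class name, the doubling in slow start, the halving on a congestion signal and the unit linear increase are assumptions for clarity, not constants taken from Sharadchandra et al. (2017).

```java
// Sketch of a two-phase rate control algorithm: a flow starts in slow start
// (rate grows exponentially each feedback interval) and moves permanently to
// congestion avoidance (linear growth, multiplicative back-off) once imminent
// congestion has been signalled.
class RateController {
    enum Phase { SLOW_START, CONGESTION_AVOIDANCE }

    private Phase phase = Phase.SLOW_START;
    private double rate; // permitted ingress rate for the flow

    RateController(double initialRate) {
        this.rate = initialRate;
    }

    // Called once per feedback exchange between the edge routers.
    double onFeedback(boolean congestionImminent) {
        if (congestionImminent) {
            phase = Phase.CONGESTION_AVOIDANCE; // leave slow start for good
            rate /= 2.0;                        // back off
        } else if (phase == Phase.SLOW_START) {
            rate *= 2.0;                        // exponential probing
        } else {
            rate += 1.0;                        // cautious linear increase
        }
        return rate;
    }

    Phase phase() {
        return phase;
    }
}
```

The proposed system's leaky bucket layer is intended to smooth bursts before they reach this controller, so that the slower congestion-avoidance phase is triggered less often.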
According to Sharadchandra et al. (2017), the NBP framework offers an appropriate strategy involving the exchange of forward and backward feedback between routers at the edges of a network so as to limit unresponsive traffic flows even before they enter the network, thus averting congestion within the network. Due to the growing usage of bandwidth-hungry applications, congestion in networks has increased remarkably. Though a number of traffic scheduling mechanisms exist to manage congestion, they are mostly end-to-end or host-oriented and are not able to avert congestion collapse. Sadly, the internet as it is features strict adherence to end-to-end congestion control, and most existing congestion control mechanisms are TCP based and implemented in the hosts. This has exposed the internet to two fundamental problems: congestion collapse due to dropped packets, and unfairness caused by unresponsive applications.
3.1.1 Network Border Patrol (NBP)
NBP compares, at the edges of a network, the rates at which packets from applications are entering and leaving the network. The network is possibly buffering when packets are leaving more slowly than they are entering, meaning more packets are arriving at the network than it can accommodate. Sharadchandra et al. (2017) opined that NBP comes in by “patrolling the network’s borders”, observing the transmission rates at the input and output of the main router and making sure that a flow’s packets do not enter the network faster than they are able to leave it. This way, congestion collapse is prevented, as unresponsive flows are not allowed to enter the network. However, Sudhakar (2012) stated that “NBP introduced an added communication overhead, in order for an edge router to know the rate at which packets are leaving the network and must exchange feedback with other edge routers”. Figure 3.1 shows the Internet architecture assumed by NBP and figure 3.2 shows the system design of the existing system.
3.1.2 Forward/Backward Feedback Packet
NBP features a feedback controller which sends forward and backward feedback to the edge routers and determines when feedback packets are exchanged between them. Feedback packets resemble ICMP packets and are central to the implementation of NBP. The out-routers use forward feedback to determine which in-router is the source of particular flows being monitored, and use backward feedback to send per-flow bit rates to the in-routers. Lastly, the in-routers use the forward and backward feedback packets to detect imminent congestion by observing and comparing round-trip times.
The in-router's forward feedback packet contains a time stamp and a list of the flows originating from the in-router. The time stamp field is used to compute the round-trip time between two edge routers and to determine whether packets are beginning to take longer to traverse the network, thereby signalling possible congestion. When the out-router receives a forward feedback packet, it immediately produces a backward feedback packet and sends it to the in-router. The backward feedback packet contains the forward feedback packet's original time stamp, the list of observed bit rates, and a hop count. The hop count is the number of edge routers along the delivery path and is determined by the out-router using the time-to-live (TTL) field of packets arriving at the edge router.
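The two calculations described above, round-trip time from the echoed timestamp and hop count from the TTL field, can be sketched as follows. This is a hedged illustration: the class name, the initial TTL of 64 and the "twice the baseline" congestion threshold are assumptions, not values specified in the NBP papers.

```java
// Sketch of how an edge router might use the feedback packet fields:
// RTT from the echoed timestamp, hop count from the TTL decrement, and a
// simple rising-RTT test as a congestion signal.
class FeedbackMonitor {
    // Assumed initial TTL set by the sending edge router (illustrative choice).
    static final int INITIAL_TTL = 64;

    // Backward feedback echoes the forward packet's original timestamp,
    // so the in-router can compute the round trip directly.
    static long roundTripMillis(long echoedTimestampMillis, long nowMillis) {
        return nowMillis - echoedTimestampMillis;
    }

    // Each router on the path decrements TTL by one, so the decrement
    // gives the number of hops traversed.
    static int hopCount(int receivedTtl) {
        return INITIAL_TTL - receivedTtl;
    }

    // Packets taking markedly longer than the baseline to traverse the
    // network signal possible congestion (threshold is an assumption).
    static boolean congestionImminent(long rttMillis, long baselineMillis) {
        return rttMillis > 2 * baselineMillis;
    }
}
```

In the real system the in-router would feed this congestion signal into its per-flow rate control rather than act on a single measurement.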
3.1.3 Advantages of the Existing System
The existing system has the following advantages:
1. NBP seeks to prevent congestion in the network before it occurs.
2. NBP properly complements congestion control at the end nodes by extending congestion control into the intermediary network.
Figure 3.1: Internet Architecture assumed by NBP (Sharadchandra et al., 2017).
Figure 3.2: System design of Network Congestion Controlling using Network Border Protocol (Sharadchandra et al., 2017).
3.1.4 Disadvantages of the Existing System
1. Lacks flexibility and scalability
2. The algorithm employed by NBP is relatively complex and can be time-consuming when fully triggered.
3. It lacks some important QoS features.
3.2 Proposed System
The proposed system employs an additional layer of congestion control, the “leaky bucket algorithm”, to efficiently avert congestion collapse in an IP-based network.
The leaky bucket algorithm is a traffic-shaping algorithm applied to smooth out burst traffic. It smooths the initial flow of burst traffic into the network, thereby preventing the rate control algorithm from fully triggering its ‘thorough but slow’ congestion-avoidance stage every time a burst is experienced. This avoids the added communication overhead a fully invoked rate control algorithm introduces. Figure 3.3 graphically describes the leaky bucket algorithm.
The leaky bucket normalizes the packet flow from the router’s input port to its output port, as described in figure 3.3. Consider a bucket with a puncture at the bottom: regardless of how bursty the inflow to the bucket is, it releases the flow at a controlled rate. The capacity of the bucket is assumed to be infinite, so no packets are dropped because the bucket overfills.
Figure 3.3: Graphical representation of the leaky bucket algorithm as adopted.
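The leaky bucket described above can be sketched in a few lines. This is a minimal model under the chapter’s assumptions (unbounded queue, fixed drain rate in packets per second); the class and parameter names are illustrative:

```python
class LeakyBucket:
    """Traffic shaper: packets drain at a fixed rate regardless of burst
    arrivals. The queue is unbounded, matching the infinite-bucket
    assumption above, so no packets are ever dropped here."""

    def __init__(self, drain_rate_pps: float):
        self.drain_rate = drain_rate_pps   # packets released per second
        self.queue = []

    def arrive(self, packet):
        self.queue.append(packet)          # bursts simply accumulate

    def drain(self, elapsed_seconds: float):
        """Release up to drain_rate * elapsed packets, in FIFO order."""
        n = min(len(self.queue), int(self.drain_rate * elapsed_seconds))
        released, self.queue = self.queue[:n], self.queue[n:]
        return released
```

For example, a bucket draining at 10 packets per second that receives a burst of 25 packets releases them over three seconds rather than all at once, which is exactly the smoothing effect the proposed system relies on.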
3.2.1 Advantages of the Proposed System
1. Implementation of Network Border Patrol ensures congestion is prevented before it happens.
2. Use of multiple layers of rate control algorithms ensures control over the problem of burst traffic flow in the network.
3. It empowers the network administrator to be proactive in managing the network.
Figure 3.5 shows the architecture of the Improved Congestion Control Mechanism using Network Border Protocol.
3.3 Networking Basics
In this sub-section, the system is analyzed based on its critical demands.
3.3.1 Network Packet
In networking, data are bundled into packets. A network packet is the unit of data that is routed between a source and a destination on the Internet or any other packet-switched network. When a message is sent from a source to a destination in a packet-switched network, the TCP layer divides it into pieces of an efficient size for routing. Each packet is uniquely numbered and includes the Internet address of the destination. The packets may travel different routes through the network; the TCP layer reassembles them into the original message when they arrive at their destination (Rouse, n.d.).
3.3.2 Transmission Control Protocol (TCP).
TCP is a connection-oriented Layer 4 protocol designed to allow packets of data to be sent reliably across the internet. TCP complements IP by providing consistent, stream-oriented connections and can also support multiple concurrent upper-layer exchanges. Most Internet application protocols are built on TCP, which in turn runs over IP, hence the name TCP/IP. Figures 3.4 and 3.5 show the TCP packet format and IP packet format respectively.
The TCP packet format consists of these fields:
a. Source and Destination Port fields: Specifies the end-points of the link.
b. Sequence Number field: Bears the number allocated to the first byte of data in the current message. It can sometimes be used to identify an initial sequence number to be used for the upcoming transmission.
c. Acknowledgement Number field: Bears the next sequence number that the segment sender expects to receive, if the ACK control bit is set.
d. Data Offset: States how many 32-bit words are contained in the TCP header.
e. Reserved field: This is for future use and value must be 0.
f. Flags field: Contains the various flags.
g. Window field: Identifies the size of buffer space available for incoming data.
Figure 3.4 TCP packet format. (www.techrepublic.com)
h. Checksum field: Indicates whether the header was damaged in transit.
i. Urgent pointer field: Points to the first urgent data byte in the packet.
j. Options field: Specifies various TCP options.
k. Data field: Contains upper-layer information.
TCP packets feature several mechanisms to confirm reliability and control of data flow:
a. Streams: Data is structured as a stream of bytes, like a file.
b. Reliable delivery: Sequence numbers coordinate which data has been sent and received; TCP retransmits data that has been lost.
c. Network adaptation: TCP adapts to network performance to maximize throughput.
d. Flow control: Manages buffers to avoid traffic overflow; fast sources are throttled to accommodate slower receivers.
e. Round-trip time estimation: TCP computes estimate of how long it should take to receive an acknowledgement, and retransmits if this time is exceeded.
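The round-trip time estimation in item (e) is standardized in RFC 6298: TCP keeps an exponentially weighted moving average of the RTT and its variance, and retransmits when the resulting timeout expires. A minimal sketch, with the class name assumed for illustration:

```python
ALPHA, BETA = 0.125, 0.25   # smoothing gains from RFC 6298

class RttEstimator:
    """Smoothed RTT and retransmission timeout, in the style of RFC 6298."""

    def __init__(self):
        self.srtt = None      # smoothed round-trip time
        self.rttvar = None    # round-trip time variance

    def sample(self, rtt: float) -> float:
        """Feed one RTT measurement; return the retransmission timeout."""
        if self.srtt is None:                 # first measurement
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = (1 - BETA) * self.rttvar + BETA * abs(self.srtt - rtt)
            self.srtt = (1 - ALPHA) * self.srtt + ALPHA * rtt
        return self.srtt + 4 * self.rttvar    # RTO: retransmit after this long
```

If an acknowledgement does not arrive within the returned timeout, the segment is retransmitted, which is the behavior item (e) describes.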
3.3.3 Internet Protocol (IP)
IP is the Layer 3 protocol, the set of rules by which data is sent from one node to another on the internet. IP, along with TCP, forms the foundation of the Internet protocol suite. Figure 3.5 shows the IP packet format.
The IP packet format consists of these fields:
a. Version field: Shows the IP version.
b. IP Header Length field: Shows how many 32-bit words are in the IP header.
c. Type-of-service field: Indicates how the current datagram is to be handled.
d. Total Length field: Indicates the IP Packet length.
e. Identification field: Identifies the current datagram.
f. Time-to-live field: Maintains a counter that keeps packets from looping endlessly.
g. Protocol field: Indicates which protocol will receive incoming packets after IP processing is complete.
h. Header Checksum field: Ensures IP header reliability.
i. Source Address field: Indicates the source node.
j. Destination Address field: Indicates the destination node.
k. Options field: Allows support for various options.
l. Data field: Bears upper-layer information.
3.3.4 Network Socket
A socket is one endpoint of a two-way communication link between two programs running on the network. A socket is bound to a port number so that the TCP layer can identify the application to which data is destined. The endpoint comprises an IP address and a port number. A network socket is similar to an electrical socket in the home: anything that understands the standard protocol can “plug in” and communicate.
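The “plug in and communicate” idea can be shown with Python’s standard `socket` module. Here `socket.socketpair()` creates two already-connected endpoints in one process, which is enough to demonstrate the concept without a real network:

```python
import socket

# Two connected socket endpoints: whatever one sends, the other receives.
a, b = socket.socketpair()
a.sendall(b"hello")
received = b.recv(1024)
print(received)        # the bytes sent by the peer endpoint
a.close()
b.close()
```

Over a real network the two endpoints would instead be created with `socket.socket()`, bound to an IP address and port, and connected across hosts, but the send/receive interface is the same.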
Figure 3.5 IP packet format. (www.techrepublic.com)
3.4 Process of the Proposed System
The proposed system is composed of five modules: the source module, in-router module, main-router module, out-router module and destination module.
i. Source Module: This module sends the packet to the in-router.
ii. In-router Module: This module is an edge router. The proposed system avoids congestion collapse using a combination of the leaky bucket algorithm and a rate control algorithm: the leaky bucket operates as a per-flow traffic shaper, while rate control allows an in-router to police the rate at which each flow’s packets enter the network.
iii. Router Module: This module accepts the packet from the in-router and routes it to the out-router.
iv. Out-router Module: This module is also an edge router; it operates on flows leaving the network. Rate monitoring allows an out-router to determine how rapidly each flow’s packets are leaving the network. The out-router ensures in-order delivery of packets using the Time Sliding Window (TSW) algorithm, and contains a rate monitor and a feedback controller.
v. Destination Module: This module accepts packets from the out-router and delivers them to the destination machine.
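The Time Sliding Window algorithm used by the out-router module above is, at its core, a rate estimator in the style of Clark and Fang (also used in RFC 2859): each arriving packet updates a running average over a sliding time window. A minimal sketch, where the window length and byte units are assumptions for illustration:

```python
class TimeSlidingWindow:
    """TSW rate estimator: smooths the measured per-flow egress rate over a
    configurable window rather than reacting to every individual burst."""

    def __init__(self, win_length: float, initial_rate: float = 0.0):
        self.win_length = win_length   # averaging window, in seconds
        self.avg_rate = initial_rate   # estimated rate, bytes per second
        self.t_front = 0.0             # time of the most recent update

    def update(self, pkt_bytes: int, now: float) -> float:
        """Fold one arriving packet into the sliding-window average."""
        bytes_in_win = self.avg_rate * self.win_length + pkt_bytes
        self.avg_rate = bytes_in_win / (self.win_length + (now - self.t_front))
        self.t_front = now
        return self.avg_rate
```

An out-router would feed each departing packet of a flow into `update` and report the resulting per-flow rate back to the in-routers via backward feedback.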
3.5 Process Description of the Proposed System
The process flow of the proposed system is further detailed in this section.
i. Source module: Sends data into the network.
a) Input: message to be transmitted to the destination node.
b) Algorithm: not applicable
c) Output: Packet with appropriate information for delivery to the destination node.
ii. In-router module: Using leaky bucket and rate control algorithm to regulate packet traffic.
a) Input: Packets from the source, whose transmission rate is to be determined.
b) Algorithm: Leaky bucket algorithm and rate control algorithm.
c) Output: Packets to be sent to the main-router.
iii. Router Module
a) Input: Receives data from neighboring nodes and transfers it to other neighboring nodes.
b) Algorithm: Not applicable.
c) Output: Transfer packets to nearby nodes.
iv. Out-router Module: Uses rate control feedback and the time sliding window algorithm to regulate traffic and ensure in-order delivery of packets, respectively.
a) Input: Receives packets flow in the network.
b) Algorithm: Time sliding window
c) Output: Packets are sent to destination.
Figure 3.6: System design of the Improved Network Congestion Controlling using Network Border Protocol.
v. Destination: Receives packets from the neighboring nodes.
a) Input: Receive message from the out-router.
b) Algorithm: not applicable
c) Output: formatted packets bearing complete message.