Essay: Improving the quality of service audio for real time media

Essay details:

  • Subject area(s): Information technology essays
  • Published on: February 29, 2016


Real time media applications such as image sharing, video conferencing and online chat are increasing in usage. These bandwidth intensive applications place high demands on a network, and the quality experienced by the user is often suboptimal.
In a traditional network stack, data from an application is transmitted in the order that it is received. This thesis proposes a scheme called the 'Priority Packet Scheduling Scheme' (PPSS), in which the most important data is transmitted first and data that will not reach the receiver before an expiry time is not transmitted at all. For example, it has been shown that images matter more to users than other data in applications such as WhatsApp, Viber and WeChat [1]. PPSS can be thought of as Quality of Service (QoS) within an application's data stream; network routers, by contrast, provide QoS to whole streams but differentiate neither between data within a stream nor which data the end nodes transmit. PPSS only needs to be implemented on the sender, so much of the benefit for one-way transmission can be obtained without changing existing clients.
Data transmission end-to-end delay is of vital importance in Wireless Sensor Networks (WSNs) when scheduling different types of packets, such as real-time and non-real-time data packets, at sensor nodes. Most existing packet-scheduling mechanisms for WSNs use First Come First Served (FCFS), non-preemptive priority or preemptive priority scheduling algorithms. These algorithms suffer from: high processing overhead and long end-to-end transmission delay under the FCFS concept; starvation of high-priority real-time packets while a large packet is being transmitted under non-preemptive priority scheduling; starvation of non-real-time packets under the probable continuous arrival of real-time data in preemptive priority scheduling; and improper allocation of packets to queues in multilevel queue scheduling algorithms. Moreover, these algorithms do not adapt dynamically to the changing requirements of WSN applications, since their scheduling policies are predetermined [2].
In this work, we propose a dynamic Priority Packet Scheduling Scheme (PPSS). Real-time packets are placed into the highest-priority queue and can preempt data packets in other queues. Non-real-time packets are placed into queues based on a threshold on their estimated processing time. We evaluate the performance of the proposed PPSS in the ns-2 network simulator for real-time and non-real-time data. Simulation results show that PPSS outperforms conventional schemes in terms of average data waiting time and end-to-end delay.
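A minimal sketch of the queueing discipline just described, assuming a two-band split for non-real-time packets around a processing-time threshold. The class and parameter names here are illustrative assumptions, not the thesis implementation, and preemption of a packet already being transmitted is not modelled.

```python
import heapq
import itertools

REAL_TIME, NON_REAL_TIME = 0, 1  # lower value = higher scheduling priority

class PPSSQueue:
    """Sketch: real-time packets outrank all non-real-time packets;
    short non-real-time jobs go ahead of long ones."""

    def __init__(self, threshold_ms=5.0):
        self._heap = []
        self._counter = itertools.count()   # FIFO tie-breaking within a band
        self._threshold_ms = threshold_ms

    def enqueue(self, payload, real_time, est_processing_ms=0.0):
        if real_time:
            rank = (REAL_TIME, 0)
        else:
            # Band 0 for quick jobs, band 1 for jobs over the threshold.
            band = 0 if est_processing_ms <= self._threshold_ms else 1
            rank = (NON_REAL_TIME, band)
        heapq.heappush(self._heap, (rank, next(self._counter), payload))

    def dequeue(self):
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]
```

Real-time packets always dequeue first; among non-real-time packets, those under the threshold go ahead of longer ones, with FIFO order inside each band.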
I would like to thank for his support and guidance as my supervisor during different periods of my study.
Without the support of Dr. L Rathaia, Chairman of the Vignan Group of Institutions, and Mr. B Shravan, CEO of Vignan Group Hyderabad, I could not have completed the work. Principal Dr. M Venkata Ramana gave advice on both the thesis and my research, and supported me in every way.
There were others also who reviewed papers and my thesis and gave many helpful suggestions along the way. Thank you all.
I would also like to thank all the members of the Vignan Institute of Technology & Science, Deshmukhi who supported me in many different ways.
I would like to thank Mrs Kanakarathan, my mother, my brothers N Deepak and N Madhukar Reddy, and my in-laws and family members for their continuous support during my research work. I especially thank Mrs. P Sireesha, my wife, for her support in so many ways; without it I wouldn't have been able to undertake this work, and she convinced me to keep going despite all the difficulties life throws at us.
Contents

1 Introduction
1.1 The problem
1.1.1 Increasing media traffic and congestion control
1.1.2 The proposed solution
1.2 Overview of thesis
1.3 Contribution of this work
2 Background
2.1 Introduction
2.2 Congestion
2.2.1 Media applications and congestion
2.2.2 Approaches to Ensuring Service Quality
2.3 Protocols
2.3.1 TCP
2.3.2 UDP
2.3.3 RTP
2.3.4 SCTP
2.3.5 DCCP
2.4 Network protocol interaction with media applications
2.4.1 The use of layers with congestion
2.4.2 Transmission queues
2.5 Summary
3 Proposed solution
3.1 Introduction
3.2 Use of DCCP
3.2.1 Rationale
3.2.2 TFRC congestion control
3.3 Sending the Best Packet Next
3.3.1 Not all packets are created equal
3.3.2 Passing information to the operating system
3.3.3 Allow altered packet sending and discard order
3.4 SBPN1 algorithm
3.5 SBPN2
3.6 Summary
4 Implementation
4.1 Introduction
4.2 Implementation of DCCP
4.2.1 History
4.2.2 DCCP Tools
4.2.3 Implementation challenges
4.3 Implementation of SBPN in Linux DCCP
4.4 Video conference model
4.4.1 Data captured
4.4.2 Ekiga results
4.4.3 MSN Messenger results
4.4.4 Skype results
4.4.5 Traffic model
4.5 Building of test framework
4.5.1 Netem
4.5.2 Test applications
4.6 Measuring audio quality
4.6.1 On time arrival
4.6.2 Mean Opinion Score
4.7 Summary
5 Results from SBPN
5.1 Introduction
5.2 Testing with loss
5.2.1 SBPN1
5.2.2 Testing SBPN2
5.2.3 Analysing packets transmitted
5.3 Testing with congestion
5.3.1 Testing SBPN2 with congestion
5.3.2 Testing differing queue lengths with SBPN2
5.3.3 Testing SBPN2 with varying RTTs
5.4 Summary
6 Faster Restart and LIFO
6.1 Theory
6.2 Results
6.3 Introduction
6.4 SBPN with TFRC Faster Restart
6.5 LIFO
7 Ring buffers
7.1 Theory
7.2 Results
7.3 Combination
7.4 Ring buffers
7.5 Combining SBPN2 and Ring buffers
7.6 Overall improvement
8 Conclusion
8.1 Best results
8.2 Future research
8.2.1 Do tests at time of changing bandwidth
8.2.2 Sending all packets
8.3 Summary
8.4 Summary of Contributions
8.5 Future Research
8.5.1 Investigating changes to real time media applications to support API
8.5.2 Investigating and defining APIs to interface to applications
9 Appendices
9.1 Development of DCCP
9.1.1 Implementing floating point in the kernel and producing a lookup table
9.1.2 64 bit division issue
9.1.3 Implementing loss interval for non-loss interval
10 Acronyms
List of Figures
List of Tables
Chapter 1
A wireless sensor network (WSN) consists of sensor nodes capable of collecting information from the environment and communicating with each other via wireless transceivers. The collected data is delivered to one or more sinks, generally via multi-hop communication. The sensor nodes are typically expected to operate on batteries and are often deployed in not-easily-accessible or hostile environments, sometimes in large quantities, so it can be difficult or impossible to replace their batteries. The sink, on the other hand, is typically rich in energy. Since sensor energy is the most precious resource in a WSN, efficient utilization of energy to prolong the network lifetime has been the focus of much WSN research. Communication in a WSN has a many-to-one property, in that data from a large number of sensor nodes tends to be concentrated into a few sinks. Since multi-hop routing is generally needed to save energy for sensor nodes distant from the sinks, the nodes near a sink can be burdened with relaying a large amount of traffic from other nodes.
Sensor nodes are resource constrained in terms of energy, processor, memory, communication range and bandwidth. Sensor nodes run on limited battery power, and it is very difficult to replace or recharge batteries when nodes die, which affects network performance. Energy conservation and harvesting increase the lifetime of the network. To optimize the communication range and minimize energy usage, we need to conserve the energy of the sensor nodes. Sensor nodes are deployed to gather information, and it is desirable that all nodes work continuously and transmit information for as long as possible; this is the lifetime problem in wireless sensor networks. Sensor nodes spend energy transmitting, receiving and relaying packets, so designing routing algorithms that maximize the lifetime until the first battery expires is an important consideration. There are other objectives, such as scalable architecture, routing and latency. Most applications of wireless sensor networks are envisioned to handle critical scenarios where data retrieval time is crucial, i.e., delivering the information of each individual node to the base station as fast as possible becomes an important issue. It is important to guarantee that information is successfully received by the base station the first time, instead of being retransmitted. In wireless sensor networks, data gathering and routing are challenging tasks due to their dynamic and unique properties. Many routing protocols have been developed; among them, cluster-based routing protocols are energy efficient, scalable and prolong the network lifetime. Sensor nodes periodically send the gathered information to the base station. Routing is an important issue in data-gathering sensor networks, while sleep-wake synchronization is the key issue for event-detection sensor networks.
A wireless sensor network (WSN) consists of spatially distributed autonomous sensors to monitor physical or environmental conditions, such as temperature, sound, pressure, etc. and to cooperatively pass their data through the network to a main location. The more modern networks are bi-directional, also enabling control of sensor activity. The development of wireless sensor networks was motivated by military applications such as battlefield surveillance; today such networks are used in many industrial and consumer applications, such as industrial process monitoring and control, machine health monitoring, and so on. Size and cost constraints on sensor nodes result in corresponding constraints on resources such as energy, memory, computational speed and communications bandwidth. The topology of the WSNs can vary from a simple star network to an advanced multi-hop wireless mesh network. The propagation technique between the hops of the network can be routing or flooding.
1.1 The problem
1.1.1 Increasing media traffic and congestion control
There is an increasing demand for the use of real time media on the Internet and private networks. Applications such as WhatsApp, Viber, WeChat and SnapChat are widely used on mobile phones over the Internet [3], and these feature images and video as components. Live television is not yet widely deployed on the Internet, but there is potentially high consumer demand for it as the rate of broadband usage around the world continues to rise [4].
The ongoing debate around net neutrality indicates that there is congestion occurring on the Internet and that ISPs are carrying out traffic shaping because of this. The congestion may be due to operational constraints such as existing network segments running at full capacity, or due to financial constraints such as cost of bandwidth purchased from third parties. Regardless of the cause of the congestion the consumer suffers from a reduced quality experience.
Quality of Service (QoS) is deployed on network routers and other middlebox equipment to give priority to network streams. From personal experience, and from discussions with network managers, it is used particularly by medium to large businesses. This prioritization applies to the entire network stream from devices or PCs, not within a stream. However, it has been shown that users have a strong preference for images over audio or text [5], and want them delivered in a timely manner [6]. As such there is a need to prioritize data within a stream, as well as between streams.
With real time media, if the rate at which data can be received falls below the rate the application has requested, quality will drop until the rate improves or the application protocol changes to a lower transmission rate, for example by increasing compression or changing codecs where possible. During this period of lower quality the user experience suffers because data does not arrive, or does not arrive on time. The transmission scheme for real time media needs to be able to adapt to these situations. This is particularly applicable when the amount of data being transmitted is being reduced, or is already at the lowest possible rate. It is appropriate to consider whether the traditional methods of data transmission for these applications are still the best possible.
1.1.2 The proposed solution
The hypothesis of this thesis is that by prioritizing data within a network stream the user experience can be improved. In a typical application data will be prepared and transferred to the transport layer where it is segmented into packets. A packet is queued and then sent when it reaches the head of the queue, after all other data before it has been transmitted. This however takes no account of time requirements on the delivery of data, or the possibility of different priority for different packets. It is proposed that the data can be re-ordered or dropped and that more of the most important data can be transmitted.
This thesis discusses a scheme where the transport layer, in cooperation with the application, determines which packets should be sent and in what order. The motivation for this is two-fold. It seems reasonable for the transport layer to prioritize sending the most useful data first, and even to discard packets that will no longer be useful to the receiver. It also seems reasonable for image packets to be sent before text packets, as the image contributes more than text to a user's experience in a real time image sharing situation.
The hypothesis being put forward is that real time media applications will give a higher quality result when there is suitable feedback between the network stack and the application. It is proposed that, by using cross-layer communication about the type of data and its time constraints, the data can be re-ordered or dropped and more of the most important data transmitted. It is important to deliver the 'right' data as well as to increase throughput. With real time media applications it is not sufficient to just deliver more data; it is also important to deliver the highest priority data.
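A minimal sketch of this cross-layer idea, assuming the application tags each packet with a numeric priority and an absolute expiry time (the field names and one-way-delay model are illustrative assumptions, not the thesis API): the sender first discards packets that cannot arrive before they expire, then sends the most important remaining packet.

```python
def next_packet(queue, now, one_way_delay):
    """Pick the best packet to send, pruning packets that would arrive late.

    queue: list of dicts with 'priority' (lower = more important) and
    'expiry' (the absolute time by which the packet must arrive).
    """
    # Discard anything that cannot reach the receiver before it expires.
    queue[:] = [p for p in queue if now + one_way_delay <= p["expiry"]]
    if not queue:
        return None
    # Among viable packets, send the most important, earliest-expiring one.
    best = min(queue, key=lambda p: (p["priority"], p["expiry"]))
    queue.remove(best)
    return best
```

Note that the drop decision happens at the sender, which is what lets one-way transmission benefit without any change to the receiver.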
1.2 Overview of thesis
This thesis demonstrates the improvements that can be made to real time images by changing the transmission of packets using an algorithm called the Priority Packet Scheduling Scheme. This is done by implementing the algorithm in the NS2 simulator environment.
This chapter states the thesis problem and outlines the contribution made to advancing quality in real time media. Chapter 2 discusses the use of congestion control and quality adaptation for real time media, and examines how quality can be altered by different encoding methods. Congestion control is also introduced and the reasons for it discussed. Bringing media quality and congestion control together, the literature on the interaction between the two is reviewed.
PPSS is introduced in Chapter 3. PPSS is proposed to improve the timeliness of image, and to a lesser degree video, that arrives thus improving the user experience. PPSS looks at which data to send next considering the priority of the data and the expiry time of that data. With real time media applications it is not sufficient to just deliver more data, it is also important to deliver the highest priority data.
In Chapter 4 the implementation of the necessary tools and software to test the proposed improvements is covered. The framework takes a practical implementation and testing approach, rather than a theoretical, simulated one. A model of image traffic was constructed, and the steps taken to create a test suite are described.
In Chapter 5 the results from the tests carried out are presented and analysed. These tests use varying network conditions, hop counts, priorities, Round Trip Times (RTTs) and queue lengths to determine which factors affect the performance of the PPSS algorithm compared to a non-prioritized algorithm. From these findings the algorithm is refined and tested. The chapter also compares the use of First In First Out (FIFO) queues to see whether these offer improvements to video quality.
The conclusions are in Chapter 6 and ideas for future investigation are also discussed.
1.3 Contribution of this work
The main methods of improving quality at present are QoS on middleboxes and rate changes from codecs. The contribution of this thesis is in showing that a significant improvement to quality can also be made by passing information from a real time media application to the operating system about the priority and expiry time of data, which the operating system can then use.
The PPSS algorithms were created to share information between the application and the transport layer. PPSS reorders packets, and also discards packets that can no longer arrive in time.
Chapter 2
2.1 Introduction
This chapter sets the background for the thesis and outlines related research.
The properties that are desirable in Ad-Hoc Routing Protocols are Distributed operation, Loop free, Demand based operation, Unidirectional link support, Security, Power conservation, Multiple routes, and Quality of Service Support.
In this chapter the problem of congestion is introduced, followed by how it affects real time media applications. The chapter then outlines the two main ways this has been addressed in the past: improvements to network protocols, and the interaction between media applications and the network protocols. This background prepares the way for our research, which combines elements of both to propose a way to improve QoS quality in real time media applications.
In Section 2.2 the problem of congestion is introduced and it is discussed in relation to the Internet, and real time media requirements. Two differing approaches to dealing with congestion are discussed – using Quality of Service and dealing with congestion using the end-to-end principle.
Section 2.3 describes different network protocols and their characteristics, and discusses how each protocol deals with congestion. The two main transport layer protocols used on the Internet, TCP and UDP, are discussed, along with the newer transport layer protocol SCTP and RTP, which is commonly used by media applications. The AODV routing protocol is also discussed. The hop-greedy and priority algorithms used as part of PPSS are introduced, along with the background to their creation.
Section 2.4 discusses how network protocols interact with media applications. In this section previous work aimed at improving the transmission of media is examined. Differing strategies are examined such as queue management, experimental network protocols, and modifications to applications.
2.2 Congestion
The Internet is not one network but is a collection of networks joined together and the word Internet is derived from internetwork or interconnecting networks. The Internet is based on data being divided into packets that are transmitted and received using Internet Protocol (IP) as the underlying protocol at the network layer. IP is an unreliable protocol in that it does not guarantee that traffic will be received at the other end. An IP packet will traverse a path from the source to the destination on the Internet potentially travelling over a number of networks and connections.
As IP is unreliable, packets may be discarded, corrupted or delayed. Congestion occurs on a network whenever there is any form of 'bottleneck', which may be due to factors such as links not having enough capacity for the traffic being transmitted, or routers not being able to forward traffic at the rate it arrives. Congestion results in transmitted packets being lost or delayed. Packets may also be lost due to factors such as radio interference on wireless links, which may then generate congestion if packets need to be transmitted again. In practice it can be difficult to tell whether a loss was caused by congestion unless tools such as Explicit Congestion Notification (ECN) [7] are utilized, and at present ECN is not yet widely used [8] on the Internet. The focus of this thesis is on all the effects of congestion, rather than just loss.
2.2.1 Media applications and congestion
The quality of media applications depends on a range of factors such as frame rate, coder-decoder (codec) and picture resolution. The frame rate is how many frames of image are transmitted per second. These factors lead to a bit rate, which is how much data is transferred per second. Real time media applications, like other applications, face congestion wherever a bottleneck restricts traffic, and this congestion results in lost or delayed traffic. With real time media, congestion manifests itself as data that arrives late or not at all, degrading playback. It is therefore desirable for real time media applications to respond to congestion effectively, both to improve the user experience and to improve the stability of networks.
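As a back-of-envelope illustration of how these factors combine into a bit rate (the resolution, frame rate and compression ratio below are arbitrary examples, not measurements from this work):

```python
def video_bit_rate(width, height, bits_per_pixel, fps, compression_ratio):
    """Raw bits per frame, times frames per second, reduced by the codec."""
    raw_bits_per_frame = width * height * bits_per_pixel
    return raw_bits_per_frame * fps / compression_ratio

# e.g. 320x240 picture, 24-bit colour, 15 frames/s, 100:1 codec compression
rate = video_bit_rate(320, 240, 24, 15, 100)   # bits per second
```

Lowering any one factor (frame rate, resolution, colour depth, or compression quality) lowers the bit rate, which is exactly the set of levers an adaptive application can pull under congestion.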
From the application's perspective, the desired result is that it adapts to the congestion while preserving the user experience; timeliness may be more important than having a perfect picture. From a network perspective it is important to adjust the transmission rate during periods of congestion to stop networks becoming overloaded.
Once the application has been notified of congestion the application needs to adjust its behavior. The application can adjust its behavior by reducing the quality of the transmission, dropping parts of the transmission or stopping the transmission.
In the history of the Internet this has not proved a major problem, as most applications have not had a real time requirement where packets need to be delivered in a fixed time frame. For example, e-mail is delivered on a queued basis using the Simple Mail Transfer Protocol (SMTP) and normally arrives quickly, but can take a number of hours. Another example is web traffic, which uses the Hypertext Transfer Protocol (HTTP); the emphasis is on all content being delivered, rather than on delivery in the absolute quickest time. Services such as email and web pages may seem to be real time, but they are normally just delivered very quickly.
Real time traffic needs to meet a deadline. Media applications have a different set of requirements: the data must be delivered within a specified time frame or it is not worthwhile, particularly for live applications.
2.2.2 Approaches to Ensuring Service Quality
With the Internet the individual hosts are responsible for controlling congestion, rather than congestion being centrally managed on the network as often occurs on carrier or corporate networks. Individual hosts being responsible for application specific functions is known as the end-to-end (e2e) principle as described by Saltzer et al. [9]. As such the responsibility for congestion control occurs within the transport layer protocols on the hosts. Different transport protocols take different approaches as described in Section 2.3.
This contrasts with Quality of Service (QoS), where every node that a packet passes through upholds the prioritization rules. Much of the earlier research on real time media focused on building QoS that gives guaranteed quality and bandwidth for real time media applications [10] [11] [12] [13].
As an example of the difficulty of making changes on the Internet, Medina discusses the adoption rate of Transmission Control Protocol (TCP) extensions and shows that it takes many years for changes to be adopted. Recognizing this, Bolot and Turletti [14] put in place a feedback loop to measure the data rate achievable over the Internet (or a private network) without using QoS, and then adjusted the output bit rate accordingly. They decreased the bit rate by decreasing the frame rate, changing movement detection thresholds and decreasing frame quality through changes to the quantizer. Movement detection reduces the amount of data transmitted by looking at changes between frames, and quantization is the process of mapping a large range of values to a smaller range of values, such as reducing the number of colors used in a frame.
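The feedback loop above can be sketched as a simple rate controller that probes upward when the network is clear and backs off when loss is reported. The thresholds and constants here are illustrative assumptions, not the values used by Bolot and Turletti.

```python
def adapt_bit_rate(rate, loss_fraction, min_rate=64_000, max_rate=2_000_000):
    """One step of a loss-driven rate feedback loop (rates in bits/s)."""
    if loss_fraction > 0.02:
        # Noticeable loss reported by the receiver: back off quickly.
        rate = max(min_rate, rate * 0.75)
    else:
        # Network looks clear: probe upward slowly.
        rate = min(max_rate, rate + 16_000)
    return rate
```

The chosen rate would then be realised by the levers described above: frame rate, movement-detection thresholds and quantizer settings.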
Hui Tian, Hong Shen and Matthew Roughan proposed a method to place sensor nodes using a minimal number of nodes to maximize the coverage area when the communication radius is not less than the sensing radius, which leads to regular topologies for WSN deployment. Their findings improved WSN topology lifetime by more than eight times on average, significantly better than existing alternatives. The drawback is that it considers WSNs that are mostly static with a small number of mobile relays, and it is not shown to be practical for dynamic WSNs.
Soochang Park, Euisin Lee, Min-Sook Jin, and Sang-Ha Kim showed that their proposed approach is scalable in maintenance overhead, performs efficiently in routing and provides continuous data delivery during user movement. However, the routing transitions during the movement of the mobile user are not optimized in their discussion.
2.3 Protocols
In Section 2.2.2 it was discussed that e2e principles mean that congestion is handled by the individual hosts. The response to congestion is done at the transport layer which is the layer above the network layer (IP). This section discusses the approach taken by different transport layer protocols.
2.3.1 TCP
Transmission Control Protocol (TCP) [15] is a connection based, reliable protocol that is widely used on networks and the Internet. Reliability means that TCP packets are acknowledged as received, to ensure that every packet has arrived; if packets are not received then they are retransmitted.

TCP congestion control
During the 1980s the Internet became unstable, in large part due to congestion caused by TCP traffic. Packets were being transmitted again because acknowledgements had not been received for packets that had already arrived, or were still in transit. As queues built up, the Round Trip Time (RTT) would increase and the retransmission rate would also rise, further exacerbating the issue and leading to a state that Nagle called 'congestion collapse'.
Nagle [16] and Van Jacobson [17] studied this and proposed a number of changes to TCP to respond to this congestion, which has become commonly known as congestion control.
Nagle proposed in 1984 that no new data be sent until an acknowledgement is received for previous data, as this prevents spurious retransmission. Nagle also proposed improved use of Internet Control Message Protocol (ICMP) source quench. ICMP source quench packets are sent when a device is running low on buffer space; they request that the sender reduce its sending rate so that fewer packets arrive, reducing the risk of the buffer space being exhausted.
Van Jacobson proposed in 1988 that if conservation of packets was observed then TCP flows would be generally stable. Conservation of packets means that, when a connection is running without congestion, a packet is only put on the network after an old packet is taken off the network. This was implemented with a congestion window, which is dynamically resized until the connection reaches an initial state of stability and is adjusted as conditions change. The congestion window sets a maximum amount of unacknowledged packets that can be in flight at any point in time; additional packets are not added when the window is full until another is removed on receipt of an acknowledgement. The congestion window is initially set to one packet and is then increased by one packet per acknowledgement, which has the effect of almost doubling the window size per RTT. This continues until the maximum window size is reached, the slow start threshold is reached, or congestion occurs. Once the slow start threshold is exceeded, the congestion window increases by at most one packet per RTT. If congestion occurs (detected through multiple duplicate acknowledgements or a timeout) then the congestion window and slow start threshold are reduced. A more detailed explanation is provided by Stevens [18] and RFC 5681.
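The window behaviour described above can be traced with a toy model. This is a deliberate simplification for intuition only: real TCP tracks bytes rather than packets and includes fast recovery and timeout handling, none of which are modelled here.

```python
def evolve_cwnd(cwnd, ssthresh, loss):
    """One RTT of simplified congestion-window evolution (in packets)."""
    if loss:
        ssthresh = max(2, cwnd // 2)   # halve on loss (AIMD decrease)
        cwnd = ssthresh                # simplified: skip fast recovery
    elif cwnd < ssthresh:
        cwnd *= 2                      # slow start: ~doubles each RTT
    else:
        cwnd += 1                      # congestion avoidance: +1 per RTT
    return cwnd, ssthresh

trace = []
cwnd, ssthresh = 1, 8
for rtt in range(6):
    trace.append(cwnd)
    cwnd, ssthresh = evolve_cwnd(cwnd, ssthresh, loss=(rtt == 4))
```

With a slow-start threshold of 8, the window roughly doubles each RTT, grows by one packet per RTT once past the threshold, and collapses when a loss is injected at the fifth RTT.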
These changes by Nagle and Van Jacobson are credited with preventing ongoing TCP collapse [19] [20] and have given the Internet stability as it has undergone rapid growth.
TCP and real time media traffic
At present TCP does not provide direct support for adjusting to congestion in a way that is friendly to real time media applications as TCP attempts to resolve packet loss by retransmission of packets and scaling back transmission rapidly.
With real time media applications, retransmission in the event of loss or corruption of a packet is usually not the desired outcome, as the data is time sensitive and retransmission adds delay. Because TCP data is sent and received in order, a retransmission causes the receiving operating system to wait for the retransmitted packet before any further data is given to the receiving application. This causes jitter, which is a variation in the delay between the sender and the receiver. The receiving application must either drop the data, with a subsequent drop in quality, or use a larger playout buffer to smooth over this jitter. A playout buffer is a buffer at the receiver which holds data for a period of time before it is played, to allow the necessary packets to arrive. Retransmitting data also uses extra bandwidth, and if the packet was dropped due to congestion (as opposed to an error in transmission) then the retransmitted data will probably take the place in the queue of more timely data. If the TCP buffers are large enough and a queue builds up, the time the data is held in the queue could result in all packets arriving later than required for the real time media application. The way most TCP implementations adjust their congestion window is useful for other applications but tends to be detrimental for many real time applications because of the sudden changes in transmission rate that can occur. TCP uses an Additive Increase/Multiplicative Decrease (AIMD) approach: it increases its rate slowly but decreases it rapidly. In many cases TCP will halve the congestion window when a packet is lost.
If the allowed transmission rate is reduced rapidly it may be required to change the quality of the video stream by increasing the compression rate, dropping the resolution, dropping the frame rate or other measures. The allowed rate will then slowly increase again, with quality able to increase until it then experiences congestion again and then the transmission rate has to be reduced rapidly again. This creates an oscillation type effect of alternating between lower and higher quality streams constantly which does not lead to a good user experience.
TCP is also not a datagram based protocol and as such can combine and split packets, which can have a detrimental effect on real time media transmission. If a small packet is queued for transmission, TCP implementations will often wait for a period of time to see if further data is going to be transmitted so that later packets can be combined and only one packet transmitted. This increases delay and jitter.
Improvements to TCP
Development work continues on TCP and congestion control with variations to the TCP protocol such as Vegas [21], Westwood [22] and Binary Increase Congestion control (BIC) [23] being produced. Most of these improvements are based on experiments but there are also efforts such as Paganini et al. [24] who are approaching TCP congestion control based on provable mathematical modeling.
RFC 4653 [25] proposes to improve TCP in its response to events that are not congestion related, particularly packet reordering. The RFC proposes that the test for congestion should change from three duplicate acknowledgements to a larger number, as this would allow more time for out of order packets to arrive.
Floyd et al. introduce quick start [26] that looks at setting up the maximum rate needed through TCP options therefore bypassing the need for slow start. For quick start to be implemented every router that the connection passes through must support it which is a high barrier for widespread adoption on the Internet.
Goel [27] reduces the TCP send buffer to reduce the latency of TCP packets sent. With a larger send buffer, larger queues can result, which increases latency as packets are delayed before being sent. This is an effective approach but can have the negative effect of reducing the send rate, particularly with high RTTs. Goel explicitly targets this work at lower RTT networks.
The importance of TCP
TCP is the default protocol for most traffic on the Internet; John and Tafvelin [28] have shown that over 90% of traffic is TCP. As such it is important that any changes to protocols are friendly to TCP traffic flows.
2.3.2 UDP
User Datagram Protocol (UDP) [29] is a connectionless, unreliable protocol without congestion control. Originally UDP was used for services such as the Domain Name System (DNS), where a single request packet is transmitted and a single packet is received as a response; for such exchanges a connection based protocol, congestion control and retry mechanisms at the transport layer are not needed.
Real time media applications will often attempt to use UDP to overcome the limitations of TCP described in the previous section. As UDP is datagram based, it overcomes some of the timing problems TCP poses for real time media applications: provided the network is able to, data will be transmitted when requested rather than being combined or split. The unreliable delivery of UDP means that packets are not guaranteed to reach their destination and will not be retransmitted if they are lost. In most cases this is an advantage for real time traffic, as there will be more recent data to transmit, rather than retransmitting stale data. The primary advantage of UDP is therefore that the application developer can choose the timing of packet transmission and how the application responds to network conditions.
A common approach for UDP based media applications is to implement their own form of congestion control. This is not optimal: each application has to implement its own congestion control, which results in a large amount of duplicated effort, or in congestion control which is inefficient or unfair to other network traffic. This could be overcome by standardized libraries, although their use does not appear widespread. In the worst case a real time media application may use no congestion control at all and cause packets to be discarded or create congestion in a network.
Guo et al. [30] have shown that most streaming multimedia traffic uses TCP instead of UDP. This is because the lack of a session makes it difficult for UDP to traverse many Network Address Translation (NAT) [31] devices such as home routers or company firewalls. As such, some media applications are written to use either UDP or TCP and will revert to TCP if they are unable to use UDP. This is becoming a bigger problem as home users switch to broadband connections behind firewalls, utilize NAT due to limited IPv4 address space and extensively use real time media applications. Some applications such as Skype have largely overcome this, firewalls are improving support for UDP, and IPv6 will, in the longer term, overcome these problems.
In Balan et al. [32] VoIP quality is compared between UDP, TCP and DCCP. UDP gave the best quality with high loss rates, followed by TCP and the lowest quality was DCCP. TCP quality was worse than UDP because of the retransmissions required which lead to delays in receiving packets. They proposed that DCCP quality was lower than the others because the congestion control was more simplistic than TCP congestion control.
Phelan discusses in an Internet Draft a CCID for DCCP called Media Friendly Rate Control (MFRC) [33]. Phelan recognizes that media applications can change their rate quickly, including between frames, and will often have periods of silence where network activity can drop to almost zero. CCID2 and CCID3 respond to congestion by using slow start and will also drop the allowed transmission rates in idle periods. MFRC proposes that applications can immediately start transmitting at their desired rate (provided it is within a pre-defined limit) and that applications can quickly return to a proven transmission rate provided congestion is not experienced.
2.3.3 RTP
Real-time Transport Protocol (RTP) [34] was designed as a protocol for transferring real time data that sits above the transport layer and is commonly used for media applications on the Internet. It includes a control protocol, the Real-time Transport Control Protocol (RTCP), which monitors delivery and reports data including jitter and loss to the sender, information that can be used by real time media applications. RTP is usually implemented on top of UDP but can be implemented on top of other transport layer protocols as well. It is not discussed further in this thesis, although given its common usage it may be practical to implement aspects of our findings in real time media applications using RTP.
2.3.4 SCTP
Floyd and Fall discussed in a 1999 paper that the main danger to the stability of the Internet is lack of congestion control causing congestion collapse due to flows that do not reduce their transmission rates when packet drops occur, in particular UDP with its lack of congestion control as outlined in the previous section. A number of protocols such as Stream Control Transmission Protocol (SCTP) [35] have been proposed for newer applications such as real time media which aim to overcome this. Although the Internet has not suffered widespread congestion collapse, risks remain, and Internet Service Providers (ISPs) often use ‘traffic shaping’ for applications such as BitTorrent to protect their networks from heavy traffic.
Stream Control Transmission Protocol (SCTP) is a protocol that is at Proposed Standard status with the IETF and is a congestion controlled, message and stream based reliable transport.
SCTP shares with UDP the characteristic that it is message based. This means that as soon as data is able to be sent, it can be sent, unlike TCP, which may wait to combine packets. If a packet is too large then SCTP will split it (like TCP, and unlike UDP) but will not combine the data with other packets. SCTP also shares with UDP the option of out of order delivery, removing the need to wait for a retransmission before any further data can be received, which is beneficial for real time media applications. It is stream based, which means that audio and video can be separated into different streams if desired.
The similarities that SCTP has with TCP are that it is session based, has congestion control based on TCP congestion control, and is a reliable protocol, so it retransmits lost packets. The congestion control methods and retransmission mean that SCTP faces the same issues for real time media traffic as those outlined for TCP above.
2.3.5 Adhoc On-Demand Distance Vector (AODV) Routing Protocol:
The AODV routing protocol uses an on-demand approach for finding routes; that is, a route is established only when it is required by a source node for transmitting data packets. It employs destination sequence numbers to identify the most recent path. The major difference between AODV and Dynamic Source Routing (DSR) stems from the fact that DSR uses source routing, in which a data packet carries the complete path to be traversed. In AODV, however, the source node and the intermediate nodes store the next-hop information corresponding to each flow for data packet transmission. In an on-demand routing protocol, the source node floods the Route Request packet in the network when a route is not available for the desired destination. It may obtain multiple routes to different destinations from a single Route Request. The major difference between AODV and other on-demand routing protocols is that it uses a destination sequence number (DestSeqNum) to determine an up-to-date path to the destination. A node updates its path information only if the DestSeqNum of the current packet received is greater than the last DestSeqNum stored at the node.
A Route Request carries the source identifier (SrcID), the destination identifier (DestID), the source sequence number (SrcSeqNum), the destination sequence number (DestSeqNum), the broadcast identifier (BcastID), and the Time to Live (TTL) field. DestSeqNum indicates the freshness of the route that is accepted by the source. When an intermediate node receives a Route Request, it either forwards it or prepares a Route Reply if it has a valid route to the destination. The validity of a route at the intermediate node is determined by comparing the sequence number at the intermediate node with the destination sequence number in the Route Request packet. If a Route Request is received multiple times, which is indicated by the BcastID-SrcID pair, the duplicate copies are discarded. All intermediate nodes having valid routes to the destination, or the destination node itself, are allowed to send Route Reply packets to the source. Every intermediate node, while forwarding a Route Request, enters the previous node address and its BcastID. A timer is used to delete this entry in case a Route Reply is not received before the timer expires. This helps in storing an active path at the intermediate node, as AODV does not employ source routing of data packets. When a node receives a Route Reply packet, information about the previous node from which the packet was received is also stored, in order to forward the data packet to that node as the next hop toward the destination.
DSR includes source routes in packet headers, and the resulting large headers can sometimes degrade performance, particularly when the data contents of a packet are small. AODV attempts to improve on DSR by maintaining routing tables at the nodes, so that data packets do not have to contain routes. AODV retains the desirable feature of DSR that routes are maintained only between nodes which need to communicate. Route Requests (RREQ) are forwarded in a manner similar to DSR. When a node re-broadcasts a Route Request, it sets up a reverse path pointing towards the source; AODV assumes symmetric (bi-directional) links. When the intended destination receives a Route Request, it replies by sending a Route Reply (RREP). The Route Reply travels along the reverse path set up when the Route Request was forwarded. The Route Request includes the last known sequence number for the destination. An intermediate node may also send a Route Reply provided that it knows a more recent path than the one previously known to the sender. Intermediate nodes that forward the RREP also record the next hop to the destination. A routing table entry maintaining a reverse path is purged after a timeout interval, and a routing table entry maintaining a forward path is purged if not used for an active route timeout interval.
A neighbor of node X is considered active for a routing table entry if the neighbor sent a packet within the active route timeout interval which was forwarded using that entry. Neighboring nodes periodically exchange hello messages. When the next hop link in a routing table entry breaks, all active neighbors are informed. Link failures are propagated by means of Route Error (RERR) messages, which also update destination sequence numbers. When node X is unable to forward packet P (from node S to node D) on link (X, Y), it generates a RERR message. Node X increments the destination sequence number for D cached at node X, and the incremented sequence number N is included in the RERR. When node S receives the RERR, it initiates a new route discovery for D using a destination sequence number at least as large as N. When node D receives the route request with destination sequence number N, node D will set its sequence number to N, unless it is already larger than N. Routes need not be included in packet headers, and nodes maintain routing tables containing entries only for routes that are in active use. At most one next hop per destination is maintained at each node, whereas DSR may maintain several routes for a single destination. Sequence numbers are used to avoid old or broken routes and prevent the formation of routing loops. Unused routes expire even if the topology does not change.
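The freshness rule above (adopt a route only when its DestSeqNum is newer, or equally new with fewer hops) can be sketched as follows. The routing-table layout and function name are illustrative, not taken from the AODV specification:

```python
def maybe_update_route(table, dest, next_hop, dest_seq, hop_count):
    """Adopt a route to `dest` only if its sequence number is newer,
    or equally new but with a shorter path (a common tie-break)."""
    current = table.get(dest)
    if current is None or dest_seq > current["seq"] or (
        dest_seq == current["seq"] and hop_count < current["hops"]
    ):
        table[dest] = {"next_hop": next_hop, "seq": dest_seq, "hops": hop_count}
        return True   # route adopted
    return False      # stale or no better: keep the stored route
```

For example, a reply carrying sequence number 4 is discarded once the table already holds sequence number 5 for the same destination, while an equal-sequence reply with a shorter path replaces the stored entry.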
Routes are established on demand and destination sequence numbers are used to find the latest route to the destination, giving lower delay for connection setup.
However, AODV cannot handle unidirectional links, multiple Route Reply packets in response to a single Route Request packet can lead to heavy control overhead, and periodic beaconing leads to unnecessary bandwidth consumption.
2.4 Network protocol interaction with media applications
The previous section discussed how network protocols have been adapted to cater for media applications; this section examines the interaction between applications and network protocols. de Cuetos and Ross [36] present what they call a ‘unified optimization framework’ where they combine scheduling, Forward Error Correction (FEC) and error concealment to improve the performance of video transmission in lossy environments. FEC is where extra data is transmitted so that, if there is an error, it may be possible to reconstruct the data. Error concealment is a technique where information is gathered from other sources such as preceding frames to replace missing data.
Wang, Banerjee and Jamin [37] examine whether a protocol being TCP friendly means that it is also media friendly. TFRC is used in their tests and they conclude that it is not media friendly, as less than the fair bandwidth is used and large variations in rates occur even in steady state networks. Fair bandwidth here means obtaining the same bandwidth as a TCP flow would. Phelan [38] also discusses the limitations of TFRC and how real time media applications such as VoIP can adapt to its limitations and variations.
Rejaie et al. [39] and Feamster et al. [40] propose that video codecs should add and remove video layers as network conditions permit. Rejaie, Handley and Estrin further the earlier work of Bolot and Turletti by adjusting the bit rate through adding and removing layers of detail for video and audio transmission. In their approach they smooth the transitions between layers so that the buffers are neither drained nor overflowed. As part of this they average the bandwidth available and take a conservative approach to adding extra layers to the transmission. This work is further developed in Rejaie and Reibman [41], where the transport and the encoder become ‘aware’ of each other to enable decisions about changing the number of layers transmitted and adjusting for loss events.
In Gharavi [42] a similar scheme is proposed but the transmission rate is calculated by measuring the number of hops between the source and destination and adjusting the number of frames per second with more hops resulting in a lower transmission rate. If the number of hops increases or decreases during the session then the transmission rate is decreased or increased respectively.
Their research showed that this reduced the loss rate of packets when compared with making no changes as the number of hops changed.
In a study of multimedia streaming Guo et al. show that the median time to change to a lower bit-rate stream was around 4 seconds. This length of time indicates that there is scope to improve the user experience during the transition, while a lower bit-rate is being used because of congestion.
Feng [43] builds upon changing which video layers are used and proposes a priority queue for delivery of pre-recorded video streams, where the minimum needed to create a low quality video stream at the desired frame rate is sent first and enhanced quality is only sent after the high priority video layers have been sent. This ensures that the frame rate is maintained and the picture quality improves as network conditions improve. Krasic [44] investigates storing streaming video in multiple layers which are tailored to the type of client that is requesting the data. The streaming server, which they call priority-progress, then decides the most appropriate layers for the client and maps these to a priority order within each time segment. At the end of each time segment, if the data has not been transmitted, then the data is discarded by a ‘progress regulator’.
In Tsaoussidis [45] the Multimedia Transmission Protocol (MTP) is introduced, which is based on TCP Reno but without guaranteed reliability. Packet priority information is sent as a 2-bit field which determines whether packets are retransmitted or not. This enables MTP to avoid retransmitting data of lower priority, compared to TCP which retransmits all missing data. Timelined TCP (TLCTCP) [46] introduces the concept of tracking an expiry time for data. TLCTCP marks data with an expiry time and discards data if the expiry time is reached before the packet is sent. TLCTCP does not take the RTT into consideration but proposes that it would be worthwhile to do so. TLCTCP is a partially reliable protocol: if the data has not expired and retransmission is requested by the receiver then the data is retransmitted; if the data has expired, a more recent packet is sent instead as the expired packet no longer has value.
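The expiry-time idea behind TLCTCP can be sketched as a send queue that checks deadlines at transmission time. The class and method names are ours, and ordering by earliest expiry (rather than arrival order) is a design choice of this sketch, not of [46]:

```python
import heapq
import time

class ExpiringSendQueue:
    """Send queue that discards packets whose deadline passed before sending."""

    def __init__(self):
        self._heap = []   # (expiry_time, sequence, payload)
        self._seq = 0     # tie-breaker so payloads are never compared

    def put(self, payload, lifetime_s, now=None):
        """Queue a payload that is valid for lifetime_s seconds from now."""
        now = time.monotonic() if now is None else now
        heapq.heappush(self._heap, (now + lifetime_s, self._seq, payload))
        self._seq += 1

    def next_to_send(self, now=None):
        """Return the earliest-expiring payload that is still fresh, or None.

        Expired entries are silently dropped, mirroring TLCTCP's rule that
        stale data is never transmitted.
        """
        now = time.monotonic() if now is None else now
        while self._heap:
            expiry, _, payload = heapq.heappop(self._heap)
            if expiry > now:
                return payload
        return None
```

For instance, a 20 ms audio frame queued behind a congested link is simply skipped once its deadline passes, and the next fresh frame is sent instead.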
As a practical example of the importance of layers Wireless Home Digital Interface (WHDI) [47] protects the data in higher priority layers more than lower priority layers as shown in Figure 2.4.1 which was published by Ars Technica [48]. In WHDI control and audio data is completely protected and video data is given varying degrees of protection depending on the importance.
The specification of WHDI has not been published, but it is assumed that fully protected data relies on retransmits of lost data and that varying degrees of protection are achieved by differing degrees of FEC.
2.4.2 Transmission queues
Packets are queued in a buffer before they are transmitted. In a FIFO queue the data is transmitted in the order that it is received. TCP and UDP implementations typically use FIFO queues as packets are transmitted in the order received from the application. A LIFO queue works on the principle that the last packet received into a queue is the first transmitted. FIFO and LIFO queues can be fixed or variable length queues.
A ring buffer is a buffer in which packets are overwritten if they are not transmitted in time. It is described as a ring buffer because the start and the end of the queue can move around the buffer. If the buffer is full then the next packet inserted overwrites the oldest packet in the queue, which is discarded. This contrasts with FIFO and LIFO queues, where the newest packet is discarded if the queue is full.
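The two overflow policies can be contrasted in a short sketch: a bounded FIFO drops the incoming (newest) packet when full, while a ring buffer overwrites the oldest, a behaviour Python's `collections.deque` with `maxlen` provides directly. The helper function is ours, for illustration only:

```python
from collections import deque

def bounded_fifo_insert(queue, packet, capacity):
    """FIFO overflow policy: a full queue rejects the incoming packet."""
    if len(queue) >= capacity:
        return False          # the newest packet is the one discarded
    queue.append(packet)
    return True

# Ring-buffer overflow policy: a deque with maxlen evicts the oldest.
ring = deque(maxlen=3)
for pkt in ["p1", "p2", "p3", "p4"]:
    ring.append(pkt)          # appending "p4" silently discards "p1"
```

After the loop the ring holds the three most recent packets, which suits real time media where the newest data is the most valuable.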
2.5 Summary
This chapter has discussed congestion and measures taken to stop congestion collapse on the Internet. Congestion control has an effect on real time media applications and there is scope for improving the user experience on congested networks, particularly as network conditions change.
The previous work shows that improvements can be made in applications, in network protocols and in congestion control, but this past research has largely not covered the interaction between all these areas. We plan to show that by taking account of the interaction between all of these areas a significant improvement can be made to the quality of the user experience. In particular, if the network transport protocol shares information with the media application and a media friendly transport protocol is allowed to make decisions at transmission time about which data is most appropriate to transmit, we believe potentially significant improvements are possible.
In the following chapters we introduce how we propose to make improvements in this area and then test these ideas experimentally.
Chapter 3
Proposed solution
3.1 Introduction
In this chapter a way of improving the quality of service of audio for real time media is proposed: the Priority Packet Scheduling Scheme (PPSS). The background to the choice of transport protocol is also given.
3.2. Existing system:
Indeed, most existing Wireless Sensor Network operating systems use First Come First Served (FCFS) schedulers that process data packets in the order of their arrival; as a result, data packets can require a long time to be delivered to the relevant base station (BS).
3.3 Problem statement:
However, to be meaningful, sensed data has to reach the BS within a specific time period or before the expiration of a deadline. Additionally, real-time emergency data should be delivered to the BS with the shortest possible end-to-end delay.
Hence, intermediate nodes need to change the delivery order of data packets in their ready queue based on importance and delivery deadline. Furthermore, most existing packet scheduling algorithms for WSNs are neither dynamic nor suitable for large scale applications.
3.4 Proposed system
Data packets that are sensed at a node are scheduled among a number of levels in the ready queue. According to the priority of the packet and the availability of the queue, the node schedules the packet for transmission. The node also checks whether expired packets are buffered; if so, it deletes these dead packets.
Packet Header Fields
Separate queue availability reduces packet transmission delay. Because transmission delay is reduced, a node can go to sleep mode sooner, so energy management is possible. Since expired packets can be removed from the buffer, buffering delay is also reduced.
A packet with the following fields is designed to transfer data from source to destination, covering both real time and non real time packets: Source address, Destination address, Packet type, Packet id, Sending time, Life time, Hop count and Priority.
Each node can route data to the specified destination using the destination address, and the packet type indicates whether the packet is a data or control packet. The hop count is used to check whether the data is local or non-local, and the sending time and life time are used to determine the time to live (TTL) value of the packet. The priority field denotes whether the packet transferred from the node is a real time or non real time packet. For real time traffic the packet size will vary accordingly.
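As an illustration, the header fields listed above could be represented as a small structure. The field types, the expiry test and the local/non-local hop threshold are our assumptions, since the text gives only the field names:

```python
from dataclasses import dataclass

REAL_TIME, NON_REAL_TIME = 1, 0   # assumed encoding of the priority field

@dataclass
class PacketHeader:
    src: str           # Source address
    dst: str           # Destination address
    packet_type: str   # "data" or "control"
    packet_id: int
    sending_time: float
    life_time: float   # lifetime after sending_time (used as the TTL value)
    hop_count: int
    priority: int      # REAL_TIME or NON_REAL_TIME

    def expired(self, now):
        """A packet is dead once its lifetime has elapsed."""
        return now > self.sending_time + self.life_time

    def is_local(self, max_local_hops=1):
        """Hop count distinguishes local from non-local data
        (the one-hop threshold here is a hypothetical choice)."""
        return self.hop_count <= max_local_hops
```

A node could call `expired()` before scheduling a buffered packet and delete it instead of transmitting stale data.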
3.5 HOP Greed Algorithm:
Assume that each node knows the availability of its neighbour nodes and that the BS knows the availability of all nodes. is Source node, is Base Station node, is hop count, is neighbor list, is maximum path needs, is maximum hop count which is greater than the actual max possible hop count, is final route
1) gets the Neighbor info
b. Set
c. Foreach
i. If N
2. If
b. Break
d. ; // remove the path which not contains destination address
e. If
i. Foreach
1. If
3.6 Priority Packet Scheduling Algorithm
Let is buffer, is End of Buffer, sequence number, is string, is image
1) If ON
a. Info available
i. If information is
1. Encode
2. Store
4. Store in buffer
ii. If !
1. Generate dummy pkts
b. If
i. Set = 1,
ii. If
1. While ( )
b. If
ii. Break
iii. If
1. Send
2. Set
3. Set
a. If
i. If
ii. Else
b. Else
iv. Else
1. Wait
c. If recv
i. Store
3.7 Topology formation
The scheme assumes that nodes are virtually organized in a hierarchical structure. Nodes that are at the same hop distance from the base station (BS) are considered to be located at the same level: nodes in zones that are one hop and two hops away from the BS are at level 1 and level 2, respectively. The whole structure is divided into zones. Data is transmitted from the lowest level nodes to the BS through the nodes of the intermediate levels.
3.8 Priorities
Each sensor node holds three queues, and tasks are scheduled into these queues (pr1, pr2, pr3) according to their priorities. Real-time and emergency data have the highest priority; the priority of non-real-time data packets is assigned based on the sensed location (i.e., remote or local) and the size of the data. If two data packets have the same priority, the smaller data packet is given the higher priority.
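The priority assignment and tie-breaking rules described above can be sketched as follows. The queue indices are ours, and we assume here that remote non-real-time data outranks local data (since it has already consumed network resources); the text does not fix this ordering:

```python
def assign_queue(packet):
    """Map a packet to one of three priority queues (0 = highest).

    pr1: real-time/emergency data. pr2/pr3: non-real-time data,
    with remote data assumed to rank above locally sensed data.
    """
    if packet["real_time"] or packet["emergency"]:
        return 0
    return 1 if not packet["local"] else 2

def dequeue_next(queues):
    """Serve the highest-priority non-empty queue; within a queue,
    smaller packets go first when priorities are equal."""
    for q in queues:
        if q:
            q.sort(key=lambda p: p["size"])   # smaller packet wins the tie
            return q.pop(0)
    return None
```

A node would run `assign_queue` on arrival and `dequeue_next` whenever its slot allows another transmission.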
3.9 TDMA
Data packets of nodes at different levels are processed using a Time Division Multiple Access (TDMA) scheme. Every level is given a fixed time slot. If that slot is longer than the time calculated for the pr1 queue then all pr1 packets are processed FCFS, and whatever time remains is used for the pr2 and pr3 queues. If at any point the time calculated for a higher priority queue exceeds the total remaining time, only the higher priority queue's tasks are sent FCFS and no lower priority tasks are sent.
If the pr1 queue is empty then pr2 packets are sent, unless the remaining time is less than the total pr2 processing time. If a pr2 packet arrives while a pr3 packet is being served, it pre-empts pr3. If a highest priority packet arrives in the buffer while a pr3 packet is being processed, the context of pr3 is saved, the highest priority packet is provided with the route, and then the saved context of pr3 is processed.
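The slot budgeting described above (higher priority queues drain first, lower priorities use only the remaining time) can be sketched as follows. Pre-emption and context saving are not modelled, and the function shape is our own:

```python
def plan_slot(slot_time, queues, proc_time):
    """Decide which packets to send within one fixed TDMA slot.

    queues: FCFS queues, highest priority first.
    proc_time(p): transmission time of packet p.
    Higher-priority queues are drained first; lower-priority queues
    only consume whatever slot time remains.
    """
    remaining, plan = slot_time, []
    for q in queues:
        for packet in list(q):
            if proc_time(packet) > remaining:
                return plan           # out of time in this slot
            q.remove(packet)
            plan.append(packet)
            remaining -= proc_time(packet)
    return plan
```

With a 10-unit slot, pr1 packets costing 3+3 units and pr2 packets costing 2+5 units, the plan sends both pr1 packets and the 2-unit pr2 packet, leaving the 5-unit packet for a later slot.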
3.10 QUEUE
A queue is a particular kind of abstract data type or collection in which the entities in the collection are kept in order and the principal operations are the addition of entities to the rear terminal position, known as enqueue, and the removal of entities from the front terminal position, known as dequeue. This makes the queue a First-In-First-Out (FIFO) data structure: the first element added to the queue will be the first one to be removed. This is equivalent to the requirement that once a new element is added, all elements that were added before it have to be removed before the new element can be removed. Often a peek or front operation is also provided, returning the value of the front element without dequeuing it. A queue is an example of a linear data structure, or more abstractly a sequential collection.
Queues provide services in transport, and operations research where various entities such as data, objects, persons, or events are stored and held to be processed later. In these contexts, the queue performs the function of a buffer.
Queues are common in computer programs, where they are implemented as data structures coupled with access routines, as an abstract data structure or in object-oriented languages as classes. Common implementations are circular buffers and linked lists.
3.10.1 Queue implementation
Theoretically, one characteristic of a queue is that it does not have a specific capacity. Regardless of how many elements are already contained, a new element can always be added. It can also be empty, at which point removing an element will be impossible until a new element has been added again.
Fixed length arrays are limited in capacity, but it is not true that items need to be copied towards the head of the queue. The simple trick of turning the array into a closed circle and letting the head and tail drift around endlessly in that circle makes it unnecessary to ever move items stored in the array. If n is the size of the array, then computing indices modulo n will turn the array into a circle. This is still the conceptually simplest way to construct a queue in a high level language, although it does admittedly slow things down a little, because the array indices must be compared to zero and to the array size, which is comparable to the time taken to check whether an array index is out of bounds (a check some languages perform automatically). Nevertheless, this is the method of choice for a quick and dirty implementation, or for any high level language that does not have pointer syntax.
The array size must be declared ahead of time, but some implementations simply double the declared array size when overflow occurs. Most modern languages with objects or pointers can implement, or come with libraries for, dynamic lists. Such data structures may have no fixed capacity limit besides memory constraints. Queue overflow results from trying to add an element to a full queue, and queue underflow happens when trying to remove an element from an empty queue. A bounded queue is a queue limited to a fixed number of items.
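The modulo-n trick described above can be sketched as a fixed-capacity queue whose head index drifts around the circle, so stored items never move:

```python
class CircularQueue:
    """Fixed-capacity FIFO queue backed by a circular array."""

    def __init__(self, capacity):
        self._buf = [None] * capacity
        self._head = 0        # index of the front element
        self._size = 0

    def enqueue(self, item):
        if self._size == len(self._buf):
            raise OverflowError("queue is full")
        # The tail position wraps around using modulo arithmetic.
        tail = (self._head + self._size) % len(self._buf)
        self._buf[tail] = item
        self._size += 1

    def dequeue(self):
        if self._size == 0:
            raise IndexError("queue is empty")
        item = self._buf[self._head]
        self._head = (self._head + 1) % len(self._buf)
        self._size -= 1
        return item
```

Because only the indices move, enqueue and dequeue are both constant-time, regardless of how many times the head and tail have wrapped around.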
3.10.2 Priority queue
Boosted by a growing demand for sensor networks, the development of WSNs has improved steadily in terms of integration, survivability, and functionality. Recently, a number of medium access control (MAC) protocols have been proposed. The main advantages of these protocols are improved power conservation and scalability: sensor nodes are able to operate at low power consumption with lengthy sleep periods and form an autonomous network for data gathering. The primary objectives of these protocols are always energy conservation, scalability, and self-configuration, whereas priority access and temporal delay are often secondary and sometimes not considered at all. In [4], sensor network traffic is defined as event-based, since the traffic pattern is mainly correlated with phenomena observed in the sensor network. The number of transmissions carried out by the sensor nodes generally increases in response to an external event, so the network may become overloaded with traffic as sensors' transmissions multiply. This has been outlined as a potential problem, since some strategically important sensor nodes might not be able to communicate an event to the sink quickly. To understand this problem further, an experiment was conducted to evaluate network performance during a fire outbreak, where varying numbers of sensor nodes were configured to simulate the detection of a fire alert and begin competing for access to the shared channel.
In the basic queuing technique, packets are transferred one by one, with the first to enter being the first to leave. This technique is not suitable for a wireless sensor network because such a network carries different types of data packets: low priority and high priority. A high-priority packet should not wait for a long time merely because of first-come, first-served ordering. To confirm that no high-priority data is waiting, the node must first search the queue for high-priority data; only then may low-priority packets be transferred.
So in the priority queuing technique, the node searches the buffer for high-priority data by checking each packet's priority field. If the field holds the high value, the node treats the packet as high-priority data; otherwise it treats the packet as low-priority data. If no high-priority data is present in the queue, the node serves all packets in FIFO order; otherwise it sends the high-priority packets first.
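The two-level scheme just described (serve high-priority packets first, otherwise fall back to FIFO) can be sketched as follows. The class, the field values HIGH and LOW, and the method names are illustrative assumptions, not part of the scheme's specification.

```python
from collections import deque

HIGH, LOW = 1, 0  # assumed values for the packet's priority field

class TwoLevelQueue:
    """Sketch of the priority-queuing step: packets whose priority field
    holds the high value are transmitted before any low-priority packet;
    within each level, order stays FIFO."""

    def __init__(self):
        self._high = deque()
        self._low = deque()

    def enqueue(self, packet, priority):
        # Classify by the priority field, as the node does on arrival.
        (self._high if priority == HIGH else self._low).append(packet)

    def next_packet(self):
        # High-priority data, if present, is always served first.
        if self._high:
            return self._high.popleft()
        if self._low:
            return self._low.popleft()
        return None  # queue is empty
```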
Packet scheduling schemes can be classified based on the priority of data packets sensed at different sensor nodes. Non-preemptive: in non-preemptive priority packet scheduling, once a packet t1 starts execution, it carries on even if a higher-priority packet t2 arrives at the ready queue; t2 has to wait in the ready queue until the execution of t1 is complete. Preemptive: in preemptive priority packet scheduling, higher-priority packets are processed first and can preempt lower-priority packets, saving the context of the lower-priority packets if they are already running. Schedulers in the widely used operating systems for WSNs can likewise be classified as either cooperative or preemptive. Cooperative scheduling schemes can be based on a dynamic priority scheduling mechanism, such as Earliest Deadline First (EDF) or Adaptive Double Ring Scheduling (ADRS), which uses two queues with different priorities. The scheduler dynamically switches between the two queues based on the deadlines of newly arrived packets: if the deadlines of two packets differ, the shorter-deadline packet is placed into the higher-priority queue and the longer-deadline packet into the lower-priority one. Cooperative schedulers in TinyOS are suitable for applications with limited system resources and no hard real-time requirements. On the other hand, preemptive scheduling can be based on the Emergency Task First Rate Monotonic (EF-RM) scheme. EF-RM is an extension of Rate Monotonic (RM) scheduling, a static priority scheme whereby the shortest-deadline job has the highest priority. EF-RM divides WSN tasks into period tasks (PTs), whose priorities are decided by the RM algorithm, and non-period tasks, which have higher priority than PTs and can interrupt a running PT whenever required.
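The deadline-driven selection that EDF-style cooperative schedulers perform can be illustrated with a small dispatcher: whichever waiting packet has the nearest deadline is sent next. This is a generic EDF sketch under assumed names, not the ADRS or EF-RM implementation.

```python
import heapq

class EDFScheduler:
    """Illustrative Earliest-Deadline-First dispatcher: the packet with
    the nearest deadline is always dispatched next."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal deadlines stay FIFO

    def submit(self, packet, deadline):
        # The heap orders by (deadline, arrival sequence).
        heapq.heappush(self._heap, (deadline, self._seq, packet))
        self._seq += 1

    def dispatch(self):
        if not self._heap:
            return None
        _, _, packet = heapq.heappop(self._heap)
        return packet
```

A dual-queue scheme such as ADRS refines this by keeping two priority levels and moving shorter-deadline packets into the higher-priority queue; the heap above collapses that idea into a single deadline-ordered structure.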
Software requirements
System modeling refers to the act of representing an actual system in a simplified way. System modeling is extremely important in design and development, since it gives an idea of how the system would perform if actually implemented. Traditionally, there are two modeling approaches: the analytical approach and the simulation approach.
Analytical Approach
The general concept of the analytical modeling approach is to first describe a system mathematically with the help of applied mathematical tools such as queuing and probability theory, and then apply numerical methods to gain insight from the developed mathematical model. When the system is simple and relatively small, analytical modeling is preferable to simulation: the model tends to be mathematically tractable, and its numerical solutions require only lightweight computational effort.
If properly employed, analytical modeling can be cost-effective and can provide an abstract view of the components interacting with one another in the system. However, if many simplifying assumptions on the system are made during the modeling process, analytical models may not give an accurate representation of the real system.
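As a concrete instance of the analytical approach, a single node's transmit queue can be approximated, under strong simplifying assumptions (Poisson arrivals, exponentially distributed service times, one server), as an M/M/1 queue; the standard closed-form results then yield mean queue length and delay with almost no computation. The function below is an illustrative sketch, not part of the proposed scheme.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Closed-form M/M/1 results: utilization rho, mean number in the
    system L, and mean time in the system W (Little's law: L = lambda*W)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = arrival_rate / service_rate        # server utilization
    L = rho / (1 - rho)                      # mean packets in system
    W = 1 / (service_rate - arrival_rate)    # mean delay per packet
    return rho, L, W
```

For example, with 5 packets/s arriving at a node that serves 10 packets/s, utilization is 0.5, one packet is in the system on average, and the mean delay is 0.2 s. The simplifying assumptions are exactly the kind that, as noted above, can make an analytical model diverge from the real system.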
Simulation Approach
Simulation is a process of designing a model of a real system and conducting experiments with this model for the purpose of understanding the behavior of the system and/or evaluating various strategies for the operation of the system.
Simulation is widely used in system modeling for applications ranging from engineering research and business analysis to manufacturing planning and biological science experimentation. Compared to analytical modeling, simulation usually requires less abstraction in the model (i.e., fewer simplifying assumptions), since almost every detail of the system's specification can be put into the simulation model to best describe the actual system. When the system is rather large and complex, a straightforward mathematical formulation may not be feasible; in this case, the simulation approach is usually preferred to the analytical approach.
As with analytical modeling, simulation modeling may leave out some details, since too many details can result in an unmanageable simulation and substantial computational effort. It is important to consider carefully the measure under study and not to include irrelevant detail in the simulation.
Introduction to NS2
Network Simulator version 2, widely known as NS2, is an event-driven simulation tool that has proved useful in studying the dynamic nature of communication networks [49]. Simulation of wired as well as wireless network functions and protocols (e.g., routing algorithms, TCP, UDP) can be done using NS2. In general, NS2 provides users with a way of specifying such network protocols and simulating their corresponding behavior.
Due to its flexibility and modular nature, NS2 has gained constant popularity in the networking research community since its birth in 1989.
NS is an object-oriented discrete event simulator: it maintains a list of events and executes them one after another in a single thread of control, so there is no locking and there are no race conditions. The back end is a C++ event scheduler, which is fast to run and offers fine-grained control; the front end is OTcl, which is fast to write and change. Creating scenarios and writing extensions to the C++ protocols is also straightforward.
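The event-list-and-single-thread design just described can be illustrated with a toy event loop. This is a sketch of the general discrete-event pattern, not NS2's actual C++/OTcl API; all names here are assumptions.

```python
import heapq

class EventScheduler:
    """Toy single-threaded discrete-event loop in the style NS2 uses:
    events sit in a time-ordered list and run one after another, so no
    locking is needed."""

    def __init__(self):
        self._events = []
        self._seq = 0   # insertion order breaks ties at equal times
        self.now = 0.0  # simulated clock

    def schedule(self, time, handler):
        heapq.heappush(self._events, (time, self._seq, handler))
        self._seq += 1

    def run(self):
        # Execute events strictly in timestamp order until none remain.
        while self._events:
            self.now, _, handler = heapq.heappop(self._events)
            handler(self.now)  # handlers may schedule further events
```

Because only one handler runs at a time, handlers can freely read and schedule events without synchronization, which is the "no locking or race conditions" property noted above.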
System implementation is the stage of the project where the theoretical design is turned into a working system. The performance and reliability of the system were tested and it gained acceptance; the system was implemented successfully. Implementation is the process of putting a new system into operation.
Proper implementation is essential to provide a reliable system that meets the organization's requirements. During the implementation stage a live demonstration was undertaken in front of end-users.
Implementation is a stage of project when the system design is turned into a working system. The stage consists of the following steps.
• Testing the developed program with sample data.
• Detection and correction of internal errors.
• Testing the system to meet the user requirements.
• Feeding in real-time data and retesting.
• Making necessary changes as requested by the user.
Black box testing, also called behavioral testing, focuses on the functional requirements of the software: it enables the software engineer to derive sets of input conditions that fully exercise all functional requirements of a program. Our project applies this functional testing of what input is given and what output should be obtained.
System testing is black-box-type testing based on the overall requirements specification; it covers all combined parts of a system. The system testing done here checks the project against all the peripherals used.
Performance testing is a term often used interchangeably with 'stress' and 'load' testing. Ideally, performance testing is defined in the requirements documentation or in QA or test plans.
Black box testing attempts to find errors in categories such as incorrect or missing functions, interface errors, errors in data structures or external database access, behavior or performance errors, and initialization and termination errors.
White box testing, sometimes called glass box testing, is a test-case design method that uses the control structure of the procedural design to derive test cases. Using white box testing methods, the software engineer can derive test cases that guarantee that all independent paths within a module have been exercised at least once, exercise all logical decisions on their true and false sides, execute all loops at their boundaries and within their operational bounds, and exercise internal data structures. Testing was conducted on the path discovery module, shown in the following screenshot, and it worked perfectly.
5.3.1 Application Considered
In this case, we consider that both users are looking to communicate. For communication, the main factor considered here is better connectivity through the network region, rather than the cost of the service, bandwidth, etc. According to the parameter values of the different kinds of networks, net C provides the better-connected network by the distance function value. So both users connect to net C for communication, as shown in Fig 5.2.
Fig 5.2. Network selection for application.
Fig 5.4. Data collection at the user.
Unit testing is the most 'micro' scale of testing, exercising particular functions or code modules. Typically it is done by the programmer and not by a tester, as it requires detailed knowledge of the internal program design and code. It is not always easily done unless the application has a well-designed architecture with tight code, and it may require developing test modules or test harnesses. We successfully tested all the units in the program.
We also verified each node's position, as shown in the following picture.
The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.
Verification refers to the set of activities that ensure that software correctly implements a specific function. Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
Verification and validation encompass a wide array of SQA activities, including formal technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility study, documentation review, database review, algorithm analysis, development testing, qualification testing, and installation testing.
Test-case conclusion
In our project we tested the program with different types of testing; in the course of testing we found and fixed many errors and verified the project's output. After debugging we obtained the correct result, meeting our main objective. Owing to time constraints we have not implemented or tested the scheme in real time; we will do so in future work.
