CHAPTER ONE
1.0 QUALITY OF SERVICE (QoS)
Quality of Service (QoS) refers to the assurance of acceptably low delays and packet losses for particular traffic or application types. It is defined as the provision of service differentiation as well as performance assurance for varied internet applications, offering different treatment to applications depending on their urgency and bandwidth requirements. It effectively seeks to address bandwidth limitations, delay, delay variation (jitter) and packet loss. Bandwidth represents a foundational network resource, whose allocation heavily determines each application's maximum throughput as well as its end-to-end delay, Badard, Diascorn, Boulmier, Vicard, Renard, & Dimassi (2001). Service differentiation may be applied on an aggregate or a per-flow basis, where a flow is identified by a five-tuple, i.e. the source IP address, destination IP address, source port, destination port, and transport protocol (TCP or UDP).
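As a minimal illustration of per-flow classification, the Python sketch below builds a five-tuple key for each packet and keeps per-flow state; the field names and packet representation are assumptions made for illustration and are not drawn from the cited sources.

    from collections import namedtuple, defaultdict

    # Five-tuple that identifies a flow (field names are illustrative assumptions).
    FlowKey = namedtuple("FlowKey", "src_ip dst_ip src_port dst_port protocol")

    def flow_key(pkt):
        """Extract the five-tuple from a packet represented as a dict."""
        return FlowKey(pkt["src_ip"], pkt["dst_ip"],
                       pkt["src_port"], pkt["dst_port"], pkt["protocol"])

    # Per-flow byte counters: fine-grained state, one entry per active flow.
    flow_bytes = defaultdict(int)

    def account(pkt):
        flow_bytes[flow_key(pkt)] += pkt["length"]

    account({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
             "src_port": 5004, "dst_port": 6000,
             "protocol": "UDP", "length": 1200})

Because this table grows with the number of active flows, it is exactly the state that becomes costly at scale, which motivates the class-based approach discussed next.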
While fine granularity helps protect well-behaved traffic from applications that misbehave, it scales poorly in networks that carry thousands of distinct flows, Choi, Fershtman, & Gandal (2009). This explains the increasing popularity of coarse-grained classification, which groups traffic into a small number of classes that are then accorded different treatments. The approach assumes that packets in the same class share the same QoS priority regardless of the flows they belong to. Aggregate classification of traffic attempts to combine the best of both approaches, Zhao, Olshefski, & Schulzrinne (2009). Zhao, Olshefski, & Schulzrinne (2009) not only explain QoS but also argue for its need and the considerations it raises on the internet. QoS differentiates internet services to assure performance through limited delays and losses for traffic or applications, and may be delivered either by restricting traffic competition or by anticipating traffic and engineering networks to limit QoS violations.
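A coarse-grained classifier can be sketched as a mapping from packet attributes to a small, fixed set of classes rather than to individual flows; the class names, port ranges and rules below are illustrative assumptions, not a standard.

    # Illustrative class-based (coarse-grained) classifier: every packet is mapped
    # to one of a few traffic classes, so router state is bounded by the number of
    # classes rather than the number of flows.
    CLASSES = ("voice", "interactive", "best_effort")

    def classify(pkt):
        """Return a traffic class for a packet (rules are assumed for illustration)."""
        if pkt["protocol"] == "UDP" and pkt["dst_port"] in range(16384, 32768):
            return "voice"             # e.g. RTP media in a commonly used port range
        if pkt["protocol"] == "TCP" and pkt["dst_port"] in (22, 443):
            return "interactive"       # e.g. SSH or interactive web traffic
        return "best_effort"           # everything else shares one aggregate class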
Traffic is partitioned into packets of varied priority so that congestion in any category results only in delays within that particular division, with overflowing packets possibly being discarded. This also spreads traffic to improve network utilization despite the varied QoS requirements of applications, and further differentiation is likely in the future. The QoS assurance of performance does not, however, by itself provide increased scalability. To achieve QoS, differentiated services (DiffServ) and integrated services (IntServ) are commonly used, offering guaranteed and controlled-load services, Chiu, Huang, Lo, Hwang, & Shieh (2003). QoS mechanisms fall into control path and data path mechanisms; the data path mechanisms direct the actions routers take on individual packets so that they receive the different services determined by the differentiation mechanisms.
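To make the data path behaviour concrete, the following sketch (an assumption for illustration, not a description of any particular router) keeps one bounded FIFO per traffic class, drops packets that overflow their own class queue, and dequeues in strict priority order so that congestion in one class delays only that class.

    from collections import deque

    # One bounded queue per class, listed from highest to lowest priority.
    PRIORITY_ORDER = ("voice", "interactive", "best_effort")
    QUEUE_LIMIT = 64    # assumed per-class buffer size, in packets
    queues = {cls: deque() for cls in PRIORITY_ORDER}

    def enqueue(pkt, cls):
        """Tail drop: an overflowing class discards only its own packets."""
        q = queues[cls]
        if len(q) >= QUEUE_LIMIT:
            return False          # packet dropped; other classes are unaffected
        q.append(pkt)
        return True

    def dequeue():
        """Strict-priority service: always serve the highest non-empty class."""
        for cls in PRIORITY_ORDER:
            if queues[cls]:
                return queues[cls].popleft()
        return None

Strict priority on its own can starve the lower classes; the LLQ/CBWFQ combination discussed in section 1.1 addresses this by bounding the priority class and sharing the remaining bandwidth by weight.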
In addition, controlling packet losses is crucial to ensuring the quality of services, and this depends heavily on the effectiveness of queue management rather than on TCP alone. This includes managing buffers to reduce lock-out and persistently full queues through Random Early Detection (RED), which tracks an exponentially weighted (decaying) average of the queue size, Cho & Okamura (2010). Other QoS aspects include scheduling, control path mechanisms, policy control, traffic engineering and end-host support. Scalability is perhaps the most important consideration in QoS and it affects both control and data paths; while aggregation largely solves the problem, it weakens performance guarantees, control and monitoring (SANS, 2009).
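A minimal sketch of the RED idea referred to above is given below; the thresholds and averaging weight are illustrative assumptions, and real implementations add refinements such as counting packets between drops.

    import random

    # Illustrative RED parameters (assumed values, not taken from the cited work).
    MIN_TH, MAX_TH = 5, 15      # average-queue thresholds, in packets
    MAX_P = 0.1                 # drop probability reached at MAX_TH
    WEIGHT = 0.002              # EWMA weight for the decaying average

    avg_queue = 0.0

    def red_enqueue(queue, pkt, limit=30):
        """Probabilistically drop early as the average queue length grows."""
        global avg_queue
        avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * len(queue)
        if len(queue) >= limit or avg_queue >= MAX_TH:
            return False                      # forced drop: queue (or average) is full
        if avg_queue >= MIN_TH:
            p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
            if random.random() < p:
                return False                  # early random drop signals senders to back off
        queue.append(pkt)
        return True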
1.1 Converged Network QoS
Service needs are specified in multiple forms, with traffic specifications given in a service-level agreement or on a per-flow basis. These in turn facilitate QoS, which can be accomplished through various strategies. These include the efficient allocation of bandwidth to the various service classes so that congestion does not affect the interactive classes. In addition, QoS can be achieved by acquiring networks that support QoS for varied applications and that take an active role in the process of allocating resources, Xiao, Chen, & Li (2010). However, these strategies are limited by the fact that existing equipment rarely supports standardized or hard resource-allocation strategies, and controlling TCP traffic on the internet has its own drawbacks. There are multiple ways of addressing these challenges. The first approach is better management of the available bandwidth to improve efficiency, for example by upgrading links and employing efficient queuing mechanisms that prioritize crucial packets. In such queuing, urgent interactive traffic (such as RTP voice) is placed in a low-latency queue (LLQ), while less urgent TCP traffic is handled by class-based weighted fair queuing (CBWFQ), so that the most urgent traffic is served first, Hentschel, Reinder, & Yiirgwei (2002).
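As an illustration of the LLQ/CBWFQ combination described above, the sketch below (with assumed class names, weights and byte budgets, not a vendor implementation) serves a strict-priority voice queue first, subject to a simple per-round budget so it cannot starve the other classes, and then shares the remaining capacity among the data classes in proportion to configured weights.

    from collections import deque

    llq = deque()                                     # strict-priority (e.g. voice) queue
    data_queues = {"interactive": deque(), "bulk": deque()}
    weights = {"interactive": 3, "bulk": 1}           # assumed CBWFQ-style bandwidth shares
    QUANTUM = 1500                                    # bytes of credit per weight unit per round
    LLQ_BUDGET = 3000                                 # bytes the priority queue may send per round

    def service_round(send):
        """Run one scheduling round, calling send(pkt) for each transmitted packet."""
        # 1. Low-latency queue first, but policed so it cannot monopolize the link.
        budget = LLQ_BUDGET
        while llq and llq[0]["length"] <= budget:
            pkt = llq.popleft()
            budget -= pkt["length"]
            send(pkt)
        # 2. Remaining classes share bandwidth roughly in proportion to their weights.
        for cls, q in data_queues.items():
            credit = weights[cls] * QUANTUM
            while q and q[0]["length"] <= credit:
                pkt = q.popleft()
                credit -= pkt["length"]
                send(pkt)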
However, there are multiple types of delay, especially before data packets are prioritized. These include processing, queuing, serialization and propagation delays. There are other ways of reducing delay, but their application depends on identifying the traffic types, classifying them and defining QoS policies for the individual classes, Ahmad (2001).
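The per-hop delay components named above can be estimated with simple formulas; the link speed, packet size, distance and waiting times below are assumed example values used only to show the arithmetic.

    # Rough per-hop delay estimate (assumed example values).
    packet_bits = 1500 * 8          # 1500-byte packet
    link_rate_bps = 10e6            # 10 Mbit/s access link
    distance_m = 2_000_000          # 2000 km path segment
    signal_speed = 2e8              # roughly 2/3 of the speed of light in fibre/copper
    processing_s = 50e-6            # assumed router processing time
    queuing_s = 2e-3                # assumed time spent waiting behind other packets

    serialization_s = packet_bits / link_rate_bps    # time to clock the bits onto the link
    propagation_s = distance_m / signal_speed        # time for the signal to cross the distance

    total_ms = (processing_s + queuing_s + serialization_s + propagation_s) * 1e3
    print(f"serialization {serialization_s*1e3:.2f} ms, "
          f"propagation {propagation_s*1e3:.2f} ms, total {total_ms:.2f} ms")

With these assumed numbers, serialization contributes 1.2 ms and propagation 10 ms, so the fixed components dominate and queuing is the main term that QoS policies can actually influence.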