Section A
Best-effort packet forwarding means the network will attempt to forward each packet to its destination, but successive packets may experience different delays, and a packet may even be lost entirely.
Because this method transmits data packets without requiring an acknowledgement (ACK) from the receiver for each delivered packet, we have selected this option.
When designing a receiver for streaming audio or video content, playout buffers are usually implemented to ensure the regular availability of audio or video samples for decoding at the receiver's end.
The playout buffer is responsible for delivering an uninterrupted stream of media samples to the decoder.
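As an illustration of how such a buffer behaves, the sketch below holds timestamped packets until a fixed playout delay has elapsed and then releases them in order; the delay value, timestamp fields and overall structure are assumptions for illustration, not a specific decoder implementation.

```python
import heapq

class PlayoutBuffer:
    """Minimal playout (jitter) buffer sketch.

    Packets arriving with variable delay are held until their scheduled
    playout time (arrival of the first packet plus a fixed playout delay),
    so the decoder sees samples at a regular rate.  The fixed delay and
    the timestamp field are assumptions for illustration.
    """

    def __init__(self, playout_delay):
        self.playout_delay = playout_delay   # seconds of buffering
        self.heap = []                       # (media timestamp, payload)
        self.base_time = None                # wall-clock time of first packet

    def push(self, arrival_time, media_timestamp, payload):
        if self.base_time is None:
            self.base_time = arrival_time
        heapq.heappush(self.heap, (media_timestamp, payload))

    def pop_ready(self, now):
        """Return payloads whose playout time has been reached, in order."""
        ready = []
        while self.heap:
            ts, payload = self.heap[0]
            if now >= self.base_time + self.playout_delay + ts:
                heapq.heappop(self.heap)
                ready.append(payload)
            else:
                break
        return ready
```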
The most consistent and positive improvement in service for the customers is achieved by placing a transparent web cache right next to the content server.
The most suitable option is the use of priority queuing and scheduling to guarantee no packet loss in the lower-priority queues.
The TCP three-way handshake takes 1.5 × RTT before I can begin sending the first bytes of application-layer data to the web server.
- Maximum window size = 8000 bytes
- Maximum rate of transmission = 460 kbps
- Link speed = 1 MB/s (1,048,576 bytes/s)
Thus the transmission time for a single 8000-byte data packet
= 8000 / 1,048,576 = 0.00762939 s ≈ 7.63 ms.
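As a quick check, the figure above can be reproduced with a short calculation; the assumption here is that the 1 MB/s link speed is interpreted as 1,048,576 bytes per second.

```python
# Sketch: reproduce the single-packet transmission time above.
# Assumption: a "1 MB/s" link speed is taken as 1,048,576 bytes per second.
packet_size_bytes = 8000
link_speed_bytes_per_s = 1_048_576

tx_time_s = packet_size_bytes / link_speed_bytes_per_s
print(f"Transmission time: {tx_time_s:.8f} s = {tx_time_s * 1000:.2f} ms")
# -> Transmission time: 0.00762939 s = 7.63 ms
```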
Generally, some applications (VoIP, PWE, video streaming, ATM/TDM circuit emulation) require data packets to arrive at the destination within specific time bounds and at regular intervals. In practice, however, adverse network conditions (retransmissions, traffic congestion, translation between different network protocols and architectures) cause packets to arrive at the destination in clumps.
In this scenario, the receiver must be able to rearrange these clumped packets back into the application's normal stream rate, based on their arrival times, because most applications cannot cope with packets delivered at a widely varying rate. With numerous streams and applications sharing a packet stream, the aggregate arrival pattern may itself turn into bursts of packets. Burstiness in a network can therefore be described as a long sequence of closely spaced packets followed by unusually large inter-packet gaps after the burst. The inter-packet arrival time is defined as the interval between the last bit of one packet and the arrival of the first bit of the next.
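A minimal sketch of how inter-packet gaps could be computed from arrival timestamps and a burst flagged when consecutive gaps fall below a threshold; the timestamps and the 1 ms threshold are hypothetical values chosen purely for illustration.

```python
# Sketch: compute inter-packet gaps and group packets into bursts.
arrivals = [0.000, 0.001, 0.0015, 0.002, 0.050, 0.051, 0.120]  # seconds

gaps = [t2 - t1 for t1, t2 in zip(arrivals, arrivals[1:])]

BURST_GAP = 0.001   # packets closer than 1 ms are treated as one burst
bursts, current = [], [arrivals[0]]
for t, gap in zip(arrivals[1:], gaps):
    if gap <= BURST_GAP:
        current.append(t)
    else:
        bursts.append(current)
        current = [t]
bursts.append(current)

print("gaps:", [round(g, 4) for g in gaps])
print("burst sizes:", [len(b) for b in bursts])
```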
To mitigate this burstiness in IP traffic, TCP can play an important role. Using the TCP pacing technique, the burstiness of IP packet traffic can be reduced, easing the effect of under-buffered and under-utilized switches and routers on the flow throughput of the network.
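As an illustration of the pacing idea, the following sketch spaces packet transmissions evenly across an RTT at a rate of cwnd/RTT instead of emitting the whole window back-to-back; the window, RTT and send function are assumptions, not a real TCP stack.

```python
import time

def paced_send(packets, cwnd_packets, rtt_s, send):
    """Sketch of TCP pacing: spread a window of packets evenly over one RTT.

    Instead of sending the whole congestion window as one burst, each
    packet is delayed by rtt/cwnd, keeping the sending rate close to
    cwnd/RTT.  `send` is a caller-supplied function (an assumption here),
    e.g. a socket write in a real implementation.
    """
    interval = rtt_s / cwnd_packets
    for pkt in packets:
        send(pkt)
        time.sleep(interval)   # pacing gap between consecutive packets

# Hypothetical usage: 10 packets, cwnd of 10 packets, 100 ms RTT.
if __name__ == "__main__":
    paced_send([b"x" * 1460] * 10, cwnd_packets=10, rtt_s=0.1,
               send=lambda pkt: print(f"sent {len(pkt)} bytes at {time.time():.3f}"))
```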
QA1
The aggressive recovery mechanisms of the TCP protocol work well in wired networks, where packets are lost mainly because of congestion. In wireless networks, by contrast, packets are lost because of node mobility (handover) and transmission errors. TCP's recovery mechanism reacts to these erroneous events by invoking its congestion-avoidance algorithm, which throttles transmission at the sender and so degrades the throughput of the network. TCP presumes that the reason for packet loss is network congestion, an assumption that is valid for wired infrastructure built on reliable links. In a wireless network, however, losses frequently result from signal-level decay or from movement of a node within the network, so the congestion-control part of the aggressive recovery mechanism needlessly slows the network down.
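A minimal sketch of the reaction described above, assuming a simplified AIMD-style congestion window rather than TCP's full algorithms: on any loss signal the window is halved, regardless of whether the loss came from congestion or from a wireless transmission error.

```python
def react_to_loss(cwnd, ssthresh, loss_detected):
    """Simplified AIMD-style reaction (illustrative, not a full TCP stack).

    Any loss, whether caused by congestion or by a wireless bit error,
    halves the congestion window, which is why random wireless losses
    drag down throughput.
    """
    if loss_detected:
        ssthresh = max(cwnd // 2, 2)
        cwnd = ssthresh            # multiplicative decrease
    else:
        cwnd += 1                  # additive increase per RTT
    return cwnd, ssthresh

# Hypothetical trace: a wireless bit error every 4th RTT.
cwnd, ssthresh = 10, 64
for rtt in range(8):
    cwnd, ssthresh = react_to_loss(cwnd, ssthresh, loss_detected=(rtt % 4 == 3))
    print(f"RTT {rtt}: cwnd={cwnd}")
```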
- Traffic policing and shaping are mainly used to limit the output rate of data packets in a network.
Policing propagates packet bursts: when the traffic rate reaches the predetermined, configured maximum rate, the excess traffic is dropped to protect the network from overload.
Traffic shaping, on the other hand, holds excess packets in a queue. By maintaining this queue, shaping schedules the extra packets for later transmission when the traffic load is comparatively low, which produces a smoothed packet output rate for the network.
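The contrast can be sketched with a token bucket: a policer drops packets when the tokens run out, whereas a shaper queues them for later transmission; the rate and bucket size below are hypothetical illustration values.

```python
from collections import deque

class TokenBucket:
    """Token-bucket sketch contrasting policing and shaping.

    Tokens accumulate at `rate` bytes per second up to `burst` bytes.
    A policer drops a packet when tokens are insufficient; a shaper
    queues it for later transmission instead.
    """

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0
        self.queue = deque()          # used only in shaping mode

    def _refill(self, now):
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def police(self, now, size):
        """Return True if the packet conforms, False if it is dropped."""
        self._refill(now)
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False                  # excess traffic is dropped

    def shape(self, now, size):
        """Queue non-conforming packets instead of dropping them."""
        self._refill(now)
        if self.tokens >= size and not self.queue:
            self.tokens -= size
            return "sent"
        self.queue.append(size)       # delayed until the traffic load falls
        return "queued"
```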
When round-robin scheduling is used, the different data flows, one per queue, are identified by their source and destination addresses. The scheduler gives every queue a turn to send its packets on the shared medium or channel in a periodic, cyclic manner. This technique is time-efficient and helps to use the network resources optimally: if one of the queues runs out of packets, the next queue takes its turn, which leads to proper utilization of the data-link resources.
In priority scheduling, packets are ordered based on the priority markings detected at the different network nodes. Priority scheduling can be grouped into two sorts: preemptive and non-preemptive. When a packet arrives in the scheduler's ready queue, its priority is compared with that of the packets currently being transmitted. The scheduler dynamically switches among the four queues based on the priority of newly arrived packets: when two queued packets have different priorities, the higher-priority one is served first.
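A minimal sketch of the two disciplines, assuming four queues and per-packet priority tags (both hypothetical): round-robin visits the queues cyclically, while strict priority always serves the highest-priority non-empty queue first.

```python
from collections import deque

# Four queues, index 0 = highest priority (an assumption for illustration).
queues = [deque() for _ in range(4)]
_next_queue = 0

def enqueue(priority, packet):
    queues[priority].append(packet)

def round_robin_dequeue():
    """Visit the queues cyclically; skip empty ones."""
    global _next_queue
    for _ in range(len(queues)):
        q = queues[_next_queue]
        _next_queue = (_next_queue + 1) % len(queues)
        if q:
            return q.popleft()
    return None

def priority_dequeue():
    """Always serve the highest-priority (lowest index) non-empty queue."""
    for q in queues:
        if q:
            return q.popleft()
    return None
```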
QA2
Several factors affect DSL performance negatively, such as crosstalk and signal attenuation. Crosstalk is a disturbance in a particular communication circuit, caused mainly by the electric or magnetic fields of other telecommunication signals near the DSL pair. This electromagnetic disturbance can severely degrade the quality of data transmission. There are two main types of crosstalk: NEXT (Near-End Crosstalk) and FEXT (Far-End Crosstalk). Near-end crosstalk occurs when a strong transmitter couples into a nearby DSL receiver, and it is the factor that degrades performance the most. Far-end crosstalk, on the other hand, results from interference in the signals at the receiving end of the digital subscriber line. Other DSL lines in the same cable bundle may also cause crosstalk on the DSL transmission.
Attenuation, in turn, depends on the length of the DSL loop and on the data rate. The degradation (loss of amplitude) of the data signal in the transmission channel is denoted as signal attenuation; as the length of the DSL loop increases, the data stream suffers increasingly from attenuation, and the achievable data rate drops with length.
Yes, it is possible to use the same pair for ADSL and a regular phone. This is done with frequency-division multiplexing: the ADSL signal occupies the higher frequencies, from several tens of kHz up to around 1 MHz, whereas the telephone uses the lower band from 0 kHz to about 3.5 kHz. The two bands can be separated with low-pass and high-pass filters.
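A minimal sketch of that splitter idea, assuming a hypothetical sampled line signal: a low-pass filter recovers the voice band and a high-pass filter passes the ADSL band. The sample rate, tone frequencies and cut-off values are illustrative assumptions, not actual splitter specifications.

```python
import numpy as np
from scipy.signal import butter, lfilter

# Sketch of an ADSL/POTS splitter: one copper pair carries both services,
# separated by frequency.
fs = 100_000                                # 100 kHz sampling of the line
t = np.arange(0, 0.01, 1 / fs)

voice = np.sin(2 * np.pi * 1_000 * t)       # 1 kHz tone in the voice band
adsl = np.sin(2 * np.pi * 30_000 * t)       # 30 kHz tone in the ADSL band
line = voice + adsl                         # combined signal on the pair

b_lo, a_lo = butter(4, 3_400, btype="low", fs=fs)    # POTS side: low-pass
b_hi, a_hi = butter(4, 25_000, btype="high", fs=fs)  # ADSL side: high-pass

to_phone = lfilter(b_lo, a_lo, line)        # voice band recovered
to_modem = lfilter(b_hi, a_hi, line)        # ADSL band recovered
```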
Zipf's law is a statistical law developed from observing the behavior of complex systems of very different natures and functionalities. It describes the relationship between the frequency of occurrence of a result or event and its rank in the total set of results or events, when the events are ranked according to their frequency of occurrence.
Zipf's law is also applicable to the effective use of network resources when caching the multimedia objects most frequently requested and accessed by users. Advanced caching techniques can help manage the distribution of both streaming and non-streaming data objects, and effective use of Zipf's law can support next-generation multimedia caching systems in serving user requests more efficiently.
QA3
Let N1 be the set of data objects requested by a certain set of users. Zipf's law helps to estimate the number of accesses to each of these objects based on its popularity rank. More precisely, the Zipf function quantifies the probability that an access is made to a given object, which allows better prediction of which objects to cache and therefore faster delivery of content and services to the requesting client.
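A minimal sketch of that calculation, assuming a hypothetical catalogue of 1000 objects and a Zipf exponent of 1.0: it computes each object's access probability and the hit ratio achieved by caching only the top-k most popular objects.

```python
# Sketch: Zipf access probabilities and the hit ratio of a top-k cache.
# Catalogue size, exponent and cache size are hypothetical values.
num_objects = 1000   # number of distinct objects in the set
alpha = 1.0          # Zipf exponent
cache_size = 100     # cache only the 100 most popular objects

# P(rank i) = (1 / i**alpha) / sum over all ranks
weights = [1.0 / (i ** alpha) for i in range(1, num_objects + 1)]
total = sum(weights)
prob = [w / total for w in weights]

hit_ratio = sum(prob[:cache_size])
print(f"P(rank 1) = {prob[0]:.4f}")
print(f"Hit ratio caching top {cache_size} of {num_objects}: {hit_ratio:.2%}")
# With alpha = 1, caching the top 10% of objects already captures the
# majority of requests, which is why Zipf-aware caching is effective.
```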
Both replication and caching are used to temporarily store copies of the information requested by users. They also help improve the quality of service for web clients by delivering content with reduced latency and higher effective bandwidth.
By definition, replication of content is the process of propagating changes from one database server to other databases in the same network. Caching, on the other hand, is the technique of prefetching frequently accessed or requested content and storing it on servers close to the applications, so that users receive the requested content or service faster.
Caching helps reduce network latency by bringing the requested content or service to a location or server closer to the users. It is a reactive process: a data object is cached at the nearby server only when a user actually requests that data or content. It also helps reduce data traffic, since content is stored on the cache servers only if it has been requested. Because of this reactive nature, caching suffers from consistency issues, and it may also raise reliability concerns: caches are usually placed at the entry points of a network, so a cache failure can disrupt the whole network.
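The reactive behaviour can be sketched with a tiny LRU cache: an object enters the cache only after a user requests it, and a miss falls back to the origin server. The fetch function and capacity below are hypothetical.

```python
from collections import OrderedDict

class ReactiveCache:
    """Tiny LRU cache sketch illustrating reactive caching.

    An object is stored only after a client has requested it; on a miss
    the request falls through to the origin server.  `fetch_from_origin`
    is a caller-supplied function (an assumption for illustration).
    """

    def __init__(self, capacity, fetch_from_origin):
        self.capacity = capacity
        self.fetch = fetch_from_origin
        self.store = OrderedDict()

    def get(self, key):
        if key in self.store:                  # cache hit: served locally
            self.store.move_to_end(key)
            return self.store[key]
        value = self.fetch(key)                # cache miss: go to origin
        self.store[key] = value
        if len(self.store) > self.capacity:    # evict least recently used
            self.store.popitem(last=False)
        return value

# Hypothetical usage with a stand-in origin fetch.
cache = ReactiveCache(capacity=2, fetch_from_origin=lambda k: f"content for {k}")
print(cache.get("/video/1"))   # miss -> origin
print(cache.get("/video/1"))   # hit  -> cache
```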
The mirroring/replication method, by contrast, knows about an object and exactly when it changes, and pushes the required content immediately towards the requesting users. Because content and services are pushed as soon as they change, this technique ensures content freshness, and it also has very high fault tolerance: if a web server or database server fails and is unable to process user requests, the requests can be redirected to the origin server to be processed. No such fallback exists with caching, so the failure of a cache server can lead to the breakdown of the whole network path. To provide this robust functionality, replication consumes more disk space than caching, and complex, efficient algorithms are required to balance the request load across the servers. Unlike caching, mirroring may also increase network traffic if multicast is not used sensibly.
A transparent proxy is an intermediate system between a user and a web service or content provider. When a user connects to a service or requests content from a server, the transparent proxy first intercepts and interprets the request before passing it on to the content server. It uses the request to carry out different actions such as authentication, caching of data and redirection to the requested pages or content. These proxy servers are called transparent because the end users are completely unaware of their existence in the architecture or design. The content-hosting servers, on the other hand, recognize that the request traffic is coming from a transparent proxy server and not directly from the user.
These proxy servers can be used as proxy caches without putting much extra load on the content servers. A transparent proxy can create and store copies of frequently requested content, and serving this cached data to users directly from the proxy in turn reduces the workload on the origin servers.
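A minimal sketch of the interception-plus-cache idea, assuming a hypothetical handle_request hook through which client requests are routed; a real transparent proxy intercepts traffic at the network layer, which is out of scope here.

```python
import urllib.request

# Sketch of the proxy-cache behaviour: requests routed through the proxy
# are answered from the local store when possible, otherwise fetched from
# the origin server and cached.  `handle_request` is a hypothetical hook.
_cache = {}

def handle_request(url):
    if url in _cache:
        return _cache[url], "served from proxy cache"
    with urllib.request.urlopen(url) as resp:   # forward to origin server
        body = resp.read()
    _cache[url] = body                          # keep a copy for next time
    return body, "fetched from origin and cached"
```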
Using transparent proxy servers also gives administrators better control over the services and content of the server, while the end users can still interact with and obtain the requested data or services seamlessly through the transparent proxy.