International Journal of Computer Networks and Security, ISSN:2051-6878, Vol.25, Issue.2
Optimization of Data Packet Transmission in a Congested Network

Promise T. K. Akiene1
Computer Science Department, Rivers State Polytechnic, Bori, Rivers State, Nigeria. Email: [email protected]

Ledisi G. Kabari2
Computer Science Department, Rivers State Polytechnic, Bori, Rivers State, Nigeria. Email: [email protected]
ABSTRACT
This paper examines the protocols involved in transmitting data packets from source to destination (sink) and uses a modified FIFO queue system to manage packet loss during transmission in a congested network. The motivation for this research is to develop a system that simulates the control and elimination of data loss during transmission in a congested network. The paper develops a system that provides a means of pinning down the degree of packet loss at any given router or link along a transmission path. Using the prototyping software methodology, software capable of handling data packet loss during transmission was developed in the Python programming language and implemented. From the research work, it was observed that the loss of data packets during transmission depends largely on the protocol in use, congestion along the traffic path, the speed of the sender, and the speed of the receiver's machine. The research work thus successfully addresses the issue of data packet loss during transmission.
Keywords: Packet, Datagram, Packet Switching, Protocol, Congestion Management, Packet Loss, Transmission.
1. INTRODUCTION
Presently, computer networks and the Internet accommodate simultaneous transmission of audio, video, and data traffic, among others. Efficient and reliable data transmission is essential for achieving high performance in a networked computing environment.

It is important to avoid high data loss rates on the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases, this situation can lead to congestion collapse.

When multiple machines are sending traffic on an Ethernet network segment, data packets are transmitted in frames over the entire segment. When two machines send packets at once, the frames hit the network at the same time and collide with each other. This is another cause of network congestion. Within a network that is working correctly, there is a small but manageable number of collisions. As network activity increases, collision rates inevitably increase as more packets hit the network at the same time, affecting the performance, latency, and reliability of Ethernet frame delivery.

Over the last decades, the Transmission Control Protocol (TCP) and its congestion-control mechanisms have been instrumental in controlling packet loss and preventing congestion collapse across the Internet. Optimizing the congestion-control mechanisms used in TCP has been one of the most active areas of research in the past few years. While a number of proposed TCP enhancements have made their way into implementations, TCP connections still experience a high loss rate. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. Congestion in a network may occur when users send data at a rate greater than what network resources can accept; for example, congestion may occur because the switches in a network have a limited buffer memory in which to store packets for processing. Congestion management is the process of controlling congestion by determining the order in which packets are transmitted, based on priorities assigned to those packets. It deals with the creation of queues based on packet classification, and with the scheduling of the packets in each queue for transmission. When no congestion exists (light traffic), packets are transmitted as soon as they arrive. During periods of heavy traffic, packets arrive faster than the interface can transmit them; with congestion-management features, packets accumulating at the interface are queued until the interface is free to transmit them, and are then scheduled for transmission according to their assigned priority and the queuing mechanism configured for the interface. The router determines the order of packet transmission by controlling which packets are placed in which queue and how queues are serviced with respect to each other.

A critical issue in the design of fast packet-switch-based networks is the avoidance of data loss due to congestion. Network performance, good or bad, depends largely on the effective implementation of network protocols. TCP, easily the most widely used protocol in the transport layer on the Internet (supporting, e.g., HTTP, TELNET, and SMTP), plays an integral role in determining overall network performance. Remarkably, TCP has changed very little since its initial design in the early 1980s. A few "tweaks" and "knobs" have been added, but for the most part, the
© RECENT SCIENCE PUBLICATIONS ARCHIVES | September 2015|$25.00 | 27704548| *This article is authorized for use only by Recent Science Journal Authors, Subscribers and Partnering Institutions*
protocol has withstood the test of time. However, there are still a number of performance problems on the Internet, and fine-tuning TCP software continues to be an active area of research. The aim of this paper is to present a novel model for handling loss of packets during data transmission in a congested network. The following objectives are therefore expected to be achieved:
(i) to examine the causes of data packet loss during transmission and develop a program that can be used to trace packets in transmission;
(ii) to develop an application program that simulates packet loss in transmission and the control and elimination of data loss during transmission;
(iii) to provide a means of pinning down the degree of packet loss at any given router or link along a transmission path. This goes a long way toward identifying the router(s) or path where congestion occurs (where source and destination are incommunicado).
2. DATA TRANSMISSION
The issue of packet loss in a congested network is a global one that has attracted the attention not only of Information Technology (IT) professionals but of the wider public. High-speed networks are networks supporting high data rates, such as high-speed LANs and Ethernet[1]. Network congestion is caused by a variety of factors, including network topology, bandwidth, and usage patterns. Faulty hardware, network broadcasts, and network expansion also slow networks. Many of these causes can be difficult to locate, and the problem can be intermittent, making detection even more elusive. A systematic approach to planning, testing, and maintaining network equipment such as routers, hubs, switches, bridges, and cabling is needed to avoid network congestion and keep data flowing smoothly. Policies can also be put in place to limit access to high-bandwidth video streaming and other leisure activities in the workplace to increase speed.
2.1 Packets and Packet-Switching Networks
A packet is a unit of data that is transmitted across a packet-switched network. A packet-switched network is an interconnected set of networks joined by routers or switching routers. The most common packet-switching technology is TCP/IP, and the Internet is the largest packet-switched network. Other packet-switched network technologies include X.25 and IPX/SPX (the original Novell NetWare protocols). This paper focuses on the Internet and TCP/IP packet-switched networks[2]. The advantage of the connectionless packet model is that packets are forwarded independently of one another. Packets are forwarded on the fly by routers, based on the most current best path to a destination. If a link or router fails, packets are quickly diverted along another path. Since routers do not maintain information about virtual circuits, their job is greatly simplified. In contrast, ATM and frame relay networks are connection oriented: a virtual circuit must be established between sender and receiver across the network before packets (cells or frames, respectively) can start to flow. One reason the
Internet has scaled so well is that there is no need to build virtual circuits for the millions of flows that cross the network at any one time; routers simply forward packets along the best path to the destination[2]. However, in the interest of speed and QoS, virtual circuits are being implemented on the Internet using protocols such as MPLS (Multiprotocol Label Switching). Stream Control Transmission Protocol (SCTP) is a newer reliable, message-oriented transport-layer protocol, designed chiefly for recently introduced Internet applications. These applications, such as IUA (ISDN over IP), M2UA and M3UA (telephony signaling), H.248 (media gateway control), H.323 (IP telephony), and SIP (IP telephony), need a more sophisticated service than TCP can provide, and SCTP supplies this enhanced performance and reliability. UDP, TCP, and SCTP are compared briefly in [2].
2.2 Network Access Methods
Some primary network access methods viable within a network setting are CSMA/CD (Carrier Sense Multiple Access/Collision Detection), CSMA/CA (Carrier Sense Multiple Access/Collision Avoidance), and token passing[3]. CSMA is a network access method used on shared network topologies such as Ethernet to control access to the network. In CD (collision detection), stations listen while transmitting and, when a collision is detected, stop and retry after a random back-off interval. In CA (collision avoidance), collisions are avoided because each node signals its intent to transmit before actually doing so; this method is less popular because it requires extra overhead that reduces performance. A token is a special control frame on token ring, token bus, and FDDI (Fiber Distributed Data Interface) networks that determines which station can transmit data on the shared network: the node that holds the token may transmit.
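To illustrate why token passing avoids collisions, the round-robin token mechanism can be sketched as below. The node names, frame labels, and the one-frame-per-token-visit rule are illustrative assumptions, not details of any specific token-ring standard:

```python
def token_ring_schedule(nodes, pending, rounds):
    """Toy token-passing model: the token visits nodes in ring order and
    only the current holder may transmit one pending frame, so no two
    nodes ever transmit at the same time (no collisions)."""
    transmissions = []
    token = 0  # index of the node currently holding the token
    for _ in range(rounds):
        node = nodes[token]
        if pending.get(node):
            # The token holder sends the oldest frame it has queued.
            transmissions.append((node, pending[node].pop(0)))
        token = (token + 1) % len(nodes)  # pass the token to the next node
    return transmissions

nodes = ["A", "B", "C"]
pending = {"A": ["frame1"], "C": ["frame2", "frame3"]}
print(token_ring_schedule(nodes, pending, rounds=6))
# [('A', 'frame1'), ('C', 'frame2'), ('C', 'frame3')]
```

Note that node C must wait for the token to come around again before sending its second frame; access is serialized, which is exactly what eliminates collisions.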
2.3 Packet Loss
Packet loss occurs when one or more packets of data travelling across a computer network fail to reach their destination[4]. Packet loss is one of the three main error types encountered in digital communications, the other two being bit errors and spurious packets caused by noise. Packets can be lost in a network because they may be dropped when a queue in a network node overflows. The amount of packet loss during the steady state is another important property of a congestion-control scheme: the larger the packet loss, the more difficult it is for transport-layer protocols to maintain high bandwidth. The sensitivity to the loss of individual packets, as well as to the frequency and patterns of loss among longer packet sequences, depends strongly on the application itself.
2.4 Congestion
An important issue in a packet-switched network is congestion. Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets the network can handle). Congestion control refers to the mechanisms and techniques used to control the congestion
and keep the load below the capacity. Congestion in a network or internetwork occurs because routers and switches have queues (buffers) that hold packets before and after processing. A router, for example, has an input queue and an output queue for each interface. When a packet arrives at the incoming interface, it undergoes three steps before departing, as shown in Figure 1.
Figure 1. Queues in a router (source: [5]).

1. The packet is put at the end of the input queue while waiting to be checked.
2. Once the packet reaches the front of the queue, the processing module of the router removes it and uses its routing table and the destination address to find the route.
3. The packet is put in the appropriate output queue and waits its turn to be sent.
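The three steps above can be sketched in Python. The routing-table entries and interface names here are hypothetical, chosen only to make the example concrete:

```python
from collections import deque

# Hypothetical routing table: destination network -> outgoing interface.
routing_table = {"10.0.0.0/8": "eth0", "192.168.1.0/24": "eth1"}

def forward(input_queue, output_queues):
    """One forwarding cycle for the front packet of the input queue."""
    packet = input_queue.popleft()                 # step 1: leave the input queue
    interface = routing_table[packet["dst_net"]]   # step 2: route lookup
    output_queues[interface].append(packet)        # step 3: wait in the output queue
    return interface

inq = deque([{"dst_net": "10.0.0.0/8", "payload": b"hello"}])
outq = {"eth0": deque(), "eth1": deque()}
print(forward(inq, outq))  # eth0
```

Each interface has its own output queue, which is why congestion can build at one interface while others stay idle.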
We need to be aware of two issues. First, if the rate of packet arrival is higher than the packet processing rate, the input queues become longer and longer. Second, if the packet departure rate is less than the packet processing rate, the output queues become longer and longer.
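The first issue can be illustrated numerically; the rates below are arbitrary example values:

```python
def queue_lengths(arrival_rate, processing_rate, steps):
    """Input-queue length over time when packets arrive faster than the
    router can process them (rates in packets per time step)."""
    q, history = 0, []
    for _ in range(steps):
        q = max(0, q + arrival_rate - processing_rate)  # backlog cannot go negative
        history.append(q)
    return history

# Arrivals of 12 pkt/step against a 10 pkt/step processor: the backlog
# grows without bound, exactly the "longer and longer" queue described.
print(queue_lengths(12, 10, 5))  # [2, 4, 6, 8, 10]
```

When the two rates are balanced, the queue stays empty; the same arithmetic applies to the output queue when departure lags processing.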
2.5 Congestion Time Signatures
One of the concepts used in this research is the "time signature" of congestion, which refers to the evolution across time of some critical measurement (such as demand for a resource or performance of a network component) before and after each episode of congestion. The time signature heavily influences which responses to congestion are possible or appropriate. Two important aspects of a time signature are the speed of congestion onset and the speed of congestion abatement[6]. Assuming congestion is not currently visible, congestion onset describes the phenomenon in which the demand for the resources of some network component increases over time until performance degradation becomes visible to the user. Congestion abatement describes the phenomenon in which the demand for critical resources decreases over time until performance degradation ends. Abatement applies only to the traffic offered to the resource and is not affected by congestion-management actions taken for that resource.
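A minimal sketch of reading onset and abatement off a demand time series, under the simplifying assumption that congestion is visible exactly when demand exceeds a fixed capacity:

```python
def congestion_onset(demand, capacity):
    """Return (onset index, abatement index) for a demand series: the
    first step where demand exceeds capacity, and the first later step
    where it falls back to or below capacity. (None, None) if congestion
    never occurs."""
    onset = next((i for i, d in enumerate(demand) if d > capacity), None)
    if onset is None:
        return None, None
    abate = next((i for i in range(onset, len(demand)) if demand[i] <= capacity), None)
    return onset, abate

# Demand ramps past a capacity of 10, then abates two steps later:
print(congestion_onset([4, 7, 11, 14, 12, 9, 6], capacity=10))  # (2, 5)
```

The gap between the two indices is the duration of the congestion episode, which is the quantity the time signature characterizes.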
2.6 Congestion Control in Packet-Oriented Networks
In packet-oriented networks, two fundamental types of congestion-control mechanisms were identified by Van Jacobson[7], distinguished by the role the network protocol plays:

(1) The network protocol and the routers play an active role. The network protocol frequently informs the sending sources about the current load conditions in the network. The sources store these load conditions in congestion-control variables, which they use to control congestion. This leads to high bandwidth utilization and increased performance. The advantage of such a mechanism comes with two disadvantages. First, the congestion-control information transferred by the network protocol requires additional overhead; there is a trade-off between this frequency/overhead and the benefit that can be expected from the mechanism. Second, the upper-layer protocols working on top of the network protocol are limited in their flexibility, as they have to evaluate and react to the congestion-control information supplied by the network protocol.

(2) Congestion control is excluded from the network protocol and the routers. The protocols working on top of the network protocol are then responsible for congestion control. In this case, each source has to frequently collect network information, store it in its congestion-control variables, and locally perform congestion control based on the values of these variables. One main problem of this approach is that the network information collected by a sender does not reflect the current network conditions very well. The result is suboptimal congestion control in terms of network utilization and data-stream performance.
Another problem of this approach is that the source of each new data stream entering the network knows nothing about the current load conditions. Such a source therefore starts sending its data very conservatively, at a small sending rate, and probes the current network-load conditions by continuously increasing that rate. After a while, the sender has gradually built up enough knowledge of the current network load to perform more accurate congestion control based on the information collected so far; in the meantime, the congestion control of this data stream may be far from optimal. Besides being fair, efficient, responsive, and stable, a congestion-control technique must be robust against the loss of information, and it must scale well with increases in link speed, distance, and the number of users[8].
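The conservative-start-and-probe behaviour described above can be sketched as a generic additive-increase loop with a multiplicative back-off on loss. This is an illustrative simplification, not the exact algorithm of any particular TCP variant, and the capacity and rate units are arbitrary:

```python
def probe_sending_rate(capacity, start=1, increment=1, steps=20):
    """A sender with no prior knowledge of network load starts at a small
    rate, raises it each round trip, and halves it whenever the rate
    exceeds the (unknown to the sender) bottleneck capacity."""
    rate, trace = start, []
    for _ in range(steps):
        if rate > capacity:
            rate = max(start, rate // 2)  # loss signal: multiplicative back-off
        else:
            rate += increment             # no loss: additive probing
        trace.append(rate)
    return trace

# Against a bottleneck of 10 units, the rate climbs, overshoots, halves,
# and climbs again: the classic sawtooth of probing-based control.
t = probe_sending_rate(10)
print(t[:12])  # [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 5, 6]
```

The overshoot at 11 is unavoidable in this scheme: the sender can only learn the capacity by exceeding it, which is exactly the suboptimality the text points out.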
2.7 Open-Loop Congestion Control
In open-loop congestion control, policies are applied to prevent congestion before it happens. In these mechanisms, congestion control is handled by either the source or the destination[9]. Policies that can prevent congestion include the retransmission policy, window policy, acknowledgment policy, discarding policy, and admission policy.

Retransmission is sometimes unavoidable: if the sender believes that a sent packet has been lost or corrupted, the packet needs to be retransmitted. Retransmission in general may increase congestion in the network; however, a good retransmission policy can prevent it. The retransmission policy and the retransmission timers must be designed to optimize efficiency while preventing congestion. For example, the retransmission policy used by TCP is designed to prevent or alleviate congestion.

The type of window at the sender may also affect congestion. The Selective Repeat window is better than the Go-Back-N window for congestion control. In the Go-Back-N window, when the timer for a packet times out, several packets may be resent, although some may have arrived safe and sound at the receiver; this duplication may make congestion worse. The Selective Repeat window, on the other hand, resends only the specific packets that have been lost or corrupted.

The acknowledgment policy imposed by the receiver may also affect congestion. If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion. Several approaches are used: a receiver may send an acknowledgment only if it has a packet to send or when a special timer expires, or it may acknowledge only every N packets. Acknowledgments are themselves part of the load on a network, so sending fewer acknowledgments imposes less load.
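The effect of the window policy on congestion can be quantified by counting retransmissions after a loss, under the simplifying assumption that exactly one packet in the window is lost:

```python
def retransmissions(window, lost_index, policy):
    """Packets retransmitted after a single loss inside a window.

    Go-Back-N resends the lost packet and every packet after it in the
    window; Selective Repeat resends only the lost packet.
    """
    if policy == "go-back-n":
        return window - lost_index   # lost packet plus all its successors
    if policy == "selective-repeat":
        return 1
    raise ValueError(f"unknown policy: {policy}")

# One loss at position 2 of an 8-packet window:
print(retransmissions(8, 2, "go-back-n"))         # 6
print(retransmissions(8, 2, "selective-repeat"))  # 1
```

Those five extra Go-Back-N retransmissions are pure duplicate load on an already-congested network, which is why the text prefers Selective Repeat for congestion control.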
A good discarding policy at the routers may prevent congestion without harming the integrity of the transmission. For example, in audio transmission, if the policy is to discard less sensitive packets when congestion is likely, the quality of sound is still preserved while congestion is prevented or alleviated. An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual-circuit networks. Switches first check the resource requirements of a flow before admitting it to the network; a router can refuse to establish a virtual-circuit connection if there is congestion in the network or a possibility of future congestion.
2.8 Flow-Control Mechanism Flow controls are necessary because senders and receivers are often unmatched in capacity and processing power. A receiver might not be able to process packets at the same
speed as the sender, and if buffers fill, packets are dropped. The goal of flow-control mechanisms is to prevent dropped packets that must be retransmitted[10]. Flow-control mechanisms involve flow characteristics (reliability, delay, jitter, and bandwidth) and a policy architecture (policy clients, policy servers, and a policy information system).
2.9 Transmission Performance
Technically speaking, network performance can be measured through the evaluation of the following parameters: fairness, latency, jitter, packet loss, throughput, link/channel capacity (bandwidth), link utilization, availability, and reliability.

Fairness measures (metrics) are used in networks to determine whether users or applications are receiving a fair share of system resources. There are several mathematical and conceptual definitions of fairness. Jain's fairness index [11] is stated in equation 1:

f(x1, x2, ..., xn) = (x1 + x2 + ... + xn)^2 / (n * (x1^2 + x2^2 + ... + xn^2))    (1)

This equation rates the fairness of a set of values. The result ranges from 1/n (worst case) to 1 (best case). This metric identifies underutilized channels and is not unduly sensitive to typical network flow patterns.

A common measure of latency is the Round Trip Time (RTT), the time between dispatch of a packet from the source and receipt of an acknowledgement that it has reached its destination. In general, latency is as in equation 2:

Latency = RTT + Wt + Pt    (2)

where Wt is the wait time in queues at routers, and Pt is the packet processing time at the receiving host, including generation of the acknowledgement.

Jitter is a short-term variation in the rate at which packets travel across a network. Jitter can be of two types: delay jitter and latency jitter. Variation in the time it takes for packets to reach their destination is delay jitter; the corresponding variation in the latency is latency jitter[8]. Packet loss is the fraction (usually expressed as a percentage) of packets dispatched to a given destination host during some time interval for which an acknowledgement is never received. Such packets are referred to as lost, as stated in equation 3.
Packet Loss = (Packets dispatched - Packets acknowledged) / Packets dispatched x 100%    (3)

TCP uses the fraction of lost packets to gauge its transmission rate: if the fraction becomes large, the transmitting host reduces the rate at which it dispatches packets. As a rule of thumb, a network with a packet loss of 5-15% is said to be severely congested, and one with a higher rate is likely to be unusable for most practical purposes.
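Equations 1 and 3 can be computed directly. The traffic numbers below are arbitrary examples, and the rule-of-thumb thresholds follow the text above:

```python
def jain_fairness(throughputs):
    """Jain's fairness index (equation 1): (sum x)^2 / (n * sum x^2).
    Ranges from 1/n (one flow takes everything) to 1 (perfect equality)."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

def loss_fraction(dispatched, acknowledged):
    """Packet loss (equation 3) as a percentage of dispatched packets."""
    return 100.0 * (dispatched - acknowledged) / dispatched

def congestion_level(loss_pct):
    """Rule-of-thumb classification from the text: 5-15% loss means
    severe congestion; above that the network is practically unusable."""
    if loss_pct < 5:
        return "normal"
    if loss_pct <= 15:
        return "severely congested"
    return "unusable"

print(jain_fairness([5, 5, 5, 5]))                 # 1.0  (best case)
print(jain_fairness([20, 0, 0, 0]))                # 0.25 (worst case, 1/n)
print(congestion_level(loss_fraction(1000, 920)))  # severely congested
```

The last line shows 80 lost packets out of 1000 dispatched (8% loss) landing inside the 5-15% "severely congested" band.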
Throughput is the rate at which data flows past some measurement point in the network. It can be measured in bits/sec, bytes/sec, or packets/sec. Throughput is measured by counting the traffic over an interval, and care must be taken to choose this interval appropriately: a long interval averages out transient bursts and lulls in the traffic, while a shorter interval records these temporary effects, even if they are not important in the context of the measurement. Link (channel) capacity is the maximum throughput a link can offer for transferring bits reliably. Utilized link capacity is the current traffic load, excluding the traffic from the host; the remaining fraction that can still be used is the available capacity: Available Capacity = Link Capacity - Utilized Capacity. Access rate is the maximum data rate of the link. Link utilization is simply the throughput (as defined above) divided by the access rate, expressed as a percentage: Link Utilization = Throughput / Access Rate. Availability is the fraction of time during a given period when the network is available. Reliability is related to both availability and packet loss: it concerns the frequency with which packets get corrupted (due to network malfunction), as distinct from being lost.
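A small sketch of the throughput and link-utilization calculation, with arbitrary example numbers:

```python
def link_utilization(bits_transferred, interval_s, access_rate_bps):
    """Throughput over a measurement interval, expressed as a percentage
    of the link's access rate (Link Utilization = Throughput / Access Rate)."""
    throughput = bits_transferred / interval_s   # bits per second
    return 100.0 * throughput / access_rate_bps

# 30 Mbit moved in 10 s over a 10 Mbit/s link: 3 Mbit/s, i.e. 30% utilized.
print(link_utilization(30e6, 10, 10e6))  # 30.0
```

The choice of the 10-second interval matters, as the text notes: the same transfer measured over 1-second windows could show bursts near 100% separated by idle periods.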
3. RESEARCH METHOD
We used the prototyping methodology of software engineering. The process involves identifying the basic requirements, such as the input and output information needed; developing an initial prototype, including user interfaces; reviewing it with customers; and revising and enhancing the prototype using feedback from both the specifications and the customers. Benefits of the prototyping methodology include: it provides a proof of concept to attract funding, encourages active participation by users and producers, reduces development cost, increases development speed, identifies problems with the efficacy of earlier design, requirements-analysis, and coding activities, detects faults early enough to avoid project abandonment, and delivers product quality easily.
3.1 Analysis of the Existing System
Packets can be lost in a network because they may be dropped when a queue in a network node overflows. Hence, more arrivals of data packets lead to overflow and network congestion, which leads to packet drop and hence packet loss. Consider equation 4, the queue conservation principle equation:

N(k+1) = N(k) + (Nentry(k) - Nexit(k))    (4)

where N(k+1) is the expected number of data packets at a particular point in time, N(k) is the number of data packets in the receiver, Nentry(k) is the number of data packets entering the queue, and Nexit(k) is the number of data packets leaving the queue.

From this equation, it can be seen that the congestion of data packets can be determined at any point where there is a queue. In the existing system, a data packet congestion detector merely notifies the congestion state of the network, thereby causing packet drop; the dropped packets are lost. The existing system cannot effectively manage congestion or handle data packet loss, and this is the main problem identified in it. The old system does not offer good quality of service (QoS), it permits data/packet loss during transmission in a congested network, and it does not effectively control congestion when congestion arises.
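Equation 4 with a finite buffer can be simulated to show how sustained excess arrivals cause packet drops. The buffer size, arrival and exit rates, and the exit-before-entry ordering within each step are modelling assumptions for illustration:

```python
def simulate_single_queue(capacity, entries, exits):
    """Queue conservation (equation 4) with a finite buffer: any packet
    that would push N past the capacity is dropped, i.e. lost."""
    n, dropped = 0, 0
    for n_entry, n_exit in zip(entries, exits):
        n = max(0, n - n_exit) + n_entry  # equation 4 for one time step
        if n > capacity:
            dropped += n - capacity       # overflow -> packet drop -> packet loss
            n = capacity
    return n, dropped

# Five steps of 10 arrivals vs 6 departures against a 12-packet buffer:
# the queue saturates and every later step loses 4 packets.
print(simulate_single_queue(12, [10] * 5, [6] * 5))  # (12, 14)
```

When exits keep pace with entries, the same function reports zero drops, which is the condition the new system aims to enforce by increasing the effective exit rate.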
3.2 Design of the New System
From equation 4, the queue conservation principle equation, packets can be lost in a network because they may be dropped when a queue in a network node overflows: arrival of packets is greater than the exit of packets, so more arrivals lead to overflow and network congestion, which leads to packet drop and hence packet loss. Considering this equation, increasing the number of packets leaving the queue (Nexit) will increase the flow through the queue. With this, there will be no overload, no congestion, no packet drop, and hence no packet loss. It is therefore essential to determine the congestion of data packets at any point where there is a queue. We introduce a data packet congestion detector/switch that notifies the congestion state of the network, thereby causing diversion to the second and third routes (queues) of the proposed novel model. Hence, we propose equation 5:

N(k+1) = N(k) + (Nentry(k) - SUM[i=1..3] Nexit,i(k))    (5)

Equation 5 is our novel data packet loss control mathematical model, where N(k+1) is the expected number of data packets at a particular point in time, N(k) is the number of data packets in the receiver (the section between the congestion point and the congestion detector), Nentry(k) is the number of data packets entering the queue, Nexit,i(k) is the number of data packets leaving queue i, and the summation runs over the three proposed queues/processors. The pictorial representation of this equation is shown in Figure 3. Arriving packets are received first by a switch, which checks the status of the main queue and decides whether to send them to the main queue or to the auxiliary queues. If the main queue is full, packets are sent to the auxiliary queues. In all cases, an attached processor processes the packets before delivery. The auxiliary queues and processors are used only when the main queue is full.
Figure 3. Modified FIFO Queue: Novel Model for Data Loss Control (High Level Model). [Arriving packets reach a switch that tests whether the main queue is FULL; if not, they enter the main queue and its processor; if full, they are diverted to Auxiliary Queue 1 or Auxiliary Queue 2, each with its own processor, before departing.]
4. DESIGN AND IMPLEMENTATION
The design and implementation of the novel system takes the form and style of a milestone development, in which several networking components (both software and hardware) are included in transit. The programming language used is the Python programming language; other hardware and software requirements used in the course of the design and implementation are listed below. There are two major classes in the system specification. The first identifies the offending location (the misbehaving source); the second is the simulation of a practical model that demonstrates clear evidence of minimizing network congestion, packet drop, and data/packet loss. The first class has several modules, viz. a connection module, a query module, and a report module; these are batched separately and executed only when the program is in the active state. The second class combines threads of processes by way of simulation and displays a model of First-In First-Out queueing, where arriving packets are screened subject to the queue size. This check ensures that when a queue is full, arriving packets are redirected to the auxiliary queues for onward processing.
4.1 Algorithm for the System

    Display the splash screen
    Display the control centre
    Start simulation
    Nentry = f(sizeQ)              ' arrivals as a function of queue size
    Nexit  = f(flowrate)           ' departures as a function of flow rate
    Nexp   = N + (Nentry - Nexit)  ' expected packets, per equation 4
    i = 1
    k = 1
    do
        move data packet k into Main Queue
        k = k + 1
    loop while N(k) < length of Main Queue
    Display "Main Queue full"
    drop packet into Auxiliary Queue 1
    do
        Nexp = N(i) + (Nentry - Nexit(i))
        i = i + 1
    loop until i = 3
    do
        move Nexp into Auxiliary Queue 1
    loop until length of Auxiliary Queue 1 = N(k)
    Display "Auxiliary Queue 1 full"
    drop packet into Auxiliary Queue 2
    if length of Main Queue < Threshold then
        call module move data
    else
        move data packet into Auxiliary Queue 2
    end if
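The algorithm above can be sketched as a runnable Python model. The queue sizes, the first-non-full-queue dispatch rule, and the one-packet-per-step processors are illustrative assumptions, not values fixed by the paper:

```python
from collections import deque

class ModifiedFIFO:
    """Sketch of the modified FIFO model: a switch checks whether the
    main queue is full and, if so, diverts arrivals into the auxiliary
    queues instead of dropping them."""

    def __init__(self, main_size=8, aux_size=8):
        self.queues = [deque(), deque(), deque()]   # main, aux1, aux2
        self.sizes = [main_size, aux_size, aux_size]
        self.dropped = 0

    def arrive(self, packet):
        """The switch: place the packet in the first non-full queue."""
        for q, size in zip(self.queues, self.sizes):
            if len(q) < size:
                q.append(packet)
                return True
        self.dropped += 1   # all three queues full: packet is lost
        return False

    def process(self):
        """Each queue's processor forwards one packet per time step."""
        return [q.popleft() for q in self.queues if q]

fifo = ModifiedFIFO(main_size=2, aux_size=2)
for p in range(7):
    fifo.arrive(p)
print(fifo.dropped)  # 1 (six slots across three queues; the seventh packet is lost)
```

Compared with a single queue of the same main size, which would have dropped five of the seven packets, the auxiliary queues absorb the burst, which is the loss-control effect the model claims.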
4.2 Why Python?
Python was used in the implementation of the modelled system because Python is an open-source, object-oriented language. Its efficacy cuts across application programming interfaces (APIs), platform independence, simulation, low-level programming, object linking and embedding, network configuration, and more. Once a Python program is developed, it does not need to be recompiled each time it is about to run. The overwhelming reason for choosing Python for the implementation of this work is the way in which Python interacts with network devices; besides, short code implements large tasks.
4.3 Hardware and Software Requirements
Hardware: monitor; system unit (processor speed of 1 GHz and above, RAM of 2 GB and above); keyboard and mouse; uninterruptible power supply (UPS); modem; network interface card; hub; routers; bridges; gateway; connecting cables; satellite dish.

Software: Python interpreter; operating system (Unix, Linux, Windows, etc.); component driver programs; text editor; integrated development environment; the proposed system's source code; web browser.

5. CONCLUSION
Packet/data loss has caused some level of apprehension among network and computer users today, most of whom are novices to the practical events involved in packet/data transmission. The study exposes computer users to the details of packet transmission, showing behaving and misbehaving transmitted packets. Packet transmission involves waiting for process completion, and this is crucial to the experience of packet/data loss in a congested network. The paper thus shows that network congestion is closely related to system speed, transmission rate, bandwidth, switching device type (router or hub), transmission medium, and the number of arriving packets relative to the numbers of processed and departing packets. This implies that if packet arrival is greater than packet departure, network congestion is bound to occur, and hence packet drop and packet loss; if packet arrival is less than packet departure, there will be no network congestion and hence no packet drop and no data/packet loss. The traditional packet drop as a solution to network congestion is discouraged by this paper; rather, a modified FIFO queue system was adapted and implemented, which is more trustworthy than the former. If packets are constantly kept below queue capacity, network congestion will be avoided and there will be no packet/data loss.

The new system improves throughput and quality of service (QoS), controlling data/packet loss during transmission in a congested network by over 80%. The new model effectively controls congestion in a network when it arises.

ACKNOWLEDGEMENT
We thank the Management of the Polytechnic for allowing us to use its facilities during the course and demonstration of this research.

REFERENCES
[1] Stallings, W. (2005), Wireless Communications and Networks, 2nd ed., Pearson Education Inc., Upper Saddle River, New Jersey, USA.
[2] Evans, J. and Washburn, K. (1993), TCP/IP: Running a Successful Network, Addison-Wesley Publishing Company, Wokingham, England.
[3] Allman, M., Paxson, V. and Stevens, W. (1999), TCP Congestion Control, RFC 2581. Retrieved 15/09/2014 from https://tools.ietf.org/html/rfc2581.
[4] Ravindra, R. S. and Varalaxmi, G., Andhra Pradesh (2013), International Journal of Research in Engineering and Technology, eISSN: 2319-1163, pISSN: 2321-7308, Vol. 02, Issue 10.
[5] Behrouz, A. F. and Sophia, C. F. (2007), Data Communications and Networking, 4th ed., ISBN 13-978-0-07-296775-3, McGraw-Hill Forouzan Networking Series.
[6] Aggarwal, A., Savage, S. and Anderson, T. (2000), Understanding the Performance of TCP Pacing, IEEE INFOCOM 2000. Retrieved 15/09/2014 from http://cseweb.ucsd.edu/~savage/papers/Infocom2000pacing.pdf.
[7] Jacobson, V. (1988), Congestion Avoidance and Control, Computer Communication Review, Vol. 18, No. 4, pp. 314-329.
[8] Wei, D. X. (2004), Congestion Control Algorithms for High Speed Long Distance TCP Connections, Master's Thesis, Caltech.
[9] Anderson, E., Greenspun, P. and Grumet, A. (2006), Internet Application Workbook, MIT Press, ISBN 0262511916.
[10] Behrouz, A. F. (2006), Business Data Communications, 4th ed., McGraw-Hill Forouzan Networking Series.
[11] Jain, R., Chiu, D. and Hawe, W. (1984), A Quantitative Measure of Fairness and Discrimination for Resource Allocation in Shared Computer Systems, Eastern Research Laboratory, Digital Equipment Corp., Tech. Rep.