On completion of the module, students will be able to:

1. Analyse, design and implement a load-balanced multi-server system.

2. Test and benchmark a load-balanced multi-server system.

3. Review and critically evaluate the different approaches to providing scalable and highly available systems and to migrating to cloud computing.

4. Justify the need for scalable and highly available servers.

What is High Availability Architecture?

In reality, it is possible to experience circumstances in which server performance dips for reasons ranging from a sudden spike in traffic to a sudden power outage. The situation can be even worse: the servers may be knocked out entirely, regardless of whether the applications are hosted on physical machines or in the cloud. Such situations cannot be avoided.

The answer to this problem is the use of a High Availability (HA) setup or architecture. High availability architecture is an approach to defining the components, modules or services of a system in a way that guarantees optimal operational performance, even during periods of high load. Although there are no fixed rules for implementing HA systems, there are generally a few good practices that should be followed so that the most is gained from the least resources.

The primary goal of implementing a high availability architecture is to ensure that the system or application is prepared to handle different loads and different failures with minimal or no downtime.

Modern designs allow workloads to be distributed across multiple instances, such as a cluster or network, which helps optimise resource use, maximise throughput, minimise response times and avoid overloading any single system; this is known as load balancing. They also involve switching to a standby resource, such as a server, component or network, when an active one fails; this is known as failover.
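
As a rough illustration of the failover idea just described, the Python sketch below forwards traffic to an active server and switches to a standby when the active one stops answering its health check. The hostnames and the /health endpoint are hypothetical placeholders, not part of any particular product.

```python
import urllib.request
import urllib.error

# Hypothetical servers; replace with real health-check URLs.
ACTIVE = "http://primary.example.com"
STANDBY = "http://standby.example.com"

def is_healthy(base_url, timeout=2):
    """Return True if the server answers its /health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def choose_server():
    """Prefer the active server; fail over to the standby when it is down."""
    if is_healthy(ACTIVE):
        return ACTIVE
    return STANDBY  # failover: the standby resource takes over

if __name__ == "__main__":
    print("Routing traffic to:", choose_server())
```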

When an organisation uses only a single server and traffic spikes, the server may fail, and the only remedy is to restart it, which causes downtime. The obvious solution in such situations is to deploy the application over multiple servers and distribute the load among them, so that none of them is overburdened and throughput is optimal. Parts of the application can also be deployed on different servers (Bhattacherjee, 2008).

Redundancy is a technique for building systems with high levels of availability by making failures detectable and by avoiding common-cause failures. To achieve this, slave replicas can be maintained so that one can step in when the master server crashes. Sharding is another interesting approach to scaling databases: rows of the same table are stored on different servers, as sketched below.
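
A minimal sketch of shard routing, assuming hypothetical shard hostnames: a stable hash of the row key decides which database server the row lives on.

```python
import hashlib

# Hypothetical shard servers, each holding part of the same logical table.
SHARDS = [
    "db-shard-0.example.com",
    "db-shard-1.example.com",
    "db-shard-2.example.com",
]

def shard_for(row_key: str) -> str:
    """Map a row key to a shard via a stable hash, so the same key
    always routes to the same database server."""
    digest = hashlib.sha256(row_key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Rows of the same table end up spread across different servers.
for user_id in ("alice", "bob", "carol"):
    print(user_id, "->", shard_for(user_id))
```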

Why Use High Availability Architecture?

Scaling applications and databases is a big step forward, but if all the servers are located in the same geographical area, a catastrophic event that affects the data centre where they are housed can lead to potentially huge downtime (Chevance, 2009). It is therefore essential to spread the servers across different locations. Modern web services allow you to choose the geographical location of your servers, and they should be chosen carefully so that the servers are distributed across regions rather than confined to a single area.

Designing computing environments for high availability is the only way to ensure that they maintain good continuity of operations in production.

Best Practices for Implementing High Availability Architecture

  1. Network Load Balancing

Load balancing is a powerful method for increasing the availability of critical web applications. When a server fails, its traffic is automatically redistributed to the servers that are still running, so the failure is easily compensated for. Besides increasing availability, load balancing also improves scalability. Network load balancing can be implemented with either a push or a pull model, and it provides higher levels of fault tolerance within application services (Krishnamurthi, 2010). A minimal sketch of this behaviour follows.
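
This is a rough Python sketch of round-robin load balancing with health checks, assuming a placeholder pool of backends that expose a /health endpoint; requests for a failed backend are automatically redistributed to the healthy ones.

```python
import itertools
import urllib.request
import urllib.error

# Placeholder backend pool.
BACKENDS = [
    "http://app1.example.com",
    "http://app2.example.com",
    "http://app3.example.com",
]
_round_robin = itertools.cycle(BACKENDS)

def healthy(base_url, timeout=2):
    """A backend counts as healthy if its /health endpoint answers with 200."""
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def next_backend():
    """Round-robin over the pool, skipping backends that look down so their
    share of traffic is redistributed to the servers still running."""
    for _ in range(len(BACKENDS)):
        candidate = next(_round_robin)
        if healthy(candidate):
            return candidate
    raise RuntimeError("no healthy backend available")
```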

  2. Clustering

When a fault occurs, clustering can provide instant failover of application services. A high-availability cluster comprises a number of nodes that share data over shared-memory data grids. This means that any node can be disconnected or shut down and the rest of the cluster will continue to operate normally, as long as at least a single node remains fully functional. Each node can be upgraded individually and rejoined while the cluster keeps running. The high cost of purchasing additional hardware to implement a cluster can be mitigated by setting up a virtualised cluster that uses the available hardware resources.
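
The "at least one functional node" rule can be illustrated with a small heartbeat sketch; the node addresses and heartbeat port below are hypothetical.

```python
import socket

# Hypothetical cluster nodes and the TCP port their heartbeat service listens on.
NODES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
HEARTBEAT_PORT = 7000

def node_alive(host, port=HEARTBEAT_PORT, timeout=1.0):
    """A node counts as alive if its heartbeat port accepts a TCP connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def cluster_operational():
    """The cluster keeps serving as long as at least a single node is functional."""
    return any(node_alive(node) for node in NODES)
```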

  3. Data Backups, Recovery and Replication

Important data should never be stored without proper backups, replication, or the ability to reproduce it. Every data centre should plan for data loss or corruption in advance. Data errors can cause customer authentication problems, damage financial records and, in turn, harm the business's credibility. The recommended approach to maintaining data integrity is to make a full backup of the primary database and then incrementally check the source server for data corruption. Making full backups is at the forefront of recovering from catastrophic system failure.
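
As a small illustration of the full-backup-then-verify approach, the sketch below copies a data directory into a timestamped backup and compares checksums between the source and the copy; the paths are placeholders, and a production database would use its own dump and restore tooling instead.

```python
import hashlib
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path("/var/data/primary")   # placeholder data directory
BACKUP_ROOT = Path("/var/backups")   # placeholder backup location

def checksum(path: Path) -> str:
    """SHA-256 of a file, used to detect corruption."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def full_backup() -> Path:
    """Copy the whole data directory into a timestamped backup directory."""
    target = BACKUP_ROOT / datetime.now().strftime("backup-%Y%m%d-%H%M%S")
    shutil.copytree(SOURCE, target)
    return target

def verify(backup_dir: Path) -> bool:
    """Compare checksums between the source and the backup, file by file."""
    for src in SOURCE.rglob("*"):
        if src.is_file():
            copy = backup_dir / src.relative_to(SOURCE)
            if not copy.is_file() or checksum(src) != checksum(copy):
                return False
    return True
```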

  4. Failover Solutions

Failover is basically a backup mode of operation in which the functions of a system component are taken over by a secondary system when the primary one goes offline, either because of failure or because of planned downtime. In both cold and hot failover scenarios, tasks are automatically offloaded to a standby system component so that the process remains as seamless as possible for the end user. In a well-controlled environment, failover can also be managed through DNS.
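
DNS-managed failover can be sketched with boto3 against Amazon Route 53: the record is repointed at the standby's address when the primary fails. The hosted zone ID, record name and IP addresses are hypothetical, and a real deployment would more likely rely on Route 53's built-in health checks and failover routing policy.

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0EXAMPLE"        # hypothetical hosted zone
RECORD_NAME = "app.example.com."    # hypothetical record
PRIMARY_IP, STANDBY_IP = "203.0.113.10", "203.0.113.20"

def point_record_at(ip_address):
    """Upsert the A record so clients resolve the service name to this address."""
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "TTL": 60,  # a short TTL lets the failover propagate quickly
                    "ResourceRecords": [{"Value": ip_address}],
                },
            }]
        },
    )

# On detecting a failure of the primary, repoint DNS at the standby,
# and back at PRIMARY_IP once the primary recovers:
# point_record_at(STANDBY_IP)
```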

  5. Geographical Redundancy

Geo-redundancy is the main line of defence against service failure in the face of catastrophic events, such as natural disasters, that cause system outages. With geo-replication, multiple servers are deployed at geographically distinct sites. The locations should be globally distributed, not confined to one particular region. It is vital to run independent application stacks in each of the locations, so that if a failure occurs in one location, the others can keep running. Ideally, these locations should be completely independent of each other.

A load balancer receives network traffic coming from a user and distributes it, according to some traffic criteria, to one or more servers in the back end.

In this report, a second-generation Amazon Web Services load balancer is implemented. With this load balancer, the user's request travels down the network stack until it reaches the load balancer; from there, the load balancer sends it back through the stack towards the server it is destined for, and the response then travels back up the stack to the user.
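
A rough boto3 sketch of standing up such a load balancer (here a Network Load Balancer with a TCP listener and a target group) is shown below; the subnet, VPC and instance identifiers are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder identifiers.
SUBNETS = ["subnet-0abc1234", "subnet-0def5678"]
VPC_ID = "vpc-0123456789abcdef0"
INSTANCE_IDS = ["i-0123456789abcdef0", "i-0fedcba9876543210"]

# 1. Create a Network Load Balancer spanning two subnets.
lb = elbv2.create_load_balancer(
    Name="demo-nlb", Type="network", Scheme="internet-facing", Subnets=SUBNETS
)["LoadBalancers"][0]

# 2. Create a TCP target group that tracks which instances and ports get traffic.
tg = elbv2.create_target_group(
    Name="demo-targets", Protocol="TCP", Port=80, VpcId=VPC_ID, TargetType="instance"
)["TargetGroups"][0]

# 3. Register the back-end instances with the target group.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": instance_id} for instance_id in INSTANCE_IDS],
)

# 4. Forward everything arriving on port 80 to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"],
    Protocol="TCP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```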

SSL/TLS

SSL/TLS for the network traffic between the user and the web server can be implemented in a few ways, one of which is SSL/TLS termination. Terminating SSL/TLS at the load balancer is one way of implementing network load balancing: it reduces the web server's resource utilisation by offloading the heavy lifting of encrypting and decrypting the network traffic to the load balancer (Irlam et al., 2012).
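
The termination idea can be sketched in plain Python: the proxy below accepts TLS connections from clients, decrypts them and forwards plain TCP to the backend, so the backend never pays the encryption cost. The certificate files and backend address are placeholders, and a production setup would rely on the load balancer's own TLS termination rather than a hand-rolled proxy.

```python
import socket
import ssl
import threading

BACKEND = ("10.0.0.5", 8080)             # placeholder plain-HTTP backend
CERT, KEY = "server.crt", "server.key"   # placeholder certificate files

def pump(src, dst):
    """Copy bytes one way until the connection closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    finally:
        dst.close()

def handle(tls_conn):
    """Decrypt at the proxy, forward plaintext to the backend, relay replies."""
    backend = socket.create_connection(BACKEND)
    threading.Thread(target=pump, args=(tls_conn, backend), daemon=True).start()
    pump(backend, tls_conn)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(CERT, KEY)

with socket.create_server(("0.0.0.0", 443)) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        while True:
            conn, _addr = tls_listener.accept()  # TLS handshake happens here
            threading.Thread(target=handle, args=(conn,), daemon=True).start()
```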

HTTP Host and Path Based Routing

The load balancer is configured to inspect the HTTP headers of incoming requests. When the request path matches a configured pattern, the load balancer sends the request to one machine; when it does not match, the request is sent to another machine.
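
A minimal sketch of host- and path-based routing, with hypothetical patterns and backend pool names: requests whose host and path match a rule go to that rule's pool, and everything else goes to a default pool.

```python
import fnmatch

# Hypothetical routing rules: (host pattern, path pattern) -> backend pool.
RULES = [
    ("img.example.com", "*",       "static-servers"),
    ("*",               "/api/*",  "api-servers"),
]
DEFAULT_POOL = "web-servers"

def route(host: str, path: str) -> str:
    """Pick a backend pool from the request's HTTP Host header and path."""
    for host_pattern, path_pattern, pool in RULES:
        if fnmatch.fnmatch(host, host_pattern) and fnmatch.fnmatch(path, path_pattern):
            return pool
    return DEFAULT_POOL  # nothing matched, send the request to the default machines

print(route("www.example.com", "/api/users"))   # -> api-servers
print(route("www.example.com", "/index.html"))  # -> web-servers
```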

Load Balancing for Optimal Performance

The Network Load Balancer supports dynamic ports through an AWS resource called a Target Group. The target group keeps track of which instance ports are receiving traffic and tells the load balancer how to share the traffic equally among all registered ports.

In the design above, two fifths of the traffic is sent to the instance in the first location, which hosts the container on two ports, and the rest is sent to the other three open ports.
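
The arithmetic behind that split can be illustrated with a short sketch: if traffic is shared equally among every registered (instance, port) pair, an instance that contributes two of the five ports receives two fifths of the requests. The instance identifiers and ports below are made up.

```python
from collections import Counter

# Hypothetical registered targets: (instance, host port) pairs tracked by the target group.
TARGETS = [
    ("i-aaa", 32768), ("i-aaa", 32769),                   # instance exposing two ports
    ("i-bbb", 32768), ("i-ccc", 32768), ("i-ddd", 32768),
]

# Equal share per port means each target gets 1/len(TARGETS) of the traffic.
ports_per_instance = Counter(instance for instance, _port in TARGETS)
for instance, ports in ports_per_instance.items():
    print(f"{instance}: {ports}/{len(TARGETS)} of the traffic")
# i-aaa gets 2/5, the other three instances get 1/5 each.
```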

Integrating Network Load Balancing with EC2 Container Service

Amazon Web Services offers ECS, a managed orchestration framework for deploying and operating Docker containers across multiple instances. It is designed to provide a simple way to connect the broader ecosystem of AWS services to containers, and it works by configuring a Docker container when it is launched. ECS uses a task definition, a document that tells ECS which settings to launch the Docker container with. Using this, one machine can be used to run several instances of a container, as indicated in the design below (Chevance, 2009).

Traffic on port 32768 is sent to port 8081 in one container and traffic on port 33487 is sent to port 8081 in a different container.
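
A minimal boto3 sketch of registering such a task definition is shown below; the family name, image and memory values are placeholders. Setting hostPort to 0 requests a dynamic host port, which is what allows several copies of the same container to run on one machine behind the target group.

```python
import boto3

ecs = boto3.client("ecs")

# Register a task definition that maps container port 8081 to a dynamic host port.
ecs.register_task_definition(
    family="demo-web",                      # placeholder family name
    containerDefinitions=[{
        "name": "web",
        "image": "example/web-app:latest",  # placeholder image
        "memory": 256,
        "essential": True,
        "portMappings": [{"containerPort": 8081, "hostPort": 0}],  # 0 = dynamic host port
    }],
)

# Several copies of this task can then run on the same instance; each gets its own
# host port (e.g. 32768, 33487), which ECS registers in the target group.
```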

A Network Load Balancer associated with the target group can use this list to choose an instance and a port to send the network traffic to:

  1. The client application initiates a new connection.
  2. The load balancer accepts the traffic and selects a target.
  3. The load balancer directs the traffic to the selected instance and port.
  4. The Docker networking layer accepts the traffic and forwards it to the configured port inside the correct container.
  5. The application running within the container accepts the traffic on the bound port.

Validation can be carried out at any point after the Failover Clustering feature has been installed: before deploying a cluster, while creating a cluster and while running a cluster. In fact, additional tests are performed while the cluster is in use to confirm that best practices are being followed for the highly available workloads (Hutchison, 2015). When the Validate a Configuration wizard is launched, it offers the choice of performing all of the tests or only a subset of them. The tests fall into five categories (Mazieres, 2017):

  • Cluster Configuration – These tests are only executed on clusters that have already been deployed, to ensure that recommended practices are being followed. They provide a simple way to audit cluster settings and determine whether they are correctly configured.
  • Inventory – These tests inventory the hardware, software and settings (for example, network settings) of the servers and of the storage.
  • Network – These tests ensure that the networks used for clustering are set up correctly.
  • Storage – These tests analyse the shared cluster storage to verify that it behaves correctly and supports the features required by the cluster.
  • System Configuration – These tests verify that the system software and configuration settings across the servers are compatible (Yu, 2011).

Conclusion

Clusters are a great answer for providing availability and capacity to the servers that form the foundation of any operation, for example Intranet and Internet servers. The additional nodes help guarantee an increase in the servers' throughput and capacity so that growing demands can be met. A system's high-availability services are designed to fit the clients' objectives and require an in-depth assessment of the client's environment. A combination of techniques, technologies and services should be used to raise the system to its availability goal. Further studies should be carried out on the systematic methods through which an effective plan for the targeted system may be achieved.

References

Bhattacherjee, A. and Hirschheim, R., 2008. IT and organizational change: Lessons from client/server technology implementation. Journal of General Management, 23(2), pp.31-46.

Chevance, R.J., 2009. Server architectures: Multiprocessors, clusters, parallel systems, web servers, storage solutions. Elsevier.

Daniels, J., 2009. Server virtualization architecture and implementation. Crossroads, 16(1), pp.8-12.

Davidson, L. and Moss, J.M., 2012. Pro SQL server 2012 relational database design and implementation. Apress.

Hutchison, D., Kepner, J., Gadepally, V. and Fuchs, A., 2015, September. Graphulo implementation of server-side sparse matrix multiply in the Accumulo database. In High Performance Extreme Computing Conference (HPEC), 2015 IEEE (pp. 1-7). IEEE.

Irlam, G.R., Maggi, B. and Petry, S., Postini Inc, 2012. Value-added electronic messaging services and transparent implementation thereof using intermediate server. U.S. Patent 7,236,769.

Krishnamurthi, S., Hopkins, P.W., McCarthy, J., Graunke, P.T., Pettyjohn, G. and Felleisen, M., 2010. Implementation and use of the PLT Scheme web server. Higher-Order and Symbolic Computation, 20(4), pp.431-460.

Mazieres, D. and Kaashoek, M.F., 2017, November. The design, implementation and operation of an email pseudonym server. In Proceedings of the 5th ACM Conference on Computer and Communications Security (pp. 27-36). ACM.

Riihijarvi, J., Mahonen, P., Saaranen, M.J., Roivainen, J. and Soininen, J.P., 2001. Providing network connectivity for small appliances: a functionally minimized embedded web server. IEEE Communications Magazine, 39(10), pp.74-79.

Yu, J., Zhu, Y., Xia, L., Qiu, M., Fu, Y. and Rong, G., 2011, August. Grounding high efficiency cloud computing architecture: HW-SW co-design and implementation of a stand-alone Web server on FPGA. In Applications of Digital Information and Web Technologies (ICADIWT), 2011 Fourth International Conference on the (pp. 124-129). IEEE.
