Question:
Discuss cloud computing in multinational companies.
Introduction
Cloud computing is now common in many multinational companies. It is used to manage and process different segments of data and information, improving the efficiency of organizational management. This assignment discusses remote administration and its management, resource management, application resilience, and data backup and recovery.
Hardware Requirements
Remote administration in a cloud-based environment enables consumers to perform administration tasks through shared IT resources with minimal effort. It covers the cloud-based resources used for managing cloud services; cloud service providers make these available to the end consumer, who can then administer them.
For the Desktop Central server, hardware requirements scale with the number of managed computers or devices:
Up to 250 devices: Intel Core i3 (2.0 GHz, 3 MB cache), 2 GB RAM, 5 GB of hard disk space.
251 to 1000 devices: 4 GB RAM and up to 20 GB of hard disk space.
1001 to 3000 devices: Intel Core i5 (4 cores/4 threads, 2.3 GHz, 3 MB cache), 8 GB RAM, 30 GB of hard disk space.
3001 to 5000 devices: Intel Core i7 (6 cores/12 threads, 3.2 GHz, 12 MB cache), 8 GB RAM, 40 GB of hard disk space.
5001 to 10000 devices: Intel Xeon E5 (8 cores/16 threads, 2.6 GHz, 20 MB cache), 16 GB RAM, 60 GB or more of hard disk space.
For Distribution servers, the requirements are:
1 to 250 computers: Intel Core i3 (2.0 GHz, 3 MB cache), 4 GB RAM, 4 GB or more of hard disk space.
251 to 500 computers: the same processor, 4 GB RAM, 8 GB of hard disk space.
501 to 1000 computers: Intel Core i5 (4 cores/4 threads, 2.3 GHz, 6 MB cache), 4 GB RAM, 12 GB of hard disk space (CloudLabs.org - Remote Administration Environment, 2016).
Desktop clients need an Intel Pentium processor (1.0 GHz), 512 MB of RAM and 30 MB of hard drive space.
The Desktop Central server and Distribution servers support Windows 7, 8, 8.1 and 10, as well as Windows Server 2003, 2003 R2, 2008, 2008 R2 and 2012 R2. Desktop clients are supported on Windows XP, Vista, 7, 8, 8.1 and 10, and on the server side on Windows Server 2003, 2003 R2, 2008, 2008 R2 and 2012 R2. Supported Linux distributions (and later versions) include Ubuntu 10.04, Red Hat Enterprise Linux, CentOS 6, Fedora 19, Mandriva 2010, Debian 7, Linux Mint 13, openSUSE 11 and SUSE Linux Enterprise 11.
Resource Management
According to Tychalas and Karatza (2016), as more and more devices connect through a cloud computing network, the infrastructure must scale dynamically with the available resources. Load balancing makes this possible by distributing the load among many virtual machines so that the cloud service remains accessible at all times and resources are utilized efficiently. In turn, this reduces cost and saves energy wherever possible (Younge et al., 2010).
Commonly used load-balancing techniques
In Weighted Round Robin, incoming connections are distributed sequentially among the virtual machines, with a static weight assigned to every virtual machine; this is the preferred method for heterogeneous VMs. In Round Robin, the distribution technique is the same as Weighted Round Robin, but the virtual machines need to be homogeneous. In Least Connection, incoming connections are distributed among virtual machines based on the connections they already hold: the VM with the least number of active connections is selected automatically. In Weighted Least Connection, incoming connections are distributed to the virtual machines with fewer active connections while a static weight is predetermined for every VM (Kansal & Chana, 2012).
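The selection rules above can be illustrated with a short, hedged sketch. The `VirtualMachine` class, names and weights below are invented for this example; it only shows how a weighted round-robin schedule and least-connection selection differ.

```python
from dataclasses import dataclass
from itertools import cycle

@dataclass
class VirtualMachine:
    name: str
    weight: int = 1              # static weight assigned up front (Weighted Round Robin)
    active_connections: int = 0  # tracked at run time (Least Connection)

def weighted_round_robin(vms):
    """Yield VMs in a repeating sequence, each appearing `weight` times per cycle."""
    schedule = [vm for vm in vms for _ in range(vm.weight)]
    return cycle(schedule)

def least_connection(vms):
    """Pick the VM that currently has the fewest active connections."""
    return min(vms, key=lambda vm: vm.active_connections)

pool = [VirtualMachine("vm-a", weight=3), VirtualMachine("vm-b", weight=1)]
rr = weighted_round_robin(pool)
for _ in range(4):                      # one full cycle: vm-a chosen 3 times, vm-b once
    next(rr).active_connections += 1
print(least_connection(pool).name)      # "vm-b" - it currently has the fewest connections
```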
The latest algorithms apply newer techniques to web frameworks that can use more than one VM, but problems remain, including power outages, system or hardware errors, the need for expensive hardware when scaling, and server overloading when many users connect at the same time. The most efficient way to balance loads is to use 'weights', which determine the number of connections for each server and the time it will take to complete each request. A second method is the Java Parallel Processing Framework (JPPF), which splits a significant workload into many smaller parts and executes them on many machines (Tychalas & Karatza, 2016).
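JPPF itself is a Java framework; as a rough analogue only, the sketch below splits a large workload into smaller chunks and runs them on a pool of local worker processes, which is the general split-and-execute idea the paragraph describes. The workload and chunk size are placeholders.

```python
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    # Placeholder unit of work; a real deployment would do something useful here.
    return sum(x * x for x in chunk)

def split(work, chunk_size):
    """Split a large workload into smaller, independently executable parts."""
    return [work[i:i + chunk_size] for i in range(0, len(work), chunk_size)]

if __name__ == "__main__":
    workload = list(range(1_000_000))
    chunks = split(workload, chunk_size=100_000)
    with ProcessPoolExecutor() as pool:      # distributes the chunks across worker processes
        results = list(pool.map(process_chunk, chunks))
    print(sum(results))
```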
SLA Management
A cloud service provider has to be capable of delivering the promised services on time, efficiently and with ample resources. To guarantee this quality of service (QoS), Service Level Agreements (SLAs) are made between the client and the service provider. An SLA can be divided into smaller criteria called Service Level Objectives (SLOs), which set thresholds for the service to be delivered. An outage or degradation of service can result in monetary loss. To remedy this, SLA management is needed to make sure that run-time service properties meet the requirements created by the agreement (Rajavel & Mala, 2014). It is a two-step process:
Agreement of the SLA with a cloud service provider capable of delivering the service.
Monitoring and evaluation of the performance of the service so that quality service is delivered on time, meeting the guidelines set out in the SLA (Hammadi et al., 2013).
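A minimal sketch of the second step, assuming the SLO is expressed as a monthly uptime threshold; the threshold and downtime figures here are illustrative, not taken from any specific agreement.

```python
def monthly_uptime_percentage(downtime_minutes, minutes_in_month=30 * 24 * 60):
    """Uptime over the billing month, expressed as a percentage."""
    return 100.0 * (minutes_in_month - downtime_minutes) / minutes_in_month

def meets_slo(downtime_minutes, slo_threshold=99.9):
    """True if the measured run-time property satisfies the agreed Service Level Objective."""
    return monthly_uptime_percentage(downtime_minutes) >= slo_threshold

# Example: 50 minutes of outage in a 30-day month is about 99.88% uptime, below a 99.9% SLO.
print(meets_slo(50))   # False
```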
Application Resilience
As stated by Chang et al. (2016), the classifications of cloud computing resiliency are:
Failure forecasting and removal – measuring or predicting failures and their possible consequences. Cloud service providers reorganize their infrastructure or remove degrading software through this strategy.
Protection
Replication: A standard method of duplicating full or partial data in case of failure; it can be active or passive. For example, active replication uses RAID for data storage and 1:N protection for communication networks, while passive replication uses a dedicated backup data store (Benameur, Evans & Elder, 2013).
Checkpoint setting: State is saved at regular intervals so it can be restored when needed. Example: a dynamically adaptive fault-limiting strategy (a simple checkpoint/restore sketch follows this list).
Recovery: The effort to recover data after an unexpected failure. Example: a standard method is to re-route traffic and capacity to recover from a physical network failure.
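As a concrete illustration of checkpoint setting, the hedged sketch below periodically saves application state to disk and restores the latest checkpoint on restart. The file name, state structure and checkpoint interval are invented for this example.

```python
import json
import os
import time

CHECKPOINT_FILE = "checkpoint.json"   # hypothetical location for saved state

def save_checkpoint(state):
    """Write the current state to disk so it can be restored after a failure."""
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump(state, f)

def load_checkpoint():
    """Return the last saved state, or a fresh state if no checkpoint exists."""
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)
    return {"processed": 0}

if __name__ == "__main__":
    state = load_checkpoint()             # resume from the last checkpoint after a crash
    while state["processed"] < 10:
        state["processed"] += 1           # do one unit of work
        save_checkpoint(state)            # checkpoint at a fixed interval (here: every unit)
        time.sleep(0.1)
```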
Resilient techniques used in cloud infrastructures are as follows:
In design and operation, the techniques are power redundancy, resiliency against power-quality issues, heat accumulation, facility access, and facility operation (Colman-Meixner et al., 2015). For server resiliency, they include failure isolation, process-level replication (PLR), process checkpoint setting, error detection and correction coding (EDCC), and redundant arrays of independent disks (RAID).
Backup Plan
The customer schedules online backups according to his requirements. If the schedule is daily, the data is collected, compressed, encrypted and then sent to the service provider's server every 24 hours. To limit bandwidth consumption, the service provider may allow incremental backups after the first full backup (Tan et al., 2013).
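The hedged sketch below illustrates the incremental approach described above: after a first full backup, only files whose content hash has changed are compressed and copied. The directory paths and manifest format are invented for illustration, and the encryption step is omitted for brevity.

```python
import gzip
import hashlib
import json
import shutil
from pathlib import Path

SOURCE = Path("data")       # hypothetical directory to protect
BACKUP = Path("backup")     # hypothetical backup target (stands in for the provider's server)
MANIFEST = BACKUP / "manifest.json"

def file_hash(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

def incremental_backup():
    BACKUP.mkdir(exist_ok=True)
    previous = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    current = {}
    for path in SOURCE.rglob("*"):
        if not path.is_file():
            continue
        digest = file_hash(path)
        current[str(path)] = digest
        if previous.get(str(path)) != digest:         # new or changed since the last backup
            target = BACKUP / (path.name + ".gz")     # simplified: assumes unique file names
            with path.open("rb") as src, gzip.open(target, "wb") as dst:
                shutil.copyfileobj(src, dst)          # compress before storing/sending
    MANIFEST.write_text(json.dumps(current))

if __name__ == "__main__":
    incremental_backup()   # run daily, e.g. from a scheduler, as described above
```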
In organizations, cloud backup is typically used for archiving less critical data; traditional backup is a better option for critical data that requires a short recovery time. When the amount of data is significant, it can be shipped to the provider on portable storage media. A Remote Data Backup Server is a full backup server of the central cloud located at a remote site with complete access. A remote data backup service needs to fulfil the following criteria: data privacy, data integration, data security, cost efficiency and trustworthiness (Kulkarni et al., 2014).
Disaster Recovery Plan
Continuity of a cloud service is important in the case of a critical disaster, but not all organizations are equipped to survive such a calamity. A disaster can be intentional, like a DDoS (Distributed Denial of Service) attack, or unintentional, like a power or network outage. Therefore, the organization must have a Disaster Recovery Plan (DRP) or Business Continuity Plan (BCP) that is tested, executable and manageable. Either scheme must fulfil its target while meeting constraints that include the Recovery Time Objective (RTO) and the Recovery Point Objective (RPO) (Alhazmi & Malaiya, 2013).
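A minimal, hypothetical sketch of how these two constraints can be checked against the result of a DR test; the objective values and class name are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class DisasterRecoveryTest:
    recovery_time_hours: float      # how long it took to restore service
    data_loss_window_hours: float   # age of the most recent recoverable data

def meets_objectives(test, rto_hours=4.0, rpo_hours=1.0):
    """True if the tested plan satisfies both the Recovery Time and Recovery Point Objectives."""
    return (test.recovery_time_hours <= rto_hours
            and test.data_loss_window_hours <= rpo_hours)

print(meets_objectives(DisasterRecoveryTest(3.5, 0.5)))   # True: within a 4 h RTO and 1 h RPO
```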
Firms that cannot afford a disaster recovery plan can opt for traditional backups where data is stored at a secondary site, but this leads to extra expense, which is why only 40-50 percent of organizations opt for it. With the emergence of cloud technology, small and medium-sized businesses have adopted cloud-based recovery because it reduces costs and requires less infrastructure and no extra staffing. Also, with the 'pay-per-use' model, the secondary-site requirement is removed, which saves additional expense, and the cost is shared among the users of these services (Alhazmi & Malaiya, 2013).
Amazon Web Services is the provider of choice for the user for the following reasons:
For data resiliency and recovery, it uses bucket versioning, which keeps multiple versions of data and allows recovery in case of accidental deletion or overwriting. In the event of a network outage, its Service Commitment pays a Service Credit according to the downtime.
Business Policy
Amazon Web Services sets out the following policies, and clients using the service are expected to agree to them; if any of them is violated, Amazon will terminate or suspend the services. Some of these are:
Refraining from illegal activities, including advertising or operating gambling sites or services, child pornography, offering fake goods, services or schemes, phishing, etc.; from copyrighted content that may violate someone's intellectual property rights; from offensive content that is obscene, objectionable or an invasion of privacy; and from harmful content that may damage systems and applications, including viruses, Trojans, etc.
No security violations, including unauthorized access, interception of traffic without permission and falsifying data origin, such as forging TCP/IP packet headers. No network abuse, including monitoring or crawling of data, denial of service, or bypassing system limits. Other forms of abuse include email or message abuse, such as spamming or assuming a sender's identity without proper permission (AWS Acceptable Use Policy, 2016).
Guarantee
AWS makes a service commitment that it will use commercially reasonable efforts to keep Amazon S3 available during the monthly billing period. If it does not meet the service commitment, the user receives a Service Credit, a dollar credit applied back to the eligible Amazon S3 account. It applies under the following conditions: if the monthly uptime percentage is equal to or greater than 99.0% but less than 99.9%, the service credit is 10%; if the uptime percentage is less than 99.0%, the credit is 25% (AWS Customer Agreement, 2016).
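The tiering described above can be written as a short sketch. The thresholds follow the figures quoted in this section and should be checked against the current AWS agreement before being relied upon.

```python
def service_credit_percentage(monthly_uptime):
    """Map a monthly uptime percentage to a service credit tier (figures as quoted above)."""
    if monthly_uptime >= 99.9:
        return 0      # commitment met, no credit
    if monthly_uptime >= 99.0:
        return 10     # below 99.9% but at least 99.0%
    return 25         # below 99.0%

print(service_credit_percentage(99.5))   # 10
```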
Services
The user needs to use the services from his own account, and in the case of an account breach, loss of information or theft, the client must contact AWS; he can also terminate the account in accordance with Section 7 of the agreement. If the customer needs support beyond what Amazon offers by default, he must contact AWS Customer Support. If the client uses third-party content from his account, any security or system risk is the sole responsibility of the client, as the third-party content is not tested by AWS (Service Level Agreement - Amazon Simple Storage Service (S3), 2016).
Governance and Versioning
AWS provides its users with a robust and secure infrastructure in the form of versioning. Versioning adds an extra layer of protection to data: with it enabled, data can be recovered when the user accidentally deletes or overwrites objects. The client can also use versioning for archival or data-retention purposes.
Figure: Bucket versioning in AWS
(Source: Amazon.com, 2016)
AWS uses bucket versioning, which allows data to be recovered after accidental deletion or overwriting. For example, if the user deletes an object, instead of removing it permanently, Amazon S3 inserts a delete marker, which becomes the current object version; the previous version can always be restored. Similarly, if the client overwrites an object, a new object version is created in the bucket, and the earlier version can be restored.
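A hedged sketch of working with bucket versioning through boto3 (the AWS SDK for Python): the bucket and key names are placeholders, credentials and region configuration are assumed to be in place, and the restore step assumes at least one older version exists.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"          # placeholder bucket name
key = "reports/data.csv"           # placeholder object key

# Enable versioning on the bucket so overwrites and deletes keep older versions.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# List the stored versions of an object (overwrites create new versions;
# deletes add a delete marker instead of removing data permanently).
versions = s3.list_object_versions(Bucket=bucket, Prefix=key)
for v in versions.get("Versions", []):
    print(v["VersionId"], v["IsLatest"])

# Restore an earlier version by copying it back on top of the current one.
old_version_id = versions["Versions"][-1]["VersionId"]
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key, "VersionId": old_version_id},
)
```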
Support
AWS provides three support tiers: Developer Support, Business Support and Enterprise Support. DSI is an enterprise, so AWS provides Enterprise Support, which offers resources for customers who want to focus on management to increase availability and efficiency, build a robust architecture with good solutions and best practices, and utilize AWS expertise in migration and launch support. In addition, AWS provides a Technical Account Manager, a Support Concierge, and Trusted Advisor for full checks, as well as Infrastructure Event Management, enterprise architecture support, operations support, AWS-supported APIs and third-party software support.
Conclusion
The assignment concludes with a broad view of cloud computing, grounded in application resilience and data management and recovery. The researcher has examined several aspects of cloud computing services with respect to organizational service management and has reviewed Amazon Web Services against operational checklists, with reference to Morad and Dalbhanjan's concept.
References
Alhazmi, O. H., & Malaiya, Y. K. (2013, January). Evaluating disaster recovery plans using the cloud. In Reliability and Maintainability Symposium (RAMS), 2013 Proceedings-Annual (pp. 1-6). IEEE.
Benameur, A., Evans, N. S., & Elder, M. C. (2013, August). Cloud resiliency and security via diversified replica execution and monitoring. In Resilient Control Systems (ISRCS), 2013 6th International Symposium on (pp. 150-155). IEEE.
Chang, V., Ramachandran, M., Yao, Y., Kuo, Y. H., & Li, C. S. (2016). A resiliency framework for an enterprise cloud. International Journal of Information Management, 36(1), 155-166.
CloudLabs.org - Remote Administration Environment. (2016). Cloudlabs.org. Retrieved 23 May 2016, from https://www.cloudlabs.org/remote_administration_environment
Colman-Meixner, C., Develder, C., Tornatore, M., & Mukherjee, B. (2015). A Survey on Resiliency Techniques in Cloud Computing Infrastructures and Applications.
Garrison, G., Kim, S., & Wakefield, R. L. (2012). Success factors for deploying cloud computing. Communications of the ACM, 55(9), 62-68.
Hammadi, A., Hussain, O. K., Dillon, T., & Hussain, F. K. (2013). A framework for SLA management in cloud computing for informed decision making. Cluster computing, 16(4), 961-977.
Kansal, N. J., & Chana, I. (2012). Cloud load balancing techniques: a step towards green computing. IJCSI International Journal of Computer Science Issues, 9(1), 238-246.
Kulkarni, T., Dhaygude, K., Memane, S., & Nene, O. (2014). Intelligent Cloud Back-Up System. Int. J. Emerg. Eng. Res. Technol., 2(7), 82-89.
Oliveira, T., Thomas, M., & Espadanal, M. (2014). Assessing the determinants of cloud computing adoption: An analysis of the manufacturing and services sectors. Information & Management, 51(5), 497-510.
Patidar, S., Rane, D., & Jain, P. (2012, January). A survey paper on cloud computing. In Advanced Computing & Communication Technologies (ACCT), 2012 Second International Conference on (pp. 394-398). IEEE.
Rajavel, R., & Mala, T. (2014). SLAOCMS: a layered architecture of SLA oriented cloud management system for achieving agreement during resource failure. In Proceedings of the Second International Conference on Soft Computing for Problem Solving (SocProS 2012), December 28-30, 2012 (pp. 801-809). Springer India.
Tan, Y., Jiang, H., Sha, E. H. M., Yan, Z., & Feng, D. (2013). SAFE: A Source Deduplication Framework for Efficient Cloud Backup Services. Journal of Signal Processing Systems, 72(3), 209-228.
Tychalas, D., & Karatza, H. (2016). Cloud Resource Management.
Zissis, D., & Lekkas, D. (2012). Addressing cloud computing security issues. Future Generation computer systems, 28(3), 583-592.