• An online prescription request system.
• Possibly new hardware and software (note: the company wants to keep Microsoft Windows as its client operating system).
• A telephone management system (patients sometimes complain that they are unable to get through because the phones are busy). Also, the laboratory is closed during weekends, yet several requests still have to be processed. The company wants to include an internal telephone system for internal calls.
• Many new patients come to the laboratory because it was suggested to them by someone they know. It would be nice if the laboratory could exploit social media to increase its reputation. Patients could also rate the lab, making it appear higher in search results.
Nature of the Business
Over the past decades, much has been achieved in the way big organizations operate, driven largely by a high level of technology. Technology is therefore acting as a driving engine in the success of modern business. It is, however, essential to understand what the term technology means and which of its components drive current business success. Technology refers to the application of scientific knowledge for particular purposes through the use of scientifically developed machines and devices (Prijatelj 2017). The effectiveness of electronic technological devices depends on the skill of the IT staff who implement them and on the knowledge needed to secure the devices, ensuring a free flow of data without attacks. One of the areas where the impact of technology has been felt is in hospitals and microbiology laboratories, where patients receive check-ups such as blood tests.
LutonBio is a small private microbiology laboratory that has recently become well known for offering check-ups to patients. The laboratory is equipped with various technological devices, including computers, to ensure effective communication and other operations within the network. Several sections of the business are computerized, including the secretarial section, where the secretaries are responsible for receiving calls from patients who wish to book appointments and for holding transaction details within the offices. The microbiologists use computers to access patients' files and add blood test results, while the GPs use computers to access patients' files, update them and make any necessary recommendations.
Moreover, some computers are connected to client hospitals for information sharing. These are also responsible for storing patients' transactions and files and, furthermore, for providing a backup solution. This completes the operational cycle of the business and its coordination with client hospitals. However, for various reasons, such as the need to serve customers at any time and to improve communication among the business offices and with the hospitals, the business has decided to create an online prescription request system.
Cost Benefit Analysis
Creating an online system is one of the cheaper undertakings for a business, because it does not involve much new software or hardware. The system enhances communication between the offices and increases the security of the stored data. Its benefits are promising: it ultimately reduces the cost and time that would otherwise be spent transferring data, while also strengthening security (Alles et al. 2018). The cost of the security practices is low, since operations are encrypted and access by unauthorized users is restricted.
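The comparison of costs and benefits can be sketched numerically. The figures below are purely hypothetical placeholders, not estimates for LutonBio; a real analysis would substitute quoted prices and measured savings:

```python
# Minimal cost-benefit sketch for the proposed online system.
# All figures are hypothetical placeholders.
setup_cost = 5000.0            # one-off: software licences, configuration
annual_running_cost = 1200.0   # hosting and maintenance per year
annual_benefit = 4800.0        # staff time saved, fewer missed bookings

years = 3
total_cost = setup_cost + annual_running_cost * years
total_benefit = annual_benefit * years
net_benefit = total_benefit - total_cost   # positive means the system pays off
roi = net_benefit / total_cost             # return on investment over the period

print(f"Net benefit over {years} years: {net_benefit:.2f}")
print(f"ROI: {roi:.2%}")
```

Under these assumed figures the system shows a positive net benefit over three years; the break-even point shifts with the assumptions, which is why real quotes should replace the placeholders before a decision is made.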
With the need for the business to create an online prescription request system come factors to consider, such as the security of the business network and of the host. According to Yi, Qin and Li (2015, August), network security comprises both software and hardware components designed to protect information as it is stored and as it is transferred from one device to another.
Moreover, to ensure the security of online information, a network is designed to create a secure working environment in which mobile applications, software programs and computer users can perform digital activities free from vulnerabilities. Security threats can lead to the loss of patients' data, thereby endangering patients' lives. There is quite a long list of network and host threats that may affect the new online prescription request system, but I will focus on the most common and most harmful ones, which may affect the access to or transfer of data within the network.
Malware

Malware threatens the functioning of the business network by infecting computer devices and stealing the stored data (Friedrichs et al. 2014). Under the malware threat there are various types of software that damage the organization's network, such as rootkits and botnets, spyware, Trojans, worms and viruses.
Rootkits and Botnets
A rootkit, like other malware, is software that gets installed on a networking device without the user's knowledge. Once installed, it couples itself to other installed software. A botnet, on the other hand (the term derives from "bot network"), is simply an automated bot or program that gets into the computer (Yan et al. 2016). The botnet attacks the targeted computer via malicious or virus code.
Rootkits and botnets affect the functionality of networking devices, making it hard for the devices to communicate or to access stored data. The attack threatens the security of day-to-day business operations by stealing the stored data and by slowing, or completely stopping, computer and network performance (Satrya, Cahyani and Andreta 2015, August).
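One practical defence against rootkit-style tampering is file-integrity monitoring: hash critical files once to form a baseline, then periodically compare. The sketch below uses Python's standard hashlib module; the baseline dictionary and file layout are illustrative assumptions, not part of any particular product:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file's contents so later modifications can be detected."""
    h = hashlib.sha256()
    h.update(path.read_bytes())
    return h.hexdigest()

def check_integrity(baseline: dict[str, str], root: Path) -> list[str]:
    """Return the files whose current hash no longer matches the baseline."""
    return [name for name, expected in baseline.items()
            if sha256_of(root / name) != expected]
```

A scheduled task could run `check_integrity` over the system binaries and alert the IT staff whenever the returned list is non-empty.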
Spyware

This is malware designed to send pop-up programs to websites, and it can compromise the data stored on the computer (Whitehouse et al. 2016). The malware is installed when the user clicks on it, knowingly or unknowingly. It is designed to identify the websites through which the most sensitive data is stored and transferred online. In our case study, data will be transferred from the laboratory to the client hospitals online, and this is one of the malicious attacks that may affect network performance between the two networks. It can slow down the network, make data inaccessible, or even completely delete the data stored on the computer.
Trojans, worms and viruses
These are software programs designed to affect any computer network in a working environment. The attacks are in most cases recognized through changes in the behavior of the computer. For instance, when worms, Trojans or viruses attack a computer on a network, they cause unexpected shutdowns while the computer is in use and crashes of the operating system (Aljumah and Ahamad 2016). Moreover, they can degrade the network by making stored data hard to access, or even by completely destroying crucial stored data.
Network and Host Threats
Unauthorized Access

Unauthorized access to the computer system leads to the loss of significant patient data and, in addition, exposes the entire network to attacks that may degrade the functionality of the devices. It can be caused, for instance, by a host acting as a web server that fails to secure the network as required (Groomer and Murthy 2018). This is especially common among IT staff or other employees who have little or no idea of the risk they expose the network to by granting access to anybody.
Man-in-the-Middle Attack

This threat is considered one of the most dreadful to a computer system. The intruder uses technological means to identify the connection between the sender and the receiver. Upon identification, the intruder intercepts the messaging channels one by one and tries to alter the information before it reaches the receiver (Yadav, Venkatesan and Verma 2018, March). The message is modified and then released back into the sender-receiver channel. The modification occurs so smoothly that neither party can tell that the information has been altered at some point. This interferes with the exchange of information between the networks, leads to poor feedback, and may expose the overall network to further attacks.
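A standard countermeasure against such in-transit tampering is a message authentication code: if the laboratory and the client hospital share a secret key, any modification of a message invalidates its tag. Below is a minimal sketch using Python's standard hmac module; the key and the message contents are illustrative assumptions:

```python
import hashlib
import hmac

SECRET_KEY = b"shared-lab-hospital-key"  # hypothetical pre-shared key

def sign(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag to send alongside the message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Recompute the tag; compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign(message), tag)
```

If a man-in-the-middle alters even one byte of the message, `verify` returns False and the receiver knows the result cannot be trusted.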
Ethical and Legal Considerations
Ethical and legal issues in networking revolve around the individual's right to privacy and the interests of the wider entity, such as the company or society (Partridge and Allman 2016). They may involve tracking how computers are used in order to prevent illegal access to data. A hierarchy of bodies is tasked with ensuring that ethics in information security are adhered to. According to Gray and Thorpe (2015), these bodies include federal ones (e.g. FERPA, HIPAA), organizational ones (e.g. a computer-use policy) and international ones (e.g. the International Cybercrime Treaty), among others. The major ethical issues in security threats to the network and to online systems are:
Privacy: The individual's right to control who may access or manage his or her data in the system. It involves protecting sensitive information and personal data.
Accuracy: The correctness, fidelity and authenticity of the information in the system and the overall network.
Property: Identifies who owns the information and who can control access to it in the system.
Accessibility: The rights and responsibilities governing the type of information the business can collect and who may access it. The business should anticipate unforeseen uncertainties and be aware of measures for safeguarding against such risks.
Mitigation of Security Threats

The effective functioning of any network depends on its level of security. Some networking threats have been programmed so that they can enter the system and alter the functioning of the networking devices; on the other hand, technology has also enhanced network security, preventing damage to or loss of the data in a given network. To ensure the network is not prone to vulnerabilities, the IT staff should be aware of the different network threats and how to mitigate the attacks (Hong et al. 2015, February). Different network threats require different mitigation mechanisms. It is important to know that the network is made up of several layers, each performing a different, independent function, arranged so that data flows from source to destination in a precise manner.
When it comes to network attacks, the network layers are the areas prone to vulnerability. The type of attack on the data may differ from one layer to another, as each layer has a different task in the success of data transfer. In this section I discuss the practices that can be performed to mitigate the attacks. Since the mitigation practices differ from one layer to another, I discuss the security solutions layer by layer, following the OSI model.
Physical Layer

This is the first layer of the OSI model. It encompasses the original setup of the network: the infrastructure and cabling that enable communication between different devices (Mukherjee et al. 2014). This layer is particularly exposed to device and infrastructure attacks, especially against the hardware components. Such attacks disrupt the way information is transferred, and the Denial of Service (DoS) attack, along with its distributed variant (DDoS), is one of the common security threats at this layer. To mitigate these attacks, the packets within the network should be monitored frequently to ensure there is no room for counterfeit packets to enter.
Moreover, for any network it is vital that all networking devices are updated frequently. Upgrading the operating system maintains a high level of device security, limiting the network's vulnerability to attacks. Finally, it is advisable to take the server's capacity limits into consideration before running it (Zou et al. 2015).
Data Link Layer
This is the second OSI layer and is responsible for the delivery of data frames. It comprises the switches used for dynamic Internet Protocol configuration throughout the network; the associated protocols include DHCP and STP. The main attacks at this layer include Address Resolution Protocol (ARP) poisoning, MAC flooding and IP spoofing (Chelli 2015, July). These attacks focus on the IP protocols, which in every network are among the components requiring the highest security. Protection can be achieved by hardening the network's critical switches (Kumar 2015) and by disabling unused ports so that they are not exposed to attack. Against spoofing, all incoming and outgoing traffic within the network should be filtered. ACLs are also essential in security mitigation, as they prevent falsified IP addresses from entering the network.
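MAC flooding, mentioned above, shows up as an unusually large number of distinct source MAC addresses appearing on a single switch port. The detection logic can be sketched as follows; the threshold and the observation format are assumptions for illustration, not taken from any particular switch:

```python
from collections import defaultdict

MAX_MACS_PER_PORT = 3  # assumed threshold; access ports rarely see more than a few MACs

def flag_flooded_ports(observations: list[tuple[int, str]]) -> set[int]:
    """observations is a list of (port, source MAC) pairs seen by the switch;
    ports showing more distinct MACs than the threshold are flagged."""
    macs_per_port: dict[int, set[str]] = defaultdict(set)
    for port, mac in observations:
        macs_per_port[port].add(mac)
    return {port for port, macs in macs_per_port.items()
            if len(macs) > MAX_MACS_PER_PORT}
```

In practice the same policy is enforced directly on managed switches as "port security" (limiting learned MACs per port), which is part of the switch hardening referred to above.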
Network Layer

The third layer of the OSI model is the network layer. As the name suggests, its primary function is to provide interconnection among various networks (Yan et al. 2017). The layer is also responsible for transferring packets from one destination to another. For a network connection to be considered successful, it has to support the transfer of packets effectively within the network boundaries. This success depends on various network devices, such as routers; accordingly, the network layer has controls such as ARP/broadcast monitoring and routing policy. The Internet Protocol (IP) is associated with this layer, which is prone to security threats, so several security measures should be considered.
First, firewalls are among the best security measures, as they protect the ports that hackers use to gain information about the operating systems and software running on the network (Sharma and Rawat 2015). Network sniffing is another useful measure for securing the network layer.
Sniffing tools play a crucial role in security by intercepting and monitoring the networking devices, and the software installed on them, for suspicious traffic; private traffic can additionally be encrypted. Anti-spoofing measures also help secure the network by checking for any faking of a packet's true identity; these countermeasures include ingress and egress filtering. It should also be ensured that unused services and interfaces are disabled and unused ports blocked, and that all security patches in the switch software are kept up to date.
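Ingress filtering can be illustrated with Python's standard ipaddress module: a packet arriving from outside the network must not claim an internal source address. The LAN prefix below is an assumed example, not LutonBio's real addressing plan:

```python
import ipaddress

INTERNAL_NET = ipaddress.ip_network("192.168.10.0/24")  # assumed internal prefix

def is_spoofed_ingress(source_ip: str) -> bool:
    """True if a packet arriving on the external interface claims an
    internal source address, i.e. the ingress filter should drop it."""
    return ipaddress.ip_address(source_ip) in INTERNAL_NET
```

Egress filtering is the mirror image: traffic leaving the network should carry only internal source addresses, which stops the laboratory's own hosts from being used to launch spoofed attacks on others.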
Transport Layer

This is the fourth layer, directly above the network layer, and it performs one of the crucial functions of data communication. According to Wang, Gamage and Hauser (2016), the layer determines the amount of data that should be sent across the network at a given time: when communication takes place, it determines how much data can be transferred and received within a specific time window. The layer has two main protocols, the User Datagram Protocol (UDP) and the Transmission Control Protocol (TCP). Although these protocols are vital to the functioning of the network, they are prone to security attacks and can be used by hackers to block off or infiltrate the network. These attacks can, however, be mitigated in various ways. A firewall can be used at this layer to enhance data security, by limiting transmission to specific protocols within the network and by monitoring regularly to ensure the firewall is functioning well (Chang and Ramachandran 2016).
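The firewall policy described above amounts to an allowlist of (protocol, port) pairs, with everything else denied by default. A minimal sketch follows; the permitted ports are assumptions chosen for illustration:

```python
# Assumed policy: only HTTPS and one hypothetical API port are allowed through.
ALLOWED_SERVICES = {("tcp", 443), ("tcp", 8443)}

def permit(protocol: str, port: int) -> bool:
    """Return True if the transport-layer firewall should pass this segment."""
    return (protocol.lower(), port) in ALLOWED_SERVICES
```

Denying everything not explicitly listed is the safer stance for a system carrying patient data, since a forgotten open port cannot silently become an attack surface.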
Session Layer

This layer acts as the manager of information transfer from one point to another. It manages communication between different endpoints in the network by creating a communication channel and terminating the session when communication ends. Its function is supported by the protocols within the layer and, like the other layers, it requires security measures. It is advisable to keep away from Telnet and FTP, since they are vulnerable to attacks.
The best-known secure protocol is Secure Shell (SSH), preferred for its ability to establish an encrypted channel between the remote and local hosts (Ullah, Numan and Ahmad 2016). Kerberos or IPsec can moreover be used when communicating with a trusted host outside the local network. Protocol authentication also secures the network by ensuring that data transferred within it has been authenticated. Protection against hijacking of session data has also been enhanced by the use of random initial sequence numbers and switches.
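The random initial sequence numbers mentioned above are easy to illustrate: TCP sequence numbers are 32-bit values, and drawing the initial one from a cryptographically secure source makes blind hijacking guesses impractical. A sketch using Python's secrets module:

```python
import secrets

def random_isn() -> int:
    """Pick an unpredictable 32-bit initial sequence number for a new session."""
    return secrets.randbelow(2**32)
```

An attacker who cannot observe the channel must now guess among roughly four billion possible values, rather than predicting the next value of a simple counter.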
Presentation Layer

This is the sixth OSI layer, where the operating system works with the data. The layer has multiple functions compared with the other network layers: it is responsible for compressing, encrypting and translating the data. It is positioned directly below the application layer, at which the user interacts with the data before it is passed down to this layer.
Although encryption and decryption are commonly applied at the second, third and fourth layers, they can also be performed at the presentation layer (Sinha et al. 2017, July). This is necessary to ensure that the data moving down the protocol stack is secure. Moreover, security threats to this layer's data can be prevented through a constant sanity check and by separating program control from user input.
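Separating program control from user input means validating data against a strict expected shape before it can influence any query or command. The ID format below is a made-up example, not LutonBio's real scheme:

```python
import re

# Assumed patient ID format for illustration: two letters then six digits.
PATIENT_ID = re.compile(r"[A-Z]{2}\d{6}")

def safe_patient_id(raw: str) -> str:
    """Reject anything that is not exactly a well-formed patient ID,
    so user input can never be interpreted as program control."""
    if not PATIENT_ID.fullmatch(raw):
        raise ValueError("invalid patient ID")
    return raw
```

A sanity check like this runs cheaply on every request, which is what the "constant sanity check" mentioned above amounts to in practice.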
Application Layer

This is the topmost layer of the OSI model, comprising features such as high-level functions and the GUI. Of all the layers it is the most open-ended and the most challenging to secure, especially with the advent of Software as a Service (SaaS). However, various security measures can be performed to reduce the threat of attack. Malware scanning is necessary against ransomware, one of the security threats at this layer. Sandboxing also ensures that any potentially vulnerable application is prevented from accessing sensitive data (Illiano et al. 2017). It is moreover essential to review and test application code to ensure it stays safe from attacks.
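The sandboxing idea can be made concrete with a path check: an application confined to one directory must not be able to reach files outside it, even via `..` tricks. The sandbox root below is an assumed location for illustration:

```python
from pathlib import Path

SANDBOX_ROOT = Path("/srv/lab-app-data")  # assumed directory the app may read

def resolve_in_sandbox(requested: str) -> Path:
    """Resolve a user-requested path and refuse anything that escapes
    the sandbox root (e.g. "../etc/passwd")."""
    target = (SANDBOX_ROOT / requested).resolve()
    if not target.is_relative_to(SANDBOX_ROOT.resolve()):
        raise PermissionError("path escapes sandbox")
    return target
```

Real sandboxes (containers, OS-level mandatory access control) enforce far more than file paths, but the principle of a default-deny boundary around the application is the same.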
Data Recovery Plan

Technology plays a vital role in every part of modern business processes. In case of data loss or interruption on the business premises due to unexpected disturbances, one of the best-known solutions for minimizing such damage and restoring the environment in the shortest time possible is a Data Recovery Plan (DRP). A DRP is a complete set of procedures, vital for reducing downtime after data loss or a disruption to the technology of a business, that focuses on the most effective way to recover (Hsiao et al. 2016).
These procedures have to be followed strictly to ensure the aim of data recovery is achieved; each step plays a different role in reducing downtime. The business will therefore use the DRP to prevent any losses that might occur as a result of downtime. Below is a summary of the data recovery plan to be used for the proposed online system, to minimize downtime in the office functions in case of a problem. The DRP is divided into fifteen steps so that the recovery is easy to understand and easy to perform.
Evaluating business activities and the data or applications needed to support them
Step 1. Identifying critical business processes
This is the first step of the data recovery plan. For the survival of the business, some processes are critical, while there are others, less critical, that the business can survive without. Essential processes are those without which the company cannot operate and would otherwise have to close (Richardson 2015). At this stage the business has to identify and differentiate between the two, then focus on the critical ones.
Step 2. Label dependencies
In a business there are various processes, most of which depend on one another. Some processes rely on others heavily, which means that the breakdown or interruption of one process affects all of the processes that depend on it; for other processes the dependence is minimal. At this stage the business should concentrate on the applications that rely on each other most and diagnose the maximum downtime of each application accordingly (Horney et al. 2016).
Step 3. Define Vital Applications
Upon labelling dependencies, it will be noticed that some applications on the list have the most urgent restoration times. To reduce the impact of downtime, the business has to order the list of applications by the urgency of their restoration time (Sahebjamnia, Torabi and Mansouri 2015).
Step 4. Assess your current Data Recovery Strategy
After the critical applications have been defined, various data recovery strategies have to be considered. The weaknesses and risks associated with the applications that have the most urgent restoration times should be investigated carefully. The strategies to understand and weigh against one another are backups vs. restoration vs. failover vs. high availability (Sampath et al. 2016).
After gathering enough information, it is time to determine the recovery time requirements
Step 5. Performing Business Impact Analysis (BIA)
This step depends mostly on the data that has been gathered. It is one of the most critical steps, as it gives a realistic picture of business performance under various scenarios. According to Berke et al. (2014), conducting a Business Impact Analysis allows the business to measure the effects of downtime on the affected areas and to determine availability requirements. Moreover, it allows the business to estimate the cost of downtime (reduced customer confidence, lost sales and so on) and to identify compliance and legal obligations regarding the security of the data.
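The downtime-cost part of the BIA can be sketched as a simple per-process tally. All figures below are hypothetical placeholders, not measurements from LutonBio:

```python
# Hypothetical hourly downtime costs per critical process.
processes = {
    "online prescription requests": {"lost_sales": 120.0, "idle_staff": 80.0},
    "result transfer to hospitals": {"lost_sales": 0.0, "idle_staff": 200.0},
}

def downtime_cost_per_hour(name: str) -> float:
    """Sum the cost components recorded for one process."""
    return sum(processes[name].values())

# Total exposure if every critical process is down at once.
total_hourly_cost = sum(downtime_cost_per_hour(p) for p in processes)
```

Multiplying these rates by the expected outage duration gives the downtime cost estimate the analysis calls for; harder-to-quantify effects such as reduced customer confidence would be recorded alongside as qualitative notes.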
Step 6. Defining Recovery Point Objective (RPO)
The RPO defines the last point at which a valid backup or replication of the data was taken and can be restored. Data backups help avoid data loss in case of a breakdown, and the data can then be restored according to the RPO by prioritizing the business's data dependencies (Mohamed 2014).
Step 7. Determine Recovery Time Objective (RTO)
After data corruption or hardware failure, a specific amount of time is required to restore each process. Different processes require different restoration times, so the downtime for restoration has to be determined for each of the listed processes (Chang 2015).
Step 8. Maximum Tolerable Downtime (MTD) designation
The maximum tolerable downtime gives the outright maximum length of time that the applications with the most urgent restoration times can be unavailable (Sahebjamnia, Torabi and Mansouri 2015). It is the maximum time for which the highly dependent applications can continue working effectively on their own; downtime beyond this limit leads to a loss for the business.
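Steps 6 to 8 can be tied together in one check: given the last backup time, the outage start and the restoration time, each objective either holds or is violated. The objective values below are assumptions for a single hypothetical application:

```python
from datetime import datetime, timedelta

# Assumed objectives for one critical application.
RPO = timedelta(hours=4)   # maximum acceptable data-loss window
RTO = timedelta(hours=2)   # target time to restore the application
MTD = timedelta(hours=6)   # absolute ceiling on unavailability

def objectives_met(last_backup: datetime, outage_start: datetime,
                   restored_at: datetime) -> dict[str, bool]:
    """Check each recovery objective for one outage."""
    outage = restored_at - outage_start
    return {
        "rpo": outage_start - last_backup <= RPO,
        "rto": outage <= RTO,
        "mtd": outage <= MTD,
    }
```

A failed "rpo" means backups are taken too rarely for this application; a failed "rto" or "mtd" means the recovery procedure itself is too slow for the application's criticality.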
After determining the data recovery requirements, it is time to test the hypothesis and identify any weaknesses or technological gaps. It is necessary to invest more in innovative solutions if the risks are high.
Step 9. Assess risks
A data recovery plan aims to mitigate any risk that may arise as a result of downtime (Rodger et al. 2015). To perform the recovery successfully, it is therefore essential to be aware of the risks involved in each process that may require recovery. Any single point of failure creates risk, and the business should be aware of this. The level of risk of data loss or breakdown differs from one process to another, so the business should create a table or chart listing the risks involved, ranked according to their priority.
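The risk table described above is straightforward to rank once each risk has a likelihood and an impact score. The register below is hypothetical, invented purely to show the mechanics:

```python
# Hypothetical risk register, scored 1-5 for likelihood and impact.
risks = [
    {"name": "file server failure",       "likelihood": 3, "impact": 5},
    {"name": "telephone line outage",     "likelihood": 2, "impact": 3},
    {"name": "malware on GP workstation", "likelihood": 4, "impact": 4},
]

def ranked(register: list[dict]) -> list[dict]:
    """Order risks by priority score (likelihood x impact), highest first."""
    return sorted(register,
                  key=lambda r: r["likelihood"] * r["impact"],
                  reverse=True)
```

The highest-scoring entries point at the single points of failure that deserve mitigation investment first.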
Step 10. Test Theory
Under the test theory step, a technological gap analysis is performed for the business by comparing the current and the desired MTDs, RTOs and RPOs (Lozupone 2017).
Step 11. Redesign accordingly.
At this step the business has to be able to answer whether its data recovery plan is a good archiving system, an adequate recovery vehicle, or handicapped compared with modern solutions (Choo and Chung 2018, July). With a clear answer to these questions, the business can make decisions based on the quality of its data recovery plan: the plan can be implemented if it meets a high standard; otherwise the business can turn to more innovative technology. The investments necessary to close the gaps and address the risk areas should be prioritized.
Step 12. New solution implementation
According to Sahi, Lai and Li (2016), to improve the existing data recovery plan it is essential to create a new implementation plan that incorporates any solutions the business identified in Step 11. This improves the plan with new and innovative technological solutions.
Finally, a strategic response plan can be built and roles and responsibilities delegated to a team.
Step 13. Development of an Emergency Response Procedure
For the business to have an effective recovery plan, it is essential to have a procedure for performing the recovery. At this stage, step-by-step instructions are created that provide the procedures and the criteria for achieving full recovery and restoring normal operations through an immediate response to any data loss or other breakdown (Shaikh and Sasikumar 2015).
Step 14. Align procedures
At this stage the business assigns escalation rules and defines severity levels for the vital procedures, so that they meet the maximum tolerable downtime and the data recovery plan requirements under various disaster scenarios (Khoshkholghi et al. 2014).
Step 15. Forming team
This is the final stage of the data recovery plan. As stated by Sterbenz et al. (2013), a team is chosen, trained and assigned roles so that it can respond accordingly. The team oversees the procedures the business has put in place for the data recovery plan, ensuring they are followed so that the recovery does not fail.
Thanks to a high level of technology, LutonBio will be able to advance the operations of its different offices through the new online system. All of its offices depend heavily on one another: it is not possible for any office to work effectively without the services of the others. The chain allows information to flow all the way from the secretary to the two client hospitals. For instance, the client hospitals depend on the patients' test results from the GPs' office, which holds the results together with all the recommendations.
The GPs in turn depend on the microbiology officers for the patients' test results; the officers are responsible for running the tests and storing the results in the computer system for easy access by the GPs' office. The microbiology officers depend on the secretaries for the details of the patients who have booked tests, and the secretaries in turn depend on the patients who call to book appointments. With such a long chain, it is evident that any breakdown in one office interferes with the functioning of the whole chain.
Conclusion

It is plainly true that technology has enhanced electronic means of communication. However, this means of communication and data transfer has some challenges, especially those associated with insecurity. The report has identified some of the common security threats that affect the network; these attacks lead to the loss of patients' information, which is vital for medication.
Various techniques account for the presence of attacks on computer systems. Viruses and other attacks do not arise naturally; they are programmed by hackers to ease access to sensitive data such as patients' records. If one computer system within an office is affected, the whole network is interfered with and communication ceases to be effective. This in turn leads to poor medication in the client hospitals, as they depend on the LutonBio laboratory's test results and recommendations to attend to their patients.
Given these security issues associated with technology, the business will have to adopt best practices to ensure that malware does not attack the system. The report has analyzed some of the solutions that can be adopted to keep the systems safe from attack. The OSI model divides the network into seven layers, each with a special responsibility; these layers are vulnerable to various security threats if not well secured. The report has analyzed how each layer of the network is attacked, the types of attack, and how each problem is solved, with mitigation techniques matched to the nature of the attack.
Because of the uncertainty of data loss or operational breakdown in the business as a result of system hardware or software failure, various procedures vital to data recovery and the reduction of downtime have been discussed. The data recovery plan has fifteen chronological steps for effective data recovery and for reducing the downtime that could lead to losses for the business.
Having considered all the requirements for the online system, it is therefore essential to implement it. Online data transfer is among the most secure methods available, compared with the manual processes currently used in the business. Most data is currently moved between systems on flash disks, which are highly prone to data loss and malware infection. Implementing the online system will therefore resolve the present problems of data transfer and communication between the offices.
Implementing the online system will contribute to the further success of the business: patient records will be better managed, and patients will be able to make bookings online. This will attract more patients for testing and treatment, as the system will offer services 24/7, and its start-up cost is low compared with the benefits it will deliver. Because the data transferred between computers is sensitive, a high level of security is needed across the whole system; the online system meets this requirement by protecting data both at rest and in transit. It is therefore essential to implement the online system, both for the benefits it brings the company and for the protection it provides against illegal access to data.
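One simple ingredient of protecting data in transit is an integrity tag, so the receiving office can detect whether a record was altered on the way. The sketch below uses a pre-shared key and HMAC-SHA256 from the Python standard library; the key handling and record format are illustrative assumptions, and a real deployment would also encrypt the channel (e.g. with TLS):

```python
import hashlib
import hmac
import secrets

# Illustrative pre-shared key; in practice this would be provisioned
# securely to both offices, not generated per run.
SHARED_KEY = secrets.token_bytes(32)

def tag(record: bytes, key: bytes) -> str:
    """Compute an HMAC-SHA256 integrity tag for a record before sending it."""
    return hmac.new(key, record, hashlib.sha256).hexdigest()

def verify(record: bytes, received_tag: str, key: bytes) -> bool:
    """Constant-time check that the received record was not altered in transit."""
    return hmac.compare_digest(tag(record, key), received_tag)
```

A tampered record fails verification: if `t = tag(rec, SHARED_KEY)`, then `verify(rec, t, SHARED_KEY)` is true while `verify(rec + b"x", t, SHARED_KEY)` is false.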