1. Difference between Analysis and Design, including Activities that Support the Design Phase of the SDLC
Rosenblatt (2013) describes the SDLC as a systematic approach that breaks the work required to implement an information system into distinct phases. Silberschatz, Galvin and Gagne (2014) state that, within these phases, analysis is performed to understand the business requirements, while design defines the solution for the proposed system based on those requirements and the decisions taken. The analysis phase gathers and validates the business needs and prototypes the implementation of the new system; it also assesses and prioritizes the business requirements (Aljawarneh, Alawneh & Jaradat, 2017). System analysis examines the informational requirements of users and refines the system's goals. Its output, the Software Requirement Specification (SRS) document, specifies the hardware, software, functional and network requirements of the proposed system.
Figure 1: Activities of Analysis phase of SDLC
System design, on the other hand, covers the application design of the proposed system, along with the design of the network, the user and system interfaces, and the databases. The project manager prepares contingency and operational plans to support a proper design of the system. According to Duncan et al. (2016), the manager reviews the design and ensures that the final system design meets the business requirements stated in the SRS document. The proposed design is then tested for performance, confirming that it will satisfy those requirements, and a design document is prepared to support the remainder of the SDLC methodology.
Figure 2: Activities of Design phase of SDLC
2. Difference between Modules and Programs
Modules: Wessel et al. (2013) demonstrate that a module is a way to structure a system design by breaking a problem down into independent tasks. The benefit of modularity is that decomposing a problem into autonomous modules minimizes its complexity, and each independent module can be assigned to a different member of the development team. Silberschatz, Galvin and Gagne (2014) note that a module runs easily and can be tested separately. A module corresponds to a function that benefits from the implementation of the proposed system; its functionality may change during both the design and the deployment of the system, and these functions are used to solve the problems that occur in the system.
Programs: Romero and Vernadat (2016) state that programs are coded and compiled into the working condition of the information system. The programs are tested with organized test data, and both validation and verification are checked for the system; program errors are corrected through proper validation of the entire system. A program manager is required to perform system analysis in order to identify the interactions among the system inputs.
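The module/program distinction above can be sketched in Python. This is a hedged illustration with an invented function (is_valid_quantity) rather than anything from the cited sources: the module is a self-contained task one team member can develop and test separately, while the "program" part wires it into a runnable whole exercised against organized test data.

```python
# A hypothetical module: one self-contained task that a single team
# member can develop and test independently of the rest of the system.

def is_valid_quantity(value):
    """Return True if value is a positive integer order quantity."""
    return isinstance(value, int) and value > 0

# A program, by contrast, assembles such modules into a runnable whole
# and is tested against organized test data.
if __name__ == "__main__":
    test_data = [5, 0, -3, "ten"]
    results = [is_valid_quantity(v) for v in test_data]
    print(results)
```

Because the module's function has no dependency on the rest of the system, it can be validated in isolation before the program is assembled.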
3. How Client-Server Architecture Divides the Processes of an Information System
Devine, Shifrin and Shoulberg (2015) define client-server architecture as a distributed application architecture that divides the processes of an information system between service providers, termed servers, and requestors, termed clients. The server host runs programs that share resources among the clients (Somogyi, 2014), while a client can run a database application that accesses information and interacts with the system's users. The benefits of client-server architecture for information system processes are:
- The client applications do not perform the data processing themselves; they focus on taking the requested input from the users (Aguilar Jr, Johns & Nutter, 2014), request the desired data from the server, and then analyze and present those data using the display capabilities of the workstation.
- When the data are distributed across database servers, the client application can carry on functioning without modification.
- The client workstation can be optimized for data presentation, while the server is optimized for processing and storing data (Romero & Vernadat, 2016).
- When the system expands, additional servers can be added to share the data-processing load across the network.
- Shared data are stored on the system server rather than on individual computers (Coronel & Morris, 2016), which makes organizing all concurrent access easier and better structured.
- Low-end client workstations can access remote data held on the system server.
- The client applications submit their database requests to the server as SQL statements. The server executes each SQL statement and returns the outcome to the client application (Wessel et al., 2013). Network traffic is kept to a minimum because only the request and its results are shipped over the network.
Figure 3: Client Server Architecture
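The SQL request/response pattern in the last benefit above can be sketched with Python's sqlite3 module. The table and data are invented for illustration, and sqlite3 runs in-process rather than over a network, but the division of labour is the same: the client ships an SQL statement, and the database engine, standing in for the server, executes it and returns only the result set.

```python
import sqlite3

# The in-memory database stands in for the database server.
server = sqlite3.connect(":memory:")
server.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
server.executemany("INSERT INTO customers VALUES (?, ?)",
                   [(1, "Avery"), (2, "Blake")])

# The client sends its request as an SQL statement, not a file transfer;
# only the matching rows come back across the client/server boundary.
request = "SELECT name FROM customers WHERE id = ?"
rows = server.execute(request, (2,)).fetchall()
print(rows)
```

Contrast this with a file-server model, where the whole data file would have to travel to the client before any filtering could happen.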
4. Evolution of Client-Server Computing from File Servers through Multilayer Applications to Web-Based Applications
Client-server computing evolved alongside the development of local area networks (LANs) and personal computers. Initially, important files were shared from a server and the client applications ran against that server. Multilayer applications then spread the work across a variety of machines to process and store the data (Romero & Vernadat, 2016), and the internet now permits web-based applications to operate from any location. The driving force behind this evolution is the desire to access and distribute data from anywhere, at any time. Over the last ten years, users have become able to access data and applications from devices such as mobile phones, cars, domestic appliances and planes.
A file server manages file operations and is shared by the client computers attached to the LAN. The connection permits a client computer to share resources such as files and programs held on the server, which runs software that coordinates the flow of information among the computers, termed clients. The file server acts as an additional hard drive for the client computers, storing information and sharing it with end users on the server. In this model the client workstation remains responsible for the user interface as well as the data processing, business rules and database storage (Silberschatz, Galvin & Gagne, 2014), which allows a multi-user system to provide better solutions to system problems. A web-based application layer is then added to handle web-based data; it reduces the software required on the client and adds control and scalability while reducing support costs.
5. The Application Deployment Environment and Its Importance when Considering Development Approaches
Somogyi (2014) states that the application deployment environment consists of the computer hardware or platform and the operating system that support the application program. A new application system must meet its design needs within the constraints of that operating system and equipment. Devine, Shifrin and Shoulberg (2015) add that an application deployment environment also includes the programming languages, CASE tools and other software used to develop the application. Deployment itself comprises the activities that make the software system available for use, and the environment matters when choosing a development approach: an application written with shared-memory programming must be deployed on a shared-memory system, whereas one written with a message-passing paradigm is deployed on distributed memory. Each application carries particular requirements it must satisfy in order to deliver the preferred performance and scalability.
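The shared-memory versus message-passing distinction above can be sketched as an in-process analogy in Python (a hedged illustration: real deployments would run processes across machines, and the worker names and data here are invented). Threads share one address space, so the locked counter models the shared-memory paradigm; the queue models message passing, where data is exchanged explicitly rather than through common memory.

```python
import threading
import queue

shared = {"counter": 0}          # common memory region
lock = threading.Lock()
mailbox = queue.Queue()          # explicit message channel

def shared_memory_worker():
    # Shared-memory paradigm: update state in the common address space,
    # coordinating access with a lock.
    with lock:
        shared["counter"] += 1

def message_passing_worker():
    # Message-passing paradigm: communicate by sending a message, as on
    # a distributed-memory system.
    mailbox.put("partial result")

workers = [threading.Thread(target=shared_memory_worker),
           threading.Thread(target=message_passing_worker)]
for w in workers:
    w.start()
for w in workers:
    w.join()

msg = mailbox.get()
print(shared["counter"], msg)
```

The deployment consequence is the point of the section: the first style assumes all workers can reach the same memory, while the second works even when they cannot, which is why the target environment constrains the development approach.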
6. Concepts behind a CRUD Analysis and How the Analysis Is Accomplished
Wazlawick (2014) describes CRUD analysis in terms of the elemental functions of a given database; CRUD stands for Create, Read, Update and Delete, and the analysis is performed against the requirements of the system. As an example, a customer may be able to create an account, retrieve and return to the website, update the billing details, and delete unused information from the database. Hite et al. (2013) argue that checking the data model for accuracy ensures that the CRUD functions are specified in the business requirements, and CRUD analysis is also used to validate the ERD. A CRUD analysis is accomplished through the following steps:
Create Operation: The keywords used for create operations are INPUT, ENTER, LOAD, RECORD, IMPORT and CREATE. Each of these indicates that a record is being created in the database at a particular time (Aguilar Jr, Johns & Nutter, 2014). The software developer reviews all the requirements for these keywords.
Retrieve Operation: The keywords used for retrieve operations are VIEW, PRINT, LOOK UP, FIND, REPORT, BRING UP and READ. These indicate that information and data are being retrieved from the database (Coronel & Morris, 2016). The software developer reviews all the requirements for these keywords.
Update Operation: The keywords used for update operations are CHANGE, UPDATE, MODIFY and ALTER. These indicate that information and data already entered into the database are being updated.
Delete Operation: The keywords used for delete operations are PURGE, DELETE, DISCARD, TRASH and REMOVE. These indicate that information already present in the database is being deleted (Hite et al., 2013).
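The keyword review described in the four steps above can be sketched as a simple lookup that classifies a requirement verb by CRUD operation. The helper function is hypothetical, not from the cited sources; the keyword lists are the ones given in the text.

```python
# Map each CRUD operation to the requirement keywords listed above.
CRUD_KEYWORDS = {
    "create": {"INPUT", "ENTER", "LOAD", "RECORD", "IMPORT", "CREATE"},
    "retrieve": {"VIEW", "PRINT", "LOOK UP", "FIND", "REPORT",
                 "BRING UP", "READ"},
    "update": {"CHANGE", "UPDATE", "MODIFY", "ALTER"},
    "delete": {"PURGE", "DELETE", "DISCARD", "TRASH", "REMOVE"},
}

def classify(keyword):
    """Return the CRUD operation a requirement keyword implies, if any."""
    for operation, keywords in CRUD_KEYWORDS.items():
        if keyword.upper() in keywords:
            return operation
    return None

print(classify("Import"), classify("Find"), classify("Purge"))
```

Running every verb found in the requirements document through such a lookup is one way a developer could carry out the keyword review systematically.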
Wazlawick (2014) concludes that performing the CRUD analysis on the data model helps to check the system's scope and completeness. If a business function has no entity to CRUD against, the data model is incomplete; likewise, if an entity in the ERD is never touched by any CRUD operation, that entity may not be needed in the model.
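The completeness check just described can be sketched as a CRUD matrix recording which operations the business functions perform on each entity. The entity names and matrix contents are invented for illustration; the two flags correspond to the two problems Wazlawick identifies.

```python
# Hypothetical CRUD matrix: entity -> set of operations performed on it.
crud_matrix = {
    "Customer": {"C", "R", "U"},
    "Order": {"R"},        # nothing ever creates an Order: model incomplete
    "AuditLog": set(),     # no function touches it: entity may not be needed
}

# Entities no business function ever creates suggest an incomplete model.
never_created = [e for e, ops in crud_matrix.items() if "C" not in ops]

# Entities never touched by any CRUD operation may not belong in the ERD.
never_touched = [e for e, ops in crud_matrix.items() if not ops]

print("never created:", never_created)
print("never touched:", never_touched)
```

In practice the matrix would be built from the keyword review of the requirements, then scanned for exactly these two kinds of gap.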
References
Aguilar Jr, M., Johns, C. R., & Nutter, M. R. (2014). U.S. Patent No. 8,734,254. Washington, DC: U.S. Patent and Trademark Office.
Aljawarneh, S. A., Alawneh, A., & Jaradat, R. (2017). Cloud security engineering: Early stages of SDLC. Future Generation Computer Systems, 74, 385-392.
Coronel, C., & Morris, S. (2016). Database systems: design, implementation, & management. Cengage Learning.
Devine, C. Y., Shifrin, G. A., & Shoulberg, R. W. (2015). U.S. Patent No. 8,935,772. Washington, DC: U.S. Patent and Trademark Office.
Duncan, R., Jungck, P., Ross, K., Mulcahy, D., & Nguyen, M. (2016). Using packet processing object modules interchangeably as stand-alone programs or “multi-app” components. International Journal of Parallel Programming, 44(1), 26-45.
Hite, J. M., Abdel-Khalik, H. S., Smith, R. C., Wentworth, M., Prudencio, E., & Williams, B. (2013). Uncertainty Quantification and Data Assimilation (UQ/DA) Study on a VERA Core Simulator Component for CRUD Analysis CASL-I-2013-0184-000. Milestone Report for L, 2.
Romero, D., & Vernadat, F. (2016). Enterprise information systems state of the art: past, present and future trends. Computers in Industry, 79, 3-13.
Rosenblatt, H. J. (2013). Systems analysis and design. Cengage Learning.
Silberschatz, A., Galvin, P. B., & Gagne, G. (2014). Operating system concepts essentials. John Wiley & Sons, Inc.
Somogyi, P. (2014). Analysis of server-smartphone application communication patterns.
Wazlawick, R. S. (2014). Object-Oriented Analysis and Design for Information Systems: Modeling with UML, OCL, and IFML. Elsevier.
Wessel, P., Smith, W. H., Scharroo, R., Luis, J., & Wobbe, F. (2013). Generic mapping tools: improved version released. Eos, Transactions American Geophysical Union, 94(45), 409-410.