Big Data Technologies in Mobile Medical Information Systems

Advancement of new technologies in medical systems

Read the following CASE STUDY and answer ALL questions according to the sequence of the questions.

Advances in medical-system technologies such as wearable sensors, computing systems, networking, and mobility have increased the amount, type, and variety of data collected, processed, and stored in hospitals. Figure 1 shows the different parts of a Mobile Medical Information System (mMIS) used as a hospital management system. Data from medical sensors are transferred to the patient's smartphone and to the administrator's computer via a communication network. Each patient needs a profile on the administrator's computer that holds his/her personal data, medical data, medications, and medical records. After the data are transferred and processed on the smartphone and the administrator's computer, the required data are sent to the clinic, the assigned nurse, the assigned doctor, the emergency centre, and the patient's family. These data are complex and vary in type. A set of complex, multimodal medical data that cannot be analysed by traditional data-processing software is called big medical data (BMD). Big data technologies are required to handle BMD, from storage through analytics and decision-making; however, security and privacy challenges must be considered. BMD can consist of physiological, behavioural, molecular, clinical, environmental exposure, medical imaging, disease management, medication prescription history, nutrition, or exercise parameters, etc.
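
To make the data flow above concrete, here is a minimal Python sketch of a patient profile and the fan-out of one processed sensor reading to the recipients named in the case. All names (PatientProfile, route_reading, etc.) are hypothetical illustrations, not part of any actual mMIS.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the mMIS data flow described in the case study.
# Class and function names are illustrative only.

STAKEHOLDERS = ["clinic", "assigned_nurse", "assigned_doctor",
                "emergency_centre", "family"]

@dataclass
class PatientProfile:
    """Profile kept on the administrator's computer."""
    patient_id: str
    personal_data: dict = field(default_factory=dict)
    medical_data: dict = field(default_factory=dict)
    medications: list = field(default_factory=list)
    medical_records: list = field(default_factory=list)

def process_reading(raw: dict) -> dict:
    """Processing done on the smartphone / administrator's computer
    (placeholder for validation, unit conversion, smoothing, ...)."""
    return {**raw, "validated": True}

def route_reading(profile: PatientProfile, raw: dict) -> dict:
    """Attach a processed sensor reading to the profile and fan it
    out to the stakeholders listed in the case study."""
    reading = process_reading(raw)
    profile.medical_records.append(reading)
    return {who: reading for who in STAKEHOLDERS}

profile = PatientProfile(patient_id="P-001")
deliveries = route_reading(profile, {"sensor": "heart_rate", "bpm": 72})
print(sorted(deliveries))  # which stakeholders received the reading
```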

There are various heterogeneous sources of BMD, such as electronic medical records, diagnostics, biomarkers, ancillary data, medical claims, prescription claims, clinical trials, social media, and wearable sensors. Big Data Technologies (BDT) can be used for analytics of BMD. Some of the techniques for big data analytics are cluster analysis, data mining, graph analytics, machine learning, natural language processing, neural networks, pattern recognition, and spatial analysis. BDT can be used in mMIS in areas such as genomics, drug discovery and clinical research, personalized healthcare, precision medicine, elderly care, mental health, cardiovascular disease (CVD), diabetes, gynaecology, nephrology, oncology, ophthalmology, urology, etc. BDT is very useful for mMIS as it can "prevent medication errors by analysing patients' data; help early detection of disease; recognize high-risk patients; help accurate prediction of disease; offer patient-centric care; prevent security fraud; and reduce costs and hospital readmissions."
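
As a minimal sketch of one of these techniques, the example below applies cluster analysis (k-means via scikit-learn) to synthetic vitals, in the spirit of "recognizing high-risk patients". The data, features, and the high-risk labelling rule are fabricated for illustration and carry no clinical meaning.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy cluster analysis on synthetic vitals (heart rate, systolic BP).
# All numbers are fabricated; no clinical thresholds are implied.
rng = np.random.default_rng(0)
normal = rng.normal([72, 118], [6, 8], size=(80, 2))      # typical vitals
elevated = rng.normal([105, 160], [8, 10], size=(20, 2))  # elevated vitals
X = np.vstack([normal, elevated])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Treat the cluster with the higher mean vitals as the "high-risk" group.
high_risk = int(np.argmax(km.cluster_centers_.sum(axis=1)))
print(f"patients flagged high-risk: {np.sum(km.labels_ == high_risk)}")
```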

Big Data Analytics

(a). Design an information system (or systems) to improve the functionality of the Mobile Medical Information System (mMIS) in the case.

(b). Create a strategic plan, using Porter's Competitive Forces Model, for the Mobile Medical Information System (mMIS) to address its business objectives.

(c). Create a FIVE-STEP process plan for an ethical analysis of the Mobile Medical Information System (mMIS).

Big Data Technologies for analysis of complex medical data

(d). Create a plan for safeguarding the Mobile Medical Information System (mMIS).

Read the following CASE STUDY and answer ALL questions according to the sequence of the questions.

Room2Room is a telepresence system that leverages projected augmented reality (AR) to enable life-size, co-present interaction between two remote participants. It recreates the experience of a face-to-face conversation by performing 3D capture of the local user with color and depth cameras and projecting their life-size virtual copy into the remote space. This creates an illusion of the remote person's physical presence in the local space, as well as a shared understanding of verbal and non-verbal cues such as gaze and pointing.

Room2Room enables co-present interactions between remote participants by integrating person space, task space, and reference space at the life-size scale. 

In Figure 1 (sections a and d), remote participants are represented as life-size virtual copies projected into the physical space. Each participant sees their partner's virtual copy with correct perspective, and they can communicate naturally using speech and nonverbal cues (Figure 1, sections b and c). With ongoing hardware enhancements, the image quality of the projected participants has improved, as shown in Figure 1 (section e).

Figure 1: Remote Participants

This is accomplished by extending an existing spatial AR system, RoomAlive, to two separate locations. A complete, room-scale projected AR system, comprising three ceiling-mounted projector-camera units ("procams"), is deployed in each room (Figure 2). Each procam includes a Microsoft Kinect v2 color and depth camera and a commodity wide field-of-view projector (BenQ 770ST). The Kinect sensors capture the geometry and appearance of the environment and the people in it, while the projectors display virtual content in the environment, including virtual copies of people and objects.

Each Kinect is hosted by a PC that serves Kinect sensor data, such as depth, color, body tracking (the user's skeleton joint positions), and audio, to clients via the network. In this way, virtual copies of real people and objects in a remote environment are captured and projected into a local physical environment using commodity projectors. Once the remote participant is projected into the shared physical location, the two participants can hold a natural, consistent conversation while solving a collaborative physical task.
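
The sketch below shows how a host PC could serve body-tracking frames to network clients, assuming a simple newline-delimited JSON protocol over TCP. It is not the actual Room2Room networking code, and read_skeleton_frame() is a hypothetical stand-in for the Kinect SDK call.

```python
import json
import socket
import time

def read_skeleton_frame() -> dict:
    """Hypothetical stub returning one frame of joint positions (metres);
    a real system would read these from the Kinect v2 body tracker."""
    return {"timestamp": time.time(),
            "joints": {"head": [0.0, 1.7, 2.0],
                       "hand_right": [0.3, 1.1, 1.8]}}

def serve(host: str = "0.0.0.0", port: int = 9000) -> None:
    """Stream tracking frames to the first client that connects."""
    with socket.create_server((host, port)) as srv:
        conn, _addr = srv.accept()
        with conn:
            for _ in range(100):                 # stream 100 frames, then stop
                frame = json.dumps(read_skeleton_frame()) + "\n"
                conn.sendall(frame.encode())     # newline-delimited JSON
                time.sleep(1 / 30)               # ~30 fps, the Kinect v2 rate

if __name__ == "__main__":
    serve()
```

A client can read the stream line by line (e.g. `nc localhost 9000`) and reconstruct each frame with json.loads.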

Figure 2: Kinect sensor data

Current videoconferencing applications (e.g. Skype, FaceTime) are limited in many ways: they afford only partial views of remote participants, in 2D, on a flat screen, and at a reduced scale. These technical constraints limit the sense of co-presence and the ability to communicate naturally using gaze, gesture, posture, and other nonverbal cues, or to share personal space.

Furthermore, while some applications support the notion of a shared task space (e.g. the desktop-sharing feature in Skype), this task space is typically completely virtual and separate from the personal space. Finally, there is limited or no support for the use of nonverbal cues (such as pointing) to refer to objects in the task space, a capability known as reference space, which limits many collaborative tasks. Previous research in telepresence systems offered solutions to some of these restrictions, e.g. enabling 3D, view-dependent rendering of participants and supporting gesturing and pointing in the task space.

In contrast to traditional videoconferencing approaches, the virtual copy of the remote participant is projected directly into the physical environment, rendered at life-size scale, and in a view-dependent, perspective-corrected way, such that the local participant can see them from different viewpoints as they move. Furthermore, remote participants are rendered on top of existing real furniture, which makes them appear as if they are inhabiting the same space. This facilitates more natural interaction since people can see each other fully and make better use of nonverbal cues such as gaze, posture, and gestures. Room2Room does not require users to wear any display or tracking equipment, nor does it represent them as avatars. Their appearance and movements are faithfully reproduced on their virtual copies, to within sensor limits.
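
The core of view-dependent, perspective-corrected projection can be illustrated with a little geometry: to make a virtual point appear at the right place for a tracked viewer, the renderer draws it where the ray from the viewer's eye through the point meets the projection surface. The sketch below illustrates this general principle, not Room2Room's actual renderer; the surface is modelled as the wall plane z = 0.

```python
import numpy as np

def project_to_wall(eye: np.ndarray, point: np.ndarray) -> np.ndarray:
    """Intersect the ray eye->point with the wall plane z = 0.
    Assumes the ray is not parallel to the wall (d[2] != 0)."""
    d = point - eye              # ray direction from the viewer's eye
    t = -eye[2] / d[2]           # ray parameter where z becomes 0
    return eye + t * d           # where the projector should draw the point

eye = np.array([0.0, 1.6, 3.0])    # viewer's tracked head, 3 m from the wall
point = np.array([0.5, 1.2, 1.0])  # virtual point 1 m in front of the wall
print(project_to_wall(eye, point)) # draw target on the wall: [0.75, 1.0, 0.0]
```

As the viewer moves, the intersection point shifts, which is what makes the projected copy read as volumetric from the local participant's viewpoint.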

Room2Room uses a set of three ceiling-mounted projector and camera units at each location, capable of projecting a life-size telepresence onto most surfaces of the room. In this setup, virtual copies can be projected onto various physical seating affordances, or shown standing in the room, giving flexible seating and room-scale collaboration. Placements are chosen so that the virtual copy forms a natural conversational arrangement with the local participant that is consistent across both spaces. The system innately supports collaborative tasks such as physical assembly, since both the participants and the task objects are situated in a common space.

The system's capability to provide a virtual copy of a remote participant in a shared, consistent, integrated person-task space between two participants represents tremendous opportunities for future applications in business, social, and recreational environments.

(a). List and explain the Room2Room hardware and software, and their functionality.

(b). Consider suitable use cases for Room2Room applications at USM or in your daily life. How can Room2Room be part of your team's plans? Describe one potential Room2Room application for your group.

(c). Explain how this potential Room2Room application can form a viable business model and generate revenue for your group.
