Introduction to HCI
The processes by which humans interact with computer systems have come a long way. Advances in the HCI field focus not only on the quality of interaction but also on the design of robust, intelligent, and active interfaces.
Human-computer interaction researchers have created many interfaces that deviate from the "Window, Icon, Menu, Pointing device" (WIMP), or direct-manipulation, style of interaction [1]. The development of post-WIMP interfaces has been driven by new developments in information technology and by an improved understanding of human psychology. Examples of post-WIMP interaction forms include mixed, virtual, and augmented reality, tangible interaction, context-aware computing, and mobile or handheld interaction. These new interaction styles draw on users' knowledge of their own bodies, their environment, and other users.
The main aim is to make human-computer interaction resemble communication with the non-digital, real environment. This study provides an in-depth view of the applications, latest trends, and techniques in the field of interactive displays, focusing mainly on human interfaces. It also reviews human sensing and perception methods and gives an overview of human-computer interaction using natural interface methodologies based on the sensing of touch, sound, and vision [3].
3. Definition and Terminology of Human-Computer Interaction
HCI concerns the design of frameworks that produce effective communication between the computer, the user, and the required services, so as to achieve high performance in both the optimality and the quality of the interface [2]. The available technology also shapes the design of the different categories of HCI that share this purpose; for example, the functionality of a computer system may be accessed through a Graphical User Interface (GUI), commands, menus, or virtual reality technology. The next section provides an overview of different interaction techniques and the equipment needed to interact with computer systems [4].
4. Advances in HCI
The latest advances in HCI can be classified into three categories: wearable devices, virtual devices, and wireless devices. Examples include:
- Global Positioning Navigation Systems
- Military soldier enhancing devices
- Personal Digital Assistant
- Canesta keyboard with a QWERTY layout
The architecture of an HCI system describes its configuration and interface design, and is normally characterized by the number of inputs and outputs provided. Each independent single data channel is termed a modality [13]. HCI systems are categorized by the number and nature of their modalities:
- Visual-Based
- Audio-Based
- Sensor-Based
Visual-based HCI tracks the different aspects of human response that can be recognized as visual signals. The main research areas are:
- Body motion tracking
- Eye movement or gaze tracking
- Facial expression analysis
- Gesture recognition
Audio-based HCI deals with reliable and usable information perceived from various audio signals. The research areas are:
- Musical interaction
- Speaker recognition
- Emotion analysis
- Human-made noise detection
- Speech analysis
Sensor-based HCI has a wide variety of applications; the sensors used are listed below:
- Pressure sensors
- Taste or smell sensors
- Haptic sensors
- Joysticks for games
- Motion tracking digitizers
The combination of multiple modalities is referred to as a multimodal HCI system. A multimodal HCI interface thus facilitates human-computer interaction through two or more input modes. Typical applications include the following (a minimal fusion sketch appears after the list):
- Reliable video conferencing
- E-business
- Assistance for people with disabilities
- Smart or intelligent homes and offices
- Driver monitoring
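As a concrete illustration, the following minimal Python sketch shows one way a multimodal interface might fuse two input modes, pairing a speech command with a nearly simultaneous pointing gesture ("put-that-there" style). The event fields and the fusion time window are hypothetical, not taken from any system cited above.

```python
import time
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str   # e.g. "speech" or "gesture"
    payload: str    # recognized command or pointed-at object
    timestamp: float

def fuse(events, window=1.5):
    """Pair each speech event with any gesture event that occurred
    within `window` seconds of it (a classic "put that there" fusion)."""
    speech = [e for e in events if e.modality == "speech"]
    gesture = [e for e in events if e.modality == "gesture"]
    fused = []
    for s in speech:
        for g in gesture:
            if abs(s.timestamp - g.timestamp) <= window:
                fused.append((s.payload, g.payload))
    return fused

now = time.time()
events = [
    InputEvent("speech", "move that", now),
    InputEvent("gesture", "object_7", now + 0.4),
]
print(fuse(events))   # [('move that', 'object_7')]
```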
6. Human-Computer Interaction Techniques
Making sense of information by means of information technology has become a ubiquitous process in the digital environment [7]. It involves not only searching for information but also building expertise in new domains, working through unstructured problems, acquiring situation awareness, and participating in knowledge exchanges. Examples of sense-making processes include evaluating expenses, service plans, trade-offs, and features during user decision making, and gathering and structuring information [5]. The information can relate to a medical condition, to treatment facilities and the adjustments involved in selecting a treatment, to the investigation of a subject-matter domain, and so on.
In other words, sense making is a method of forming and working with representations, which determine the complexity of the ensuing computation. In sense making, representation is therefore the key element: it structures computation, which in turn structures intelligence.
The schema, or representation, formed by the sense-making process aggregates external data, filters out irrelevant information, and is shaped to handle the task at hand effectively and efficiently [6]. Sense making is a two-way process of fitting data into a frame and fitting a frame around the data: the data evoke frames, which in turn connect and select data. When the fit is inadequate, the data are reconsidered or the existing frame is revised. A frame used in narrative sense making typically captures elements such as the following (a minimal data-structure sketch appears after the list):
- Agent
- Type of action
- Action modality (the manner in which the action is performed)
- Action setting
- Action rationale (cause, state of mind, and aim)
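A minimal sketch of how these narrative elements might be captured as a data structure; all field names and sample values are illustrative assumptions rather than part of any cited formalism.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NarrativeEvent:
    """One event in an analyst's story, mirroring the elements listed above."""
    agent: str                 # who acts
    action_type: str           # what kind of action
    modality: Optional[str]    # manner in which it is performed
    setting: Optional[str]     # where/when it takes place
    rationale: Optional[str]   # cause, state of mind, or aim

event = NarrativeEvent(
    agent="courier",
    action_type="delivery",
    modality="on foot",
    setting="city centre, evening",
    rationale="avoid vehicle checkpoints",
)
```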
6.1.1. Applications of Sense Making
- The sense-making concept is primarily used to create shared awareness and to understand various persons' interests and perspectives within organizations.
- Sense making is central to the conceptual structure of network-centric defence operations. In a collaborative defence environment, sense making is complicated by many factors: technical, cultural, social, operational, and organizational.
A story is a powerful device used by intelligence analysts to learn about threats and to analyze patterns in analytical methods. The main objective of storytelling is to craft coherent and clear intelligence reports that lead to a set of actions. This capability can be used to create narratives and stories; in this way, the sense-making process can be triggered whenever the user experiences a context gap.
Collecting entities is a difficult task in the process of creating narratives. A key capability in intelligence analysis is representing entity relationships: entities such as locations can be viewed on a map, relations can be displayed in network diagrams, and sequences can be viewed on a timeline chart.
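The sketch below illustrates, under assumed field names and sample data, how extracted entities might be organized so that relations feed a network diagram and timestamps feed a timeline view; it is not the design of any particular analysis tool.

```python
from collections import defaultdict

# Entities extracted from a report; ids, kinds, and labels are illustrative.
entities = [
    {"id": "p1", "kind": "person",   "label": "Courier"},
    {"id": "l1", "kind": "location", "label": "Harbour", "latlon": (53.55, 9.99)},
]
labels = {e["id"]: e["label"] for e in entities}

# Each relation links two entities and carries a timestamp, so the same
# data can feed a network diagram (edges) and a timeline (sorted by time).
relations = [("p1", "visited", "l1", "2016-10-15T09:30")]

network = defaultdict(list)
for src, rel, dst, ts in relations:
    network[src].append((rel, dst))

timeline = sorted(relations, key=lambda r: r[3])   # ISO strings sort correctly

print(f"{labels['p1']} --visited--> {labels['l1']}")
print(timeline[0])   # earliest event first
```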
However, this formalism has limitations:
- Some entity relationships are difficult to visualize.
- The formalism reduces the narrative interpretation to only the information contained within it.
- The formalism creates gaps between interesting real-world occurrences and the stereotypical expectations embedded in the representation.
Record keeping is the process of recording data about a task, including system states, annotations, snapshots used for visualization, notes, and other material for later analysis such as to-do lists and reminders. Record-keeping activities can be performed within the sense-making process by improving their design in tools used for collaborative visual analysis, for example by embedding record keeping into an integrated visual analysis tool.
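As an illustration, a minimal record-keeping log that could be embedded in a visual analysis tool is sketched below; the entry kinds (snapshot, note, to-do) follow the description above, while the file format and API are assumptions.

```python
import json
import time

class AnalysisLog:
    """Minimal record-keeping log for a visual analysis session.
    The schema (state snapshots, notes, to-dos) is an assumption,
    not the design of any specific tool."""
    def __init__(self, path):
        self.path = path

    def record(self, kind, payload):
        entry = {"t": time.time(), "kind": kind, "payload": payload}
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")   # one JSON entry per line

log = AnalysisLog("session.jsonl")
log.record("snapshot", {"view": "timeline", "zoom": 2.0})
log.record("note", "cluster near harbour looks anomalous")
log.record("todo", "cross-check courier sightings")
```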
6.2. Post-WIMP Interfaces
In the field of computing, post-WIMP describes work on user interfaces, mainly graphical user interfaces, that move beyond the WIMP paradigm.
Van Dam defines post-WIMP interfaces as those "containing at least one interaction technique not dependent on classical 2D widgets such as menus and icons". Such interfaces may involve all the senses in parallel, multiple users, and communication through natural language [7].
Traditional direct-manipulation interaction brought interfaces closer to real-world interaction by letting users manipulate objects directly rather than typing complicated commands. The newer post-WIMP style pushes the realism of interface widgets further, letting users work with them directly through everyday actions from the non-digital, real world [8].
Here, the notion "real world" refers to the non-digital, physical world. Post-WIMP frameworks mainly draw on the real-world themes listed below:
- Naive physics: common-sense knowledge about the physical world.
- Social awareness and skills: the user's skills for interacting with other people.
- Body awareness and skills: the user's awareness of his or her own physical body and skills in coordinating and controlling it.
- Environment awareness and skills: the user's knowledge of the surrounding environment and skills for moving and negotiating within it.
The above-listed themes now play a major role in emerging interaction styles.
Naive physics is the user's understanding of basic real-world principles: common-sense knowledge of the physical world, involving aspects such as velocity, gravity, friction, and the persistence of objects [6]. Emerging post-WIMP interfaces exploit these real-world properties directly; for example, the Apple iPhone gives its graphical objects the illusion of mass, friction, and rigidity.
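A minimal sketch of such pseudo-physics is momentum ("kinetic") scrolling, where velocity decays under simulated friction after the finger lifts; the friction constant and stop threshold below are illustrative guesses, not values from any shipping interface.

```python
def kinetic_scroll(position, velocity, friction=0.95, dt=1/60, steps=120):
    """Simulate momentum scrolling: after the finger lifts, velocity
    decays geometrically each frame, giving the illusion of mass and
    friction. All constants are illustrative."""
    trace = []
    for _ in range(steps):
        position += velocity * dt
        velocity *= friction          # frictional decay per frame
        trace.append(position)
        if abs(velocity) < 1.0:       # stop below a small threshold
            break
    return trace

# Finger lifts at 1200 px/s; the trace shows the content coasting to rest.
print(kinetic_scroll(position=0.0, velocity=1200.0)[-1])
```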
Users are generally aware of the presence of other users and develop skills for social interaction. These include verbal and non-verbal communication, the ability to exchange physical objects, and the ability to work with other users on a joint task [6]. Emerging interaction styles support both remote and co-located interaction as well as social awareness. Virtual environments exhibit social awareness and the related skills by making users present as avatars and by making the avatars' actions visible.
Body awareness is the user's familiarity with his or her own physical body, independent of the environment: for instance, awareness of the range of motion and relative placement of the limbs, and the senses involved in coordinating them [6]. Emerging post-WIMP interfaces provide input methods that draw on these skills, such as whole-body or two-handed interaction. An example is a virtual reality environment in which users walk from one place to another inside the virtual world.
By mirroring the user's physical body in the virtual environment, virtual reality interfaces let users perform actions with their own bodies. Reflecting environment awareness, context-aware systems use the location and orientation of the user and display data according to the user's relative position in the real environment [7].
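As a toy illustration of the filtering such a context-aware system might perform, the sketch below selects points of interest by distance and by the user's heading; the POI data, range, and field-of-view angle are all assumptions.

```python
import math

# Hypothetical points of interest; a real system would query a database.
POIS = [("cafe", 10.0, 5.0), ("museum", 80.0, 40.0)]

def nearby(user_x, user_y, heading_deg, max_dist=20.0, fov_deg=90.0):
    """Return POIs within range and inside the user's field of view,
    the kind of filtering a context-aware display might perform."""
    visible = []
    for name, x, y in POIS:
        dx, dy = x - user_x, y - user_y
        dist = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dy, dx))
        # Smallest signed angular difference between bearing and heading.
        diff = (bearing - heading_deg + 180) % 360 - 180
        if dist <= max_dist and abs(diff) <= fov_deg / 2:
            visible.append((name, dist))
    return visible

print(nearby(user_x=0.0, user_y=0.0, heading_deg=30.0))  # [('cafe', 11.18...)]
```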
6.2.3. Multi-touch technology
In the field of computing, multi-touch is the ability of a surface (e.g., a touch screen or trackpad) to sense the presence of two or more points of contact [8].
Multi-touch technology offers many opportunities for interacting with GUIs, enabling expressive gesture control and multi-user interaction with comparatively simple and inexpensive hardware and software configurations.
Multi-touch technology can be categorized into five types according to how touch is sensed:
Surface acoustic wave: the touch event is registered by a change in ultrasonic waves travelling over the panel, and the touch information is then sent to the controller for processing. This technology provides higher image clarity and resolution than capacitive and resistive technologies.
Infrared: the touch event is registered by the interruption of an infrared light grid positioned in front of the display screen.
Capacitive: a glass panel is coated with a charge-storing material. It works only with finger contact, not with a pen stylus or a gloved hand.
Resistive: a glass panel is covered with electrically resistive and conductive layers separated by invisible spacer dots. In operation, an electrical current flows across the screen [7]; a touch is registered when pressure on the screen causes a change in the electrical current.
Optical: optical sensors detect the touch point, and the touch can be registered before any physical contact with the screen. Any input device, such as a stylus, paintbrush, or finger, can be used, and the user may apply zero or light touch to obtain a response [9].
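To make the resistive description above concrete, the sketch below converts the voltages read from a 4-wire resistive panel into screen coordinates, treating each axis as a voltage divider; the reference voltage and screen dimensions are illustrative assumptions.

```python
def resistive_touch_position(v_x, v_y, v_ref=3.3, width=320, height=240):
    """Convert ADC voltages from a 4-wire resistive panel into screen
    coordinates. On such panels each axis acts as a voltage divider:
    the measured voltage is proportional to the touch position along
    the energised layer. Wiring and scaling here are illustrative."""
    x = (v_x / v_ref) * width
    y = (v_y / v_ref) * height
    return x, y

# A touch producing 1.65 V on both axes maps to the screen centre.
print(resistive_touch_position(1.65, 1.65))  # (160.0, 120.0)
```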
The basic option for multi-touch screen interaction is to use the n points corresponding to n touches directly. To achieve more effective interaction between user and screen, studies [8] describe different methods of utilizing the touch input.
An important finding in these studies [10] is that the single-handed method is suitable for collaborative tasks, while the dual-handed method is suitable for individual tasks.
The common multi-touch gestures are shown in Fig 1.
Fig 1: Figures representing various multi-touch gesture inputs
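Two of the gestures in Fig 1, pinch-to-zoom and two-finger rotate, can be derived directly from the positions of two contact points, as the sketch below shows; the point format and sample values are assumptions.

```python
import math

def pinch_and_rotate(p1_old, p2_old, p1_new, p2_new):
    """Given the old and new positions of two contact points, derive
    the scale factor (pinch) and rotation angle (twist) used by
    typical multi-touch gestures. Points are (x, y) tuples."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])
    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])
    scale = dist(p1_new, p2_new) / dist(p1_old, p2_old)
    rotation = math.degrees(angle(p1_new, p2_new) - angle(p1_old, p2_old))
    return scale, rotation

# Fingers move apart and twist slightly: zoom in ~1.41x, rotate ~8 degrees.
print(pinch_and_rotate((0, 0), (100, 0), (-20, 0), (120, 20)))
```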
One class of force-sensitive multi-touch surfaces offers exceptional scalability and resolution, allowing users to build multi-point surfaces large enough for applications involving both hands and multiple users [8].
TouchData LLC provides multi-touch solutions that are used in many sectors, including engineering, academia, marketing, tourism, media, and medicine.
Gestures occur in several forms across several application domains and involve various input/output devices [9].
Fig 2: Classification of Gesture Based Human Computer Interactions
Deictic gestures involve establishing the identity or location of an object within the context of the application domain, which may be a virtual reality application, a desktop computer, or a mobile device.
Manipulative gestures can take place either on the desktop, as two-dimensional interaction using devices such as a stylus or mouse, or as three-dimensional interaction involving free-handed movements [10].
Semaphoric gestures are mainly used to enable interaction at a distance in intelligent environments and smart rooms. They include static gestures, dynamic gestures, and stroke gestures.
Sign-language gestures are used for sign languages and are independent of the other gesture styles. They differ from gesticulation in that the gestures corresponding to symbols are stored in the recognition system [7].
Classification of gesture interaction by enabling technology
Device-based input involves devices that are used to enter the gesture and that require physical contact to transmit location or spatial data to the computer. Gesture input can be made through mouse and pen input, touch and pressure input, audio input, and various electronic sensing mechanisms [11].
Perceptual input technology employs audio, motion, or visual sensors that receive input from the user's actions, location, or speech within the environment [16]; it requires no additional input device.
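As a toy example of device-based stroke input, the sketch below classifies a recorded mouse or pen stroke as a directional swipe; real stroke recognisers typically use template matching (e.g., the $1 recognizer), so this is only a minimal stand-in.

```python
def classify_stroke(points):
    """Classify a recorded pen/mouse stroke as a directional swipe by
    comparing its start and end points. A deliberately simple stand-in
    for real stroke-gesture recognisers."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"

stroke = [(0, 0), (40, 3), (90, 5), (140, 4)]   # mostly horizontal
print(classify_stroke(stroke))                  # swipe_right
```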
Classification of gesture interaction by application domain
In virtual reality applications, the user's physical body is displayed as an avatar, movements are made through the virtual environment, and objects in fully or partially immersive interactions are controlled, for example to operate robots or vehicles in telepresence and telerobotic applications [10].
In desktop applications, gestures are an alternative to keyboard and mouse interaction. In tabletop applications, they are used within specific domains such as air traffic control, musical score editing, and collaborative work [15].
The feedback obtained from gestures is mainly based on visual displays. The visual output can be two-dimensional (desktop screens, projected displays, and portable devices) or three-dimensional (stereoscopic or head-mounted displays) [13].
Audio output is used primarily in the mobile and pervasive computing domain for visually impaired individuals [14].
One of the main open issues in gesture-based interaction is how a processor can distinguish separate gestures when they are performed in succession [12]. This is a primary problem for bare-handed or open-handed gestures, which require determining the start and end points of each gesture in a sequence. Computer vision is also a difficult input channel, with well-known issues such as handling skin tone, noise filtering, and feature detection [14].
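One simple, commonly assumed approach to this segmentation problem is to threshold motion energy: a gesture is taken to start when hand speed rises above a threshold and to end when it falls back below. The sketch below implements that idea with illustrative thresholds.

```python
def segment_gestures(speeds, threshold=0.5, min_len=3):
    """Split a stream of hand-speed samples into candidate gestures:
    a gesture starts when motion rises above `threshold` and ends when
    it falls back below. Thresholds are illustrative."""
    segments, start = [], None
    for i, s in enumerate(speeds):
        if s > threshold and start is None:
            start = i                      # motion begins
        elif s <= threshold and start is not None:
            if i - start >= min_len:       # ignore brief twitches
                segments.append((start, i))
            start = None
    return segments

speeds = [0.1, 0.2, 0.9, 1.3, 1.1, 0.8, 0.2, 0.1, 0.7, 0.9, 0.8, 0.1]
print(segment_gestures(speeds))  # [(2, 6), (8, 11)]
```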
The gesture research discussed in this paper attempts to solve long-standing problems of traditional gesture-based interaction systems. Other problems with gesture interaction include artificial or overly complex gestures, fatigue from prolonged use, security concerns when gesturing in public, handling multiple users, and the cost of setting up complex systems [3].
AMOLED Displays
An AMOLED display mainly consists of an active matrix of OLED pixels driven by an array of thin-film transistor (TFT) switches that control the state of each pixel. The pixels are arranged either on top of one another or side by side [12].
AMOLED displays provide a higher refresh rate than passive-matrix OLEDs and consume relatively little power.
The main technical challenge of AMOLED is the limited lifetime of the organic materials. The layers can be severely damaged by water ingress, so a greatly improved sealing process is required during manufacturing. The colour output must also be adjusted over the display's lifetime, because the OLED material used to produce blue light degrades faster than the materials producing other colours; without correction, the resulting colour imbalance is highly noticeable.
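A hedged sketch of what such colour correction might look like: per-channel drive gains that grow with panel-on time, with blue compensated the most. The decay rates are invented for illustration and are not data from any panel vendor.

```python
def compensate_aging(r, g, b, hours_on, decay_per_khr=(0.01, 0.012, 0.03)):
    """Boost drive levels to offset differential OLED aging; blue gets
    the largest correction because its material degrades fastest.
    Decay rates per 1000 hours are illustrative, not panel data."""
    gains = [1.0 / max(1e-6, 1.0 - d * hours_on / 1000.0)
             for d in decay_per_khr]

    def clip(v):
        return min(255, int(round(v)))   # stay within 8-bit drive range

    return clip(r * gains[0]), clip(g * gains[1]), clip(b * gains[2])

# After 5000 hours, blue receives ~18% more drive than at first power-on.
print(compensate_aging(200, 200, 200, hours_on=5000))  # (211, 213, 235)
```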
Conclusion
Human-computer interaction is a central part of the design of visualization systems. The quality of a system depends on how it is represented to and used by its users; hence, much attention has been paid to effective HCI design. Conventional HCI methods are being replaced by natural, intelligent, and multimodal interaction methods. This paper has described the narration of a set of stories through sense making and illustrated a collaborative sense-making technique that creates different artefacts for visualizing the stories shared between collaborating users. Even though visualization embeds all entity relationships, a separate visualization approach is required for defining narratives.
Ubiquitous computing embeds HCI technologies into the environment so as to make interaction more natural and real. Moreover, virtual reality is an emerging trend in HCI that may become the most common interface in the coming future.
This paper has attempted to explain several interaction techniques, their issues and applications, and to survey the research through an inclusive reference list.
References
[1] D. Te’eni, J. Carey and P. Zhang, Human Computer Interaction: Developing Effective Organizational Information Systems, John Wiley & Sons, Hoboken, 2007.
[2] D. Te’eni, “Designs that fit: an overview of fit conceptualization in HCI”, in P. Zhang and D. Galletta (eds), Human-Computer Interaction and Management Information Systems: Foundations, M.E. Sharpe, Armonk, 2006.
[3] B.A. Myers, “A brief history of human-computer interaction technology”, ACM interactions, 5(2), pp 44-54, 1998.
[4] A. Murata, “An experimental evaluation of mouse, joystick, joycard, lightpen, trackball and touchscreen for Pointing - Basic Study on Human Interface Design”, Proceedings of the Fourth International Conference on Human-Computer Interaction, pp 123-127, 1991.
[5] S. Brewster, “Non speech auditory output”, in J.A. Jacko and A. Sears (eds), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Application, Lawrence Erlbaum Associates, Mahwah, 2003.
[6] J. Vince, Introduction to Virtual Reality, Springer, London, 2004.
[7] K. McMenemy and S. Ferguson, A Hitchhiker’s Guide to Virtual Reality, A K Peters, Wellesley, 2007.
[8] ExtremeTech, “Canesta says ‘Virtual Keyboard’ is reality”, https://www.extremetech.com/article2/0,1558,539778,00.asp, visited on 15/10/2016.
[9] A. Kirlik, Adaptive Perspectives on Human-Technology Interaction, Oxford University Press, Oxford, 2006.
[10] D.M. Gavrila, “The visual analysis of human movement: a survey”, Computer Vision and Image Understanding, 73(1), pp 82-99, 1999.
[11] A. Jaimes and N. Sebe, “Multimodal human computer interaction: a survey”, Computer Vision and Image Understanding, 108(1-2), pp 116-135, 2007.
[12] I. Cohen, N. Sebe, A. Garg, L. Chen and T.S. Huang, “Facial expression recognition from video sequences: temporal and static modeling”, Computer Vision and Image Understanding, 91(1-2), pp 160-190, 2003.
[13] M. Pantic and L.J.M. Rothkrantz, “Automatic analysis of facial expressions: the state of the art”, IEEE Transactions on PAMI, 22(12), pp 1424-1448, 2000.
[14] B. Shneiderman and C. Plaisant, Designing the User Interface: Strategies for Effective Human-Computer Interaction (4th edition), Pearson/Addison-Wesley, Boston, 2004.
[15] D. Norman, “Cognitive Engineering”, in D. Norman and S. Draper (eds), User Centered Design: New Perspective on Human-Computer Interaction, Lawrence Erlbaum, Hillsdale, 1986.
[16] L.R. Rabiner, Fundamentals of Speech Recognition, Prentice Hall, Englewood Cliffs, 1993.