Sound and Music
People often spend their leisure time in entertainment venues where films are shown or music is played as a way of relaxing. Both film and music affect people across cultures and geographical regions because of the expression of humanity they carry. Designing beats that connect with the words and rhythm of a song plays a crucial role in satisfying the tastes and interests of different listeners. It is therefore important to understand the relevance of a consistent flow of words in music, as well as how non-speech audio can be used to present data successfully.
Sonification and the Concept of Song and Lyrics
Sonification is the creation of sound to relay information that is otherwise expressed as words or data. It involves identifying suitable musical sounds to represent information instead of the usual use of speech (Bodle, 2006). The rhythm of music is a sensitive aspect that is readily noticed in an environment because of its ability to arouse feelings. Music can communicate efficiently because the information it carries can reach many people and be understood by both the literate and the uneducated. The ability of sound to engage an audience stimulates imagination and makes people attentive. The context in which a pattern of sound is played informs the audience of the situation even without anyone speaking to them. This knowledge of relaying information without speech is applied in ambulance sirens, which alert road users that there is an emergency and call upon them to pave the way for the ambulance. Similarly, film producers use sound to create the mood of movies, because soundtracks help to move the audience and direct their reaction towards a situation (Jones, 2011).
In the same way that visualization uses pictures to represent data of different shapes, sizes, and colors, sonification is also applicable to text. Individuals can create sounds of their preference to represent certain information about themselves or to identify others. This symbolic attachment of sound in personalizing text messages is a form of sonification (Barrett, 2016). Technological advancement has led to mobile devices on which owners can assign favorite ringtones and soundtracks to reveal the caller or identify the sender of a message. By listening to the sound, one can tell what the notification is about without looking at the phone. Ringtones are also used to differentiate the type of communication by indicating whether it is a call, email, or message (Akiyama, 2014). Using sonification to describe the type of message is the same technique applied in alarm clocks, where the owner sets an alert tone to communicate a reminder (Haas & Brandes, 2009). Similarly, when school bells ring, their sound is a routine signal that dictates the next activity in class or for the whole school. Electronic signals are installed at the entrance of some homes to inform the residents when visitors arrive.
Lyrics are the words that a musician composes and writes down after brainstorming the idea and theme of a song they want to release. There must be sound to attract listeners and for the lyrics to deliver the intended message. The final copy of the song is a composition of the lyrics and the non-speech audio that is included purposefully to draw the audience to listen (Cobussen, Meelberg & Truax, 2016).
Analytical Discussion of Text Sonification
Communication through email and text messages has trended upward in recent years, and many people have embraced it because of the availability of mobile devices such as phones and computers (Perry & Lee, 2007). While communicating, users of these devices need to be notified instantly when new messages arrive so that they can maintain the flow of feedback (Mohd & Wong, 2008). Audio notifications such as melody tones are still only minimally used to help pass on messages. In an attempt to understand how messages can be encoded into an understandable melody, Alt, Shirazi, Legian and Schmidt studied how users can use melody to know the intentions of a message even before looking at its contents. Their focus on text sonification and its impact on instant messaging led them to carry out a survey and discuss the various effects that accompany the use of notification tones to communicate messages. Mobile devices have platforms for intercepting incoming communication, such as messages and calls, and analyzing the content for the user. Through Application Programming Interfaces, a musical representation of a message can be created to help users know the intentions of an incoming message, or to tell who the caller is, because a tone is assigned to represent different people (Filimowicz, 2014). When users get used to the melodies and notification tones of different messages, they learn the content and meaning of those messages and can identify what type of message the melodies refer to in future. The first task for new users is to learn to interpret the musical messages, after which guessing and understanding the intentions of a message becomes routine. The musical representation of words in a message gives users a hint of what the message is about.
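The tone-per-sender idea described above can be illustrated with a minimal sketch. All names and note values here are hypothetical assumptions for the illustration; real mobile platforms expose this behavior through their own notification APIs.

```python
# Minimal sketch of per-sender tone assignment: each contact is mapped
# to a short note sequence (MIDI note numbers), so a listener can tell
# who is calling or messaging without looking at the phone.
SENDER_TONES = {
    "Alice": [60, 64, 67],   # C major arpeggio
    "Bob":   [62, 65, 69],   # D minor arpeggio
}
DEFAULT_TONE = [60, 60]      # fallback for unknown senders

def tone_for(sender: str) -> list:
    """Return the melody that identifies an incoming message's sender."""
    return SENDER_TONES.get(sender, DEFAULT_TONE)
```

In practice, the mapping would be populated from the user's contact list rather than hard-coded.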
It is therefore argued that using text tones enables a user to get the communication without having to read the message (Sinha, 2012). Early inventions such as Morse code for non-verbal communication and the Audio Abacus, which transformed numbers into tones, formed the basis of applying sound to represent text messages (Walker, Lindsay & Godfrey, 2003).
The process of transforming a message into a melody involves converting each character of the message into a tone so that the whole message makes a tuneful melody. The intention of a message is communicated by considering the punctuation, keywords, and emotions in the message. Using melodies has changed how users perceive messages, because non-sonified messages are not checked or read as readily. The intentions that different messages represent determine the message-checking behavior of users. Users take more time before reading question messages because of the tendency to complete the task at hand first. The receive-to-read time for positively sonified messages is short because positive messages are often addressed with urgency, whereas users take more time to read and respond to negative messages (Kim & Wattanapongsakorn, 2015).
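The character-to-tone conversion described above can be sketched as follows. The mapping rules (one octave of pitches for letters, a rising cue for questions) are illustrative assumptions, not the actual scheme used in the research.

```python
# Sketch of character-to-tone conversion: each letter becomes a pitch,
# and the final punctuation shifts the melodic contour so the listener
# can hear the message's intention (e.g. a rising end for a question).
BASE_NOTE = 60  # middle C (MIDI note number)

def sonify(message: str) -> list:
    notes = []
    for ch in message.lower():
        if ch.isalpha():
            # spread letters over one octave of pitches
            notes.append(BASE_NOTE + (ord(ch) - ord('a')) % 12)
        elif ch == ' ':
            notes.append(0)  # 0 stands in for a rest between words
    if message.endswith('?'):
        notes.append(BASE_NOTE + 12)  # rising octave cue signals a question
    elif message.endswith('!'):
        notes.append(BASE_NOTE)       # emphatic return to the tonic
    return notes
```

A richer version would weight keywords and emotional cues in the message body, as the paragraph above describes, rather than only terminal punctuation.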
The survey on sonification reached several findings. First, there are no major differences between musicians and non-musicians in understanding the intentions of sonified messages, which implies that understanding the melody or tone of a message does not rely on knowledge of playing an instrument. It was also found that real user behavior depends on the intentions of a message: messages with positive intentions are checked and read faster than negative messages and questions, which are checked later. A musical representation of a message makes it more attractive and gives it a better chance of being read and responded to.
Critique of the Findings
While encoding a message, it is hard to know the interpretation that the receiver will attach to the melody or sonified message, because people differ in their interests and preferences. The sender of a melody might intend it to evoke a positive reaction, only for the receiver to perceive it as negative. The argument that people read and respond to positive messages faster than negative ones is also contrary to real-life situations: people deal with questions and negative messages more urgently in real life. This is evident in the way people respond to emergencies and disasters, acting promptly despite the communication being negative (Vihalemm, Kiisel & Harro-Loit, 2011). Besides, the technology for developing tones for individual characters in a message could be too costly for people to embrace, which overrides the single advantage that melodies are an easy way of understanding messages without having to read them.
The Listening Machine
Conversations over social network platforms have been increasing over time as a result of the creation of more social media sites such as Twitter, Facebook, WhatsApp, and Instagram. These conversations are people's expressions, either written in words or presented as pictures. The Listening Machine, created by Daniel Jones and Peter Gregson, produces musical content from conversations on a social network; it was used to deliver real-time communications through sonified tones that could be heard over the internet. It was based on Twitter conversations in the UK and aimed at generating live audio by using musical tones to represent words in the posts of those holding conversations on Twitter. The textual analysis of the conversations is done by linguistic analysis of the discussions and by relaying that information to an audio workstation that assigns tones to various words. The resulting music is then transmitted to online listeners as the conversation on Twitter takes place. The working structure of the Listening Machine was based on four distinct stages of operation. The first stage was the listener, which monitored the social media platform to sample the subject of discussion and detect new conversations as soon as they were posted. Secondly, the phonomat analyzed the posts, classified the content, and outlined the dominant words in the conversations. The conductor was the third stage; it analyzed the text and linked it to musical denotations so that the right sound fragment was attached to the text as the conversation took place. These sound pieces were then passed to the encoder, which took the audio output and encoded it for broadcast (Jones & Gregson, 2014). Online listeners were then able to follow a real-time live audio representation of the Twitter posts.
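The four-stage structure (listener, phonomat, conductor, encoder) can be sketched as a simple pipeline. The function bodies below are illustrative assumptions for the sketch, not the actual implementation by Jones and Gregson.

```python
# Toy pipeline modeled on the described stages of the Listening Machine.

def listener(stream):
    """Stage 1: sample new, non-empty posts as they appear."""
    return [post for post in stream if post.strip()]

def phonomat(posts):
    """Stage 2: classify content and rank the dominant words."""
    words = " ".join(posts).lower().split()
    return sorted(set(words), key=words.count, reverse=True)

def conductor(keywords, sound_bank):
    """Stage 3: attach a sound fragment to each recognized keyword."""
    return [sound_bank[w] for w in keywords if w in sound_bank]

def encoder(fragments):
    """Stage 4: serialize the audio fragments for broadcast."""
    return "|".join(fragments)

# Usage: a toy Twitter-like stream and a tiny sound bank.
stream = ["good morning", "good news today"]
bank = {"good": "maj3", "news": "min7"}
broadcast = encoder(conductor(phonomat(listener(stream)), bank))
```

The real system streamed continuously rather than processing a fixed list, but the data flow between stages follows the same shape.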
The Listening Machine adjusted its musical tempo depending on the rate at which conversations were made; it could slow down during the night, when fewer posts are sent because many people are asleep. Disparities between the language of the conversation and the musical expression were a common challenge, because music has no semantics capable of giving a musical sonification to every word used in the conversation (Pinch & Bijsterveld, 2013). This form of sonification is therefore not appropriate for conveying the original meaning of the words in a conversation, because it only classifies the general subject and denotes the meaning of known words, which makes it a doubtful source. Orchestras can nonetheless use the Listening Machine to generate musical tones for text in real time.
Critique of the Listening Machine
The process of using the Listening Machine to attach meaningful musical tones is laborious and involves multiple tasks. It makes little sense to wait for the machine to generate audio when one can read the conversation on social media even more quickly. A lot of time is spent composing tones for each word in the conversation and running the live algorithm; the activities involved are too demanding compared to reading the text as it is posted. Since the conversation involves different people, the language of expression differs, and there are instances where no musical tone exists for certain words, which distorts the original fashion of the conversation.
Comparison of Sonification of text and the listening Machine
Mobile devices can achieve a musical representation of information from text as a way to communicate (Alt, Legian, Schmidt & Mennenoh, 2010). Applying sound to create meaning for text is important, especially in the broadcast industry. Television and radio use sound as a signature to reinforce their brand and to differentiate their programs. Each segment of production involves playing a routine jingle that reminds listeners of what is about to be broadcast or the program that is currently on air. Similarly, in advertising, a melody is created to help prospective customers identify with the product. It is, however, not very effective to apply sound to attach meaning to everything. Mobile devices are owned by people who want to use them, so there is no need to use them selectively. Setting tones on a mobile phone cannot serve the purpose of ignoring or reading a specific message: communication, whether positive or negative, is important to the user, and tones are set to signal the user of a call or message, not to differentiate that which is important from the rest. Social media is also independent of broadcast media, and the two should be kept separate so that the needs and interests of various people are catered for.
References
Akiyama, M. (2014). Data Effect: Numerical Epistemology and the Art of Data Sonification. Leonardo Music Journal, 24, 29-32. https://dx.doi.org/10.1162/lmj_a_00192
Alt, F., Legian, S., Schmidt, A., & Mennenoh, J. (2010). Creating Meaningful Melodies From Text Messages, 63-68.
Barrett, N. (2016). Interactive Spatial Sonification of Multidimensional Data for Composition and Auditory Display. Computer Music Journal, 40(2), 47-69. https://dx.doi.org/10.1162/comj_a_00358
Bodle, C. (2006). Sonification/Listening Up. Leonardo Music Journal, 16, 51-52. https://dx.doi.org/10.1162/lmj.2006.16.51
Cobussen, M., Meelberg, V., & Truax, B. (2016). The Routledge Companion to Sounding Art (1st ed.). Georgetown: Taylor and Francis.
Filimowicz, M. (2014). Piercing Fritz and Snow: An aesthetic field for sonified data. Organised Sound, 19(1), 90-99. https://dx.doi.org/10.1017/s1355771813000447
Haas, R., & Brandes, V. (2009). Music that works (1st ed.). Wien: Springer.
Jones, D., & Gregson, P. (2014). The Listening Machine: Generating Complex Musical Structure from Social Network Communications.
Jones, S. (2011). Sonification: the element of surprise. AI & SOCIETY, 27(2), 297-298. https://dx.doi.org/10.1007/s00146-011-0352-4
Kim, K., & Wattanapongsakorn, N. (2015). Mobile and Wireless Technology 2015 (1st ed.).
Mohd, A., & Wong, T. (2008). Short Messaging System (SMS) Compression on Mobile Phone–SMS Zipper. Jurnal Teknologi, 49(1). https://dx.doi.org/10.11113/jt.v49.205
Perry, S., & Lee, K. (2007). Mobile phone text messaging overuse among developing world university students. Communicatio, 33(2), 63-79. https://dx.doi.org/10.1080/02500160701685417
Pinch, T., & Bijsterveld, K. (2013). The Oxford Handbook of Sound Studies (1st ed.). New York: Oxford University Press.
Sinha, M. (2012). Nonverbal Communication (1st ed.). Jaipur: Pointer Publishers.
Vihalemm, T., Kiisel, M., & Harro-Loit, H. (2011). Citizens' Response Patterns to Warning Messages. Journal of Contingencies and Crisis Management, 20(1), 13-25. https://dx.doi.org/10.1111/j.1468-5973.2011.00655.x
Walker, B., Lindsay, J., & Godfrey, J. (2003). The Audio Abacus. ACM SIGACCESS Accessibility and Computing, (77-78), 9. https://dx.doi.org/10.1145/1029014.1028634