Question 1: What are the four goals of Science?
Researchers observed that some people performed better with an audience, yet others performed better on their own: an example of the first goal, describing behaviour.
Zajonc (1965) formulated the theory of social facilitation to explain this behaviour: whilst people are being watched, physiological arousal increases their dominant responses. In an easy or familiar task the dominant response produces an accomplished performance, whereas in more complex or unfamiliar tasks the dominant response leads to poor performance.
Kotzer (as cited in Heath, 2018) devised a hypothesis predicting that, when throwing basketballs into a hoop in the presence of an audience, experienced players would perform better than when alone, whereas inexperienced players would perform better alone than with an audience: an example of prediction.
Yu and Wu (2015, as cited in Heath, 2018) researched a baggage x-ray screening task. From their findings, they recommended screening smaller items whilst being watched but screening larger items alone for better results; helpful suggestions for controlling future conditions, the fourth goal (Heath, 2018).
Question 2: Identify 3 ways in which you could modify previously published research to create a new research idea. Provide a psychologically relevant example in each case.
1. Using Zajonc’s theory of social facilitation, new research could be created by modifying the participants, selecting from pre-existing groups such as men/women or under-20s/over-50s, to explore whether particular groups show the same levels of physiological arousal.
2. Changing an IV: from measuring the effect of ‘room temperature’ on classroom performance to measuring the effect of ‘background noise’. Alternatively, changing the DV from classroom performance to ‘enjoyment of the classroom experience’. Using a factorial design, new research could explore the impact of both IVs individually and in combination.
3. Changing the setting from the laboratory to a field study. There is less control over extraneous variables in a field setting, but new research could be developed by observing behaviour in a natural environment rather than under artificial conditions.
Question 3: Name the 4 levels (scales) of measurement and give an example of each. Make sure that your example is described in sufficient detail to demonstrate that it belongs to the level of measurement that you have used it to illustrate (for example, age could be measured in a variety of different ways, so giving age as your example would be insufficient).
A nominal scale places information into categories; it is not numerical, so the data cannot be ranked (e.g. from best to worst). It is ideal for measuring nominal variables such as eye colour. Researchers can assign a number to each category (1 = blue, 2 = green, etc.), but the numbers are merely labels with no numerical value; all categories are equal. Researchers can then report percentages, such as 10% blue and 5% green.
An ordinal scale places values in order or rank, e.g. brands of footwear from most to least popular, which shows which brand is ranked higher than the others. However, the difference between ranks is not measured: how much more popular one brand is than the next cannot be determined.
An interval scale has equal intervals between adjacent points on the scale, for example temperature measured in degrees Celsius. However, the scale has no true zero: 0°C does not mean there is an absence of temperature.
A ratio scale also has equal intervals between points, but it does have a true zero: for instance, ‘zero distance travelled’ in space can be interpreted as no movement having taken place (Heath, 2018).
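As an illustration only (not from Heath, 2018), a minimal Python sketch of the four levels using invented example values, showing which kinds of summary make sense at each level:

```python
from collections import Counter

# Hypothetical example values at each level of measurement
nominal = ["blue", "green", "blue", "brown"]   # categories only; labels with no order
ordinal = [1, 2, 3]                            # ranks of footwear brands; gaps between ranks unknown
interval = [18.5, 21.0, 23.5]                  # temperature in degrees Celsius; equal intervals, no true zero
ratio = [0.0, 4.2, 8.4]                        # distance travelled in metres; true zero means no movement

print(Counter(nominal))            # nominal: only frequencies/percentages are meaningful
print(sorted(ordinal))             # ordinal: ordering is meaningful, differences are not
print(interval[1] - interval[0])   # interval: differences are meaningful
print(ratio[2] / ratio[1])         # ratio: ratios are meaningful (twice as far)
```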
Question 4: Provide a clear definition of each of these terms. Pick one psychologically relevant example of an IV and its levels and one of a DV. In each case describe two different ways in which your variable could be operationalised.
Researchers manipulate one or more variables, called independent variables (the ‘cause’), and observe the effects of that manipulation on a response measure called the dependent variable (the ‘effect’).
Can music affect people’s mood? Campbell and White (as cited in Heath, 2018) exposed students to music whilst exercising to see if it affected their mood. The researchers were measuring ‘mood’; this was the DV, and it depended on the directly manipulated IV (music).
The DV was operationalised by measuring ‘mood’, for example through a self-report rating. Another way the outcome could be operationalised would be to see whether participants ran faster when music was present, changing the DV from ‘mood’ to ‘speed’. The levels of the IV could also be operationalised in different ways, such as music present versus no music, or very loud versus soft music.
Question 5: Briefly define a) latent and b) concrete (manifest) variables. Give three different psychologically relevant examples of each (please be clear and precise when describing your chosen variables).
A concrete (manifest) variable can be directly observed and measured on a scale: a person’s weight on a weighing scale, the loudness of music in decibels, or how long a person can hold their breath, measured in seconds.
Operationalising a variable specifies how it will be measured. However, not everything can be observed or easily measured on a scale; such constructs are abstract or latent variables, for instance how happy a person is with their life, or how much a child loves their parents. Researchers devise indirect ways to measure these constructs: intelligence is a latent variable often measured using a standardised IQ test.
Question 6: Outline what is meant by reliability and validity in terms of measurement of constructs in psychology. Describe two different forms of validity and two different forms of reliability to illustrate your answer.
A reliable measure will give consistent results over and over again.
Open-ended questions need their responses coded against the same criteria, often by multiple researchers. Inter-rater reliability assesses the consistency of this coding using a statistic called Cohen’s kappa, which ranges from 0 (agreement no better than chance) to 1 (perfect agreement) (Heath, 2018).
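As an illustration only, a minimal Python sketch of how Cohen’s kappa could be calculated for two researchers’ codings of the same open-ended responses; the codings below are hypothetical, and in practice a library function such as sklearn.metrics.cohen_kappa_score would typically be used instead:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of category labels."""
    n = len(rater_a)
    # p_o: proportion of responses on which the two raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # p_e: agreement expected by chance, from each rater's category proportions
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Hypothetical codings of ten open-ended responses by two researchers
rater_1 = ["positive", "neutral", "negative", "positive", "positive",
           "neutral", "negative", "positive", "neutral", "positive"]
rater_2 = ["positive", "neutral", "negative", "neutral", "positive",
           "neutral", "negative", "positive", "positive", "positive"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # closer to 1 = stronger agreement
```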
Test-retest reliability relies on the same participants taking the same test on two or more occasions. Constructs that are expected to remain stable over time can be measured using this method.
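As an illustration only, a minimal Python sketch of test-retest reliability, correlating hypothetical scores from the same participants on two testing occasions:

```python
from scipy.stats import pearsonr

# Hypothetical scores for the same seven participants at two time points
test_1 = [98, 105, 112, 95, 120, 101, 108]
test_2 = [100, 103, 115, 96, 118, 99, 110]

r, p = pearsonr(test_1, test_2)
print(round(r, 2))  # a high positive correlation suggests the measure is stable over time
```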
To be valid, a measure must measure what we expect it to measure. For example, a difficult test designed to measure intelligence may actually measure frustration, and so would not be valid. Convergent validity is shown when two different tests measuring the same construct correlate highly, as both tests produce supporting results. Divergent validity is shown when tests that are predicted to produce opposing or unrelated results do indeed diverge.