Introduction to test construction in the social and behavioral sciences.

Psychological testing: a practical approach to design and evaluation.

Testing an expanded theory of planned behavior model to explain marijuana use among emerging adults in a promarijuana community.

Measurement Scales: Nominal, Ordinal, Interval, and Ratio

In this paper we develop a measurement method based on scaling. In research, scaling is a measurement procedure that maps a qualitative construct onto quantitative metric units. Scaling has found extensive use in psychology and education for measuring otherwise 'unmeasurable' constructs such as authoritarianism and self-esteem.

Scaling makes use of four scales of measurement: nominal, ordinal, interval, and ratio. These are essentially ways of categorizing variables of different types. Quantitative metric units, also referred to as numerical data, are used where observations are countable or measurable. In this context, 'age' and 'income' are continuous numeric variables, 'age group' is an ordinal qualitative variable, and 'sex' is a nominal qualitative variable.

In an ordinal measure, the interval between values cannot be interpreted. In an interval measure, however, the interval between two attributes can be interpreted. Take temperature as an example: the 10-unit interval between 30 and 40 is the same as the 10-unit interval between 70 and 80, so the distance between two values is meaningful.

An ordinal scale places data only in order of magnitude; there is no standard for measuring the interval between two values. A squash ladder, for example, is measured on an ordinal scale: from the results we can say that one player is better than another, but we cannot quantify the margin.
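As a concrete illustration of the four levels, the sketch below shows how such variables might be encoded in Python with pandas; the data and column names are hypothetical and not drawn from the study.

```python
import pandas as pd

# Hypothetical survey records; the column names and values are illustrative only.
df = pd.DataFrame({
    "sex": ["F", "M", "F", "M"],                         # nominal: unordered categories
    "age_group": ["18-24", "25-34", "18-24", "35-44"],   # ordinal: ordered categories
    "age": [19, 28, 22, 41],                             # ratio: numeric with a true zero
    "temperature_c": [30.0, 40.0, 70.0, 80.0],           # interval: equal spacing, arbitrary zero
})

# Nominal: categories with no inherent order.
df["sex"] = pd.Categorical(df["sex"])

# Ordinal: order is meaningful, but distances between categories are not.
df["age_group"] = pd.Categorical(
    df["age_group"], categories=["18-24", "25-34", "35-44"], ordered=True
)

# Ordered categories support comparisons of rank...
print((df["age_group"] > "18-24").tolist())
# ...while interval/ratio data support meaningful differences
# (the 10-unit gap from 30 to 40 equals the gap from 70 to 80).
print(df["temperature_c"].diff().tolist())
```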

The purpose of this paper is to describe a process for developing a construct using multi-item, multi-subscale, interval-level scales. The seven steps essential for producing reliable and user-friendly scales have been followed.

The seven sections below cover the seven steps of scale development. The process begins with the creation of items for assessing the construct under consideration. Item generation can proceed inductively, where the items are generated first and the scales are then derived from them, or deductively, where a theoretical definition is created first and the items are then generated from it.

Deductive scale development uses the theoretical definition of a construct as the guide for creating items. This approach rests on an understanding of the relevant theory and of the phenomenon to be investigated, and it helps the researcher ensure content adequacy in the final scales. The case study also provides a good example of the deductive approach to item development, since it creates a measure of lodging quality.

The Seven Steps for Developing Reliable and User-Friendly Scales

Pre-testing the construct items for content adequacy is often overlooked, although it is an essential step in scale development. Pre-testing assures the researcher of the adequacy of the content before the questionnaire is finalized. It also supports construct validity, since it allows the researcher to delete items that are conceptually inconsistent with the final construct. It has happened on many occasions that researchers, after investing substantial time and effort in collecting large data sets, discover that one of the important measures is flawed and must be corrected.

This paper also draws on a recently developed method for conducting a content assessment. The methodology uses both sorting and factor-analytic techniques to assess, on a quantitative scale, the content adequacy of a set of newly developed items. Respondents are asked to rate the extent to which the items in the questionnaire correspond with the construct definitions provided. The responses are then factor-analyzed, and only the items that survive this screening are retained for subsequent administration to an additional sample.
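As a rough illustration of the logic behind this assessment, the sketch below applies a simplified retention rule to hypothetical rating data: an item is kept only if respondents, on average, rate it as matching its intended definition more strongly than any other definition. The actual methodology factor-analyzes the ratings; the data, array shapes, and rule here are assumptions for illustration only.

```python
import numpy as np

# Hypothetical content-adequacy ratings: each respondent rates (1-5) how well
# every item matches each of the construct definitions provided.
# Shape: (n_respondents, n_items, n_definitions); all values are simulated.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(30, 8, 4)).astype(float)

# Which definition each item was written to measure (illustrative assignment).
intended = np.array([0, 0, 1, 1, 2, 2, 3, 3])

# Bias the simulated ratings toward the intended definition so the demo is readable.
for item, definition in enumerate(intended):
    ratings[:, item, definition] += 1.5

# Average over respondents -> one (items x definitions) matrix of mean ratings.
mean_ratings = ratings.mean(axis=0)

# Simplified retention rule: keep an item only if its highest mean rating falls
# on the definition it was intended to measure.
retained = mean_ratings.argmax(axis=1) == intended
print("Retain item?", retained)
```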

The items analyzed are meant to represent four constructs. But if we do not know how many separate dimensions are in the data, there are three alternative procedures to address this question:

1. Principal components analysis
2. Factor analysis
3. Cluster analysis

All three procedures attempt to approximate the nvar × nvar correlation matrix R with a matrix of lesser rank, one that is nvar × nf (where nf is the number of retained factors).
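A minimal sketch of this idea, assuming simulated item scores: the correlation matrix R is approximated by the product of an nvar × nf loading matrix with its transpose, in the style of principal components analysis.

```python
import numpy as np

# Simulated scores for nvar items (data are illustrative only).
rng = np.random.default_rng(1)
nvar, nf = 8, 2
scores = rng.normal(size=(200, nvar))
R = np.corrcoef(scores, rowvar=False)               # nvar x nvar correlation matrix

# Eigen-decomposition of R; keep the nf largest components.
eigvals, eigvecs = np.linalg.eigh(R)
top = np.argsort(eigvals)[::-1][:nf]
loadings = eigvecs[:, top] * np.sqrt(eigvals[top])  # nvar x nf loading matrix

# Rank-nf approximation of R, in the style of principal components analysis.
R_hat = loadings @ loadings.T
print("approximation error:", round(float(np.linalg.norm(R - R_hat)), 3))

# Scree-style check: the size of each eigenvalue suggests how many dimensions to keep.
print("eigenvalues:", np.round(np.sort(eigvals)[::-1], 2))
```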

The items retained from Step 2 are then administered to an appropriate sample, and the objective of this exercise is to examine how well the items meet expectations regarding the psychometric properties required of the new construct. The new items should also be administered alongside other established measures of the construct, for a later assessment of any overlap between the existing and proposed scales.

In addition, data from the existing measures can later be used for a preliminary examination of the criterion-related validity of the new scales. After data gathering is complete, the researcher must evaluate the performance of the items to confirm that they adequately constitute the construct scale. In the view of this paper, item evaluation through factor analysis remains one of the most essential steps for determining the viability of the scale.

Deductive Scale Development: Creating Items for Assessing Construct

Scale development can draw on two basic types of factor analysis. The first, exploratory factor analysis, is used to reduce the set of observed variables to a smaller, more parsimonious set. The second, confirmatory factor analysis, is used to assess the quality of the factor structure by statistically testing the significance of the overall model and the relationships among the items and scales in the construct. Before conducting either analysis, the researcher should examine the inter-item correlations: any variable that correlates at less than 0.4 with all other variables may be deleted from the analysis. Low correlations of this kind indicate items that have not been drawn from the appropriate domain and that may produce error and unreliability.
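A minimal sketch of this screening rule, using hypothetical item responses: any item whose largest correlation with the remaining items falls below 0.4 is flagged as a candidate for deletion.

```python
import numpy as np

# Hypothetical item responses: rows are respondents, columns are items.
rng = np.random.default_rng(2)
items = rng.normal(size=(150, 6))
items[:, :4] += rng.normal(size=(150, 1)) * 1.5   # items 0-3 share a common factor

R = np.corrcoef(items, rowvar=False)
np.fill_diagonal(R, 0.0)                          # ignore each item's self-correlation

# Screening rule from the text: flag any item that correlates below 0.4 with
# every other item as a candidate for deletion before the factor analysis.
max_corr = np.abs(R).max(axis=0)
flagged = np.where(max_corr < 0.4)[0]
print("candidate items for deletion:", flagged)
```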

Although the reliability of the items can be calculated in a number of ways, the most common measure used in field studies for assessing a scale's internal consistency is Cronbach's alpha, which tells the researcher how well the items measure the same construct. The researcher must first establish the uni-dimensionality of each scale. Once the exploratory or confirmatory factor analyses have been conducted and the poorly performing items deleted, the internal consistency reliabilities can be calculated for each scale. A large coefficient alpha (above 0.70 for an exploratory measure) indicates strong item covariance and suggests that the researcher has adequately captured the sampling domain.
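Cronbach's alpha can be computed directly from the item variances and the variance of the summed scale. The sketch below uses simulated data; the 0.70 benchmark is the exploratory threshold mentioned above.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical scale: five items driven by one common factor plus noise.
rng = np.random.default_rng(3)
factor = rng.normal(size=(200, 1))
scale = factor + rng.normal(scale=0.8, size=(200, 5))

# A value above 0.70 would suggest adequate internal consistency for an
# exploratory measure, per the benchmark mentioned above.
print(f"alpha = {cronbach_alpha(scale):.2f}")
```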

After completing Step 5, the new scales will exhibit content validity (see Step 2) and internal consistency reliability (see Step 5), both of which provide supporting evidence of the construct's validity. The researcher can find further evidence of construct validity by examining the extent to which the scales correlate with other measures designed to assess similar constructs (convergent validity) and do not correlate with dissimilar measures (discriminant validity). It is also useful to examine relationships with variables theorized to be outcomes of the focal measure (criterion-related validity).
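The sketch below illustrates these three checks on hypothetical data: the new scale is correlated with an established measure of a similar construct (convergent), with a measure of an unrelated construct (discriminant), and with a theorized outcome (criterion-related). All variables and effect sizes are invented for illustration.

```python
import numpy as np

# Hypothetical scores (all simulated): the new scale, an established measure of a
# similar construct, a measure of an unrelated construct, and a theorized outcome.
rng = np.random.default_rng(4)
n = 200
trait = rng.normal(size=n)
new_scale = trait + rng.normal(scale=0.5, size=n)
similar_measure = trait + rng.normal(scale=0.6, size=n)
unrelated_measure = rng.normal(size=n)
outcome = 0.5 * trait + rng.normal(size=n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"convergent   r = {corr(new_scale, similar_measure):.2f}")    # expected to be high
print(f"discriminant r = {corr(new_scale, unrelated_measure):.2f}")  # expected to be near zero
print(f"criterion    r = {corr(new_scale, outcome):.2f}")            # expected to be moderate
```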

The final step in the scale development process is to collect another set of data from another appropriate sample and repeat the scale-testing process with the new data. If the initial sample was large enough, the researcher can instead split the original sample randomly into two sets and conduct parallel analyses. The researcher can also avoid the common-source problem by collecting data from other sources, such as performance appraisals, wherever possible. Replication should include confirmatory factor analyses, assessment of internal consistency reliability, and construct validation for all collected data. These analyses give the researcher confidence to finalize measures that possess the reliability and validity needed for use in future research.
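A minimal sketch of the split-sample option, assuming the initial sample is large enough: the data are randomly divided into a calibration half and a replication half, and the scale-testing steps above are then repeated on each half.

```python
import numpy as np

# Hypothetical full data set of item scores; a random half is held out so that
# the scale-testing steps can be repeated on an independent sub-sample.
rng = np.random.default_rng(5)
data = rng.normal(size=(400, 6))

idx = rng.permutation(len(data))
half = len(data) // 2
calibration, replication = data[idx[:half]], data[idx[half:]]

# Each half would then go through the same factor analyses, reliability checks,
# and validity assessments described above (only the split itself is shown here).
print(calibration.shape, replication.shape)
```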

Pre-testing the Construct Items for Their Content Adequacy

Perceived behavioral control refers to respondents' subjective perceptions of how easy it is to perform a behavior. In the context of substance use, attitudes and norms are usually measured with respect to respondents engaging in the behavior, and perceived behavioral control is operationalized as efficacy to resist use of the substance. Although experimental manipulations of TPB constructs have not been carried out in the context of marijuana, various correlational studies have examined the degree to which the TPB can account for variability in marijuana use. These studies confirm that more positive attitudes, more positive perceived norms, and lower efficacy to refuse marijuana use predict greater intentions to use marijuana.

Most of these studies, however, have been limited by their reliance on samples with low absolute levels of marijuana use, or have lacked information about the overall levels of use within the sample. They also could not reveal how the model operates within a broader cultural context of favorability toward marijuana, commonly referred to as a pro-marijuana community. These features of past studies have made it difficult to determine the extent to which the TPB can account for marijuana-related intentions and behavior across a larger cultural context of relative favorability toward its use. Both aspects are important given the increases in marijuana use that are likely to accompany its legalization in the US.

Given the superior efficacy of theory-based interventions over those that are not theory-based, Ito et al. (2015) examined marijuana use within the framework of the theory of planned behavior, a dominant model of health behavior, to determine whether components of the TPB are associated with both the intention to engage in a behavior and behavioral engagement itself. The general area of interest was young college students and their responses to marijuana legalization in the United States. Specifically, the study tested whether the psychosocial predictor constructs of initial attitudes, norms, and efficacy to resist use were related to initial intentions, and examined the effects of those initial intentions on subsequent marijuana use.

Moreover, Ito et al. (2015) reported that these relationships are moderated by construct variables such as normative influences, both descriptive and injunctive, along with two kinds of intentions, namely use and proximity intentions. A limitation of the study is its reliance on self-reported measures. The rationale of this report is therefore that a quantitative survey method with young college students should be used to address this limitation, since the constructs of self-reported attitudes, intentions, and refusal self-efficacy account for considerable variation in marijuana use (Ito et al., 2015, p. 57).

Factor Analysis: Evaluating the Performance of Construct Items

Participants were 370 University of Colorado students taking part in a three-year longitudinal study. As evidence of marijuana favorability in this context, students at the university from which we recruited reported using marijuana within the past 30 days at more than double the rate (38.2%) of other US college students (15.8%) (American College Health Association, 2011). To ensure sampling across a wide range of use, participants were recruited as never users (those who had never tried marijuana), infrequent marijuana users (those who used marijuana four times or less per month for less than three years), and frequent marijuana users (those who used marijuana an average of 5 days a week or more for at least the past year). Respondents' eligibility was determined through phone interviews prior to enrollment.

Although we planned to sample the full range of recreational marijuana users, we did not intentionally focus on recruiting dependent users. Responses were examined on the Marijuana Dependence Scale, a scale based on the Diagnostic and Statistical Manual of Mental Disorders (4th ed.) criteria for dependence (e.g., "When I smoked marijuana, I often smoked more or for longer periods of time than I intended"; "I need to smoke more marijuana to achieve the same 'high'"), which was completed during the first session. It showed that even though marijuana use was high among the frequent users in this sample, frequent users on average endorsed only 4.30 of 10 symptoms of dependence. We describe in detail only those measures relevant to our current hypotheses.

Potential participants were recruited through email and phone invitations to their university accounts and through advertisements on campus. Those interested in the study were first interviewed by phone by study personnel to determine their eligibility. Participants who met the selection criteria were invited to take part in two sessions a year for a total of three years. Data for the present analyses came from the first four sessions. Because our interest was in how psychosocial factors predict actual use, our primary analyses used attitudes, norms, RSE, and intentions measured in Year 1 to predict use in Year 2.
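As a rough illustration of this prediction strategy (not the analysis reported by Ito et al., 2015), the sketch below regresses a simulated Year 2 use variable on simulated Year 1 attitudes, norms, refusal self-efficacy, and intentions using ordinary least squares; all variable names and coefficients are hypothetical.

```python
import numpy as np

# Simulated Year 1 predictors and Year 2 marijuana-use outcome; none of these
# values come from Ito et al. (2015) -- they exist only to show the model form.
rng = np.random.default_rng(6)
n = 370
attitudes  = rng.normal(size=n)
norms      = rng.normal(size=n)
rse        = rng.normal(size=n)                      # refusal self-efficacy
intentions = 0.4 * attitudes + 0.3 * norms - 0.3 * rse + rng.normal(size=n)
use_y2     = 0.6 * intentions + rng.normal(size=n)

# Ordinary least squares: Year 2 use regressed on the Year 1 TPB constructs.
X = np.column_stack([np.ones(n), attitudes, norms, rse, intentions])
beta, *_ = np.linalg.lstsq(X, use_y2, rcond=None)
print("intercept, attitudes, norms, RSE, intentions:", np.round(beta, 2))
```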

We focused specifically on data obtained during the first two years of assessment because this period captured the first two years of college for the majority of participating students, the time when the risk of marijuana use is presumed to increase and students are potentially exposed to new social influences. It is also the period when young adults begin taking more personal responsibility for their life choices. The first two laboratory sessions, during which the baseline data were obtained, occurred on average within 5.77 days of each other (SD = 5.12). The first Year 2 session occurred approximately 12 months after the first Year 1 session (M = 362.77 days, SD = 19.30 days). The second Year 2 session occurred on average 5.79 days later (SD = 6.53). Participants were instructed to abstain from alcohol for at least 24 hours, recreational drugs (including marijuana) for at least 6 hours, and caffeine and cigarettes for at least 1 hour prior to each laboratory session.
