Case study: Job Satisfaction in the Banking Industry: Or the logistical nightmare of conducting large-scale quantitative research
Job characteristics are believed to have an impact on stress and well-being at work (Karasek & Theorell, 1990). The demands of the job on the one hand, and the extent to which employees have control over their own activities (decision latitude) on the other, are two factors which together define how stressful a job is. Jobs which are high in demands but offer limited control are considered high-strain and carry an increased risk of job dissatisfaction, stress and burnout.
Based on this theoretical framework, the Union of Belgian Banks sent out a research call to several institutions, with a bidding process based on criteria such as quality of the proposal, timing and, above all, budget. The aim was to carry out quantitative research measuring the relationship between job characteristics and job satisfaction in all Belgian banks (see Cambré et al., forthcoming). In order to do this effectively, however, several methodological issues needed to be resolved during the research process. First of all, a research consortium was selected to conduct the research; more precisely, the two highest-ranked bidders were asked to undertake the research jointly. This was the outcome of a political decision by the banks (see also p. 142, ‘Affiliation and conflicts of interest’), since the employers preferred one partner and the unions (employee representatives) preferred the other. The two competing research institutes, a private company specialising in stress at work and the Katholieke Universiteit Leuven (Belgium), were required to co-operate and develop a level of trust in order to conduct the research. For example, the two institutes had different ideas about which scale should be used in the questionnaire. They could not simply combine the scales or include both, because the scales were supposed to measure the same concept; moreover, doing so would have made the questionnaire too complex. The institutes therefore had to pool their knowledge, look for compromises and work towards a shared vision, which is, to say the least, rather time-consuming. A second obstacle to be overcome was the sample (see Chapter 7). In total, 69,000 employees work for Belgian banks, and it was decided that questioning all of them would be too complicated and too expensive.
Therefore the research committee, consisting of representatives of the banks, the unions and the research consortium, opted for a cross-sectional design (p. 45) with a fixed sample of 15,000 employees (roughly 21%; see p. 187, ‘Absolute and relative sample size’). In this sample the small banks were over-represented, in order to be able to draw conclusions at the level of each individual bank. Within each bank, respondents were selected at random, with no particular quota for gender, age or employee level. In the postal survey (see p. 231, ‘Self-completion questionnaire or postal questionnaire’) several steps were taken to improve the response rate (see p. 234, ‘Steps to improve response rates to postal questionnaires’; see also the suggestions by Dillman, 1983). The survey was based on addresses provided by the banks (name, language, address), and each employee randomly selected into the sample received a personalised envelope through regular mail, sent by the employer. The completed questionnaire had to be returned (free of charge) through the internal post within each bank. This caused two problems: (1) a perceived lack of anonymity, because the employees received a personalised envelope (p. 136, ‘Invasion of privacy’); and (2) potential bias to the reliability (p. 41; p. 157) and the response rate (see Key concept 7.5 on p. 189), because the completed questionnaires were collected by the banks themselves. The researchers were able to overcome this by communicating clearly in a letter (1) that, although the data collection was not completely anonymous (home address on the envelope), the data analysis would be completely anonymous; and (2) that the completed questionnaires were collected by the bank but transferred immediately to the researchers without being opened or read.
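The over-representation of small banks described above is normally corrected at the analysis stage with design weights: each respondent is weighted by the ratio of their bank’s population share to its sample share. A minimal sketch, using invented bank sizes rather than the study’s actual figures:

```python
# Hypothetical illustration of design weights correcting for the
# over-representation of small banks in the sample.
# All counts below are invented for this sketch, not the study's data.
population = {"BigBank": 20_000, "MidBank": 5_000, "SmallBank": 200}
sample = {"BigBank": 3_000, "MidBank": 1_500, "SmallBank": 100}

N = sum(population.values())  # total employees in the population
n = sum(sample.values())      # total employees sampled

# Weight = (population share) / (sample share): over-sampled small banks
# get weights below 1, under-sampled large banks weights above 1.
weights = {b: (population[b] / N) / (sample[b] / n) for b in population}
```

A useful sanity check on such weights is that the weighted sample size equals the unweighted one, so the weighting changes the composition of the sample without inflating it.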
The collection of the questionnaires by the banks already presented various logistical problems: the researchers had to travel to each bank to collect the completed questionnaires. Moreover, because part of the Belgian population speaks Flemish (Dutch) and part speaks French, two versions of the questionnaire needed to be available, carefully translated and tested for the accuracy of the translation (see also Tips and skills, p. 488, ‘Translating interview data’). The questionnaires were sent to the respondents’ home addresses: a French version if the respondent lived in the French-speaking part of Belgium, a Flemish version if he or she lived in Flanders. This prompted a series of angry calls when Flemish speakers living in the French-speaking part, or vice versa, received a questionnaire that was not in their native language. Furthermore, Brussels is officially bilingual and, to complicate matters even more, contains many headquarters in which the main language spoken is ... English! In order to minimise attrition, it was important that these respondents received a questionnaire in their preferred language. Another logistical issue was the co-ordination and control of the distributed information. The Belgian banks, which financed the research, chose a decentralised way of working, organising a ‘sensibilization campaign’ (an awareness-raising campaign) within each bank, whereby the researchers had to visit all the banks to explain the theoretical framework and the outline of the research to representatives of both employers and employees. Additional initiatives to prompt a higher response rate were taken by individual banks, or, more precisely, by some of the banks. The researchers had to follow up carefully on these initiatives to ensure that they remained both neutral and valid for the research. Some of them proved difficult to deal with because of the selective use of information involved (e.g.
letters pressuring employees to participate, or union campaigns steering employees towards certain answers). Hence, the researchers had to be sensitive to the respective organisational cultures, while making sure they kept a neutral position towards all partners involved in the research. Once the data collection was completed (response rate of 47.6%), the data handling needed much attention. A comprehensive check and double-check was conducted for wrong entries, filters, missing cells and so on, to increase reliability, and a weight factor was calculated to compensate for the over-representation of the small banks. One issue was the major difference in response rates between the banks. Due to a strong campaign, some banks reached a response rate of over 60%, while others, which did nothing to increase the response rate, barely reached 20%. To what extent did this bias the reliability? At the request of the financiers, a reference group of Belgian employees, comparable in gender, age, educational level and employee level, needed to be defined, which would allow each bank to be compared not only with the other banks but also with an overall group of employees outside the banking sector. However, since the questionnaire was itself the result of negotiation, some of the items were new and had not been included in previous research on economy-wide employee well-being. Obviously, no previous data existed for these new items; as a consequence, they could not be compared with larger populations, and therefore only an internal comparison (every bank compared with all the other banks) was chosen. A final issue here was the choice of a minimal threshold for analysis and reporting. It was decided that every cell needed to contain at least 15 observations in order to be represented graphically in the analysis or the final report. As a result, for some smaller banks the final report was not exactly fine-grained, since some of these banks employ only around 25 people.
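The 15-observation threshold just mentioned can be applied mechanically before anything is reported: any cell (subgroup) with fewer than 15 respondents is suppressed. A minimal sketch with invented cell counts, not the study’s data:

```python
# Hypothetical sketch of the 15-observation reporting threshold:
# subgroup cells with fewer than 15 respondents are suppressed
# before graphical reporting. Counts are invented for illustration.
MIN_CELL = 15

cells = {
    ("BankA", "female"): 42, ("BankA", "male"): 57,
    ("BankB", "female"): 9,  ("BankB", "male"): 12,
}

reportable = {cell: n for cell, n in cells.items() if n >= MIN_CELL}
suppressed = sorted(cell for cell, n in cells.items() if n < MIN_CELL)
```

For a bank with only around 25 employees, almost any split by gender, age or department falls below such a threshold, which is why only overall results could be reported for the smallest banks.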
For these banks only overall results were presented, whereas for larger banks the results were broken down by gender, age, department, etc. A final issue occurred when presenting the results. As mentioned above, the language issue is so important in Belgium that one even has to consider the order of reporting and presenting (in terms of which language comes first). A discussion also arose concerning the graphs used in the report: using different axes can create very different impressions, even though, statistically, the results obviously remain the same. In both figures below, the number of people with stress is 5, whereas 10 have no stress. So appearances can be deceptive.
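The effect of the axes can be made concrete with a small calculation (a sketch, not the authors’ actual figures): with 5 stressed and 10 non-stressed employees, the ratio of the bar heights as drawn depends entirely on where the vertical axis starts.

```python
# Sketch of how a truncated y-axis distorts the same two counts
# (5 employees with stress, 10 without). Illustrative only.
stress, no_stress = 5, 10

def visual_ratio(baseline):
    """Ratio of the drawn bar heights when the y-axis starts at `baseline`."""
    return (stress - baseline) / (no_stress - baseline)

honest = visual_ratio(0)     # axis from 0: the stress bar is half as tall
truncated = visual_ratio(4)  # axis from 4: the stress bar looks far smaller
```

With an axis starting at 0 the bars stand in the true 1:2 ratio; starting the axis at 4 shrinks the ratio to 1:6, making the same data look dramatically different.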
QUESTIONS
• Research consortium: the organisations that financed the research (the banks) are both research subjects and involved in the research design, which could cause a conflict of interest. How could you deal with this ethical issue in this research situation?
• Sample size is n = 15,000. Is this large sample really necessary? Discuss its relative and absolute size. What other options could have been taken?
• This is research in a real social context. Hence, issues such as time, budget and politics are important. How would you deal with them? Discuss some of the decisions the researchers made.
• The small banks were over-represented in the sample. Afterwards, a weight factor was calculated in order to correct for this. However, as could be expected, for some small banks the response was still too small to allow for conclusions at the organisational level. So what was the point of this over-representation? Discuss its appropriateness in contemporary organisational research, where many firms are small. What alternatives could have been used to allow for conclusions at the organisational level?
• Due to a strong campaign, some banks reached a response rate of over 60%, while others hardly reached 20%. To what extent did this bias the reliability of the results? What can be said about the generalisation issue?