On January 31, 2024, the Provost's Office announced the “Queen’s Renew Project”, directed by Nous Group consultants hired by the university for an undisclosed amount to help the administration make cuts to balance the budget. The first part of this project is a survey measuring faculty and staff satisfaction with university services, intended to guide where and how those cuts are made. Though couched in the neoliberal language of ‘benchmarking’ to ‘improve services’, there can be no mistake that the purpose of the survey is to guide who will be laid off, effectively enrolling faculty and staff in this process to lend it legitimacy and giving it an air of objectivity by making it appear to be ‘evidence-based decision-making’.
However, as comments by a colleague with expertise in survey design suggest, rather than evidence-based policy, this survey appears to be a classic case of fabricating policy-based evidence. This colleague provided QCAA with a scathing critique of the Nous survey which is a must-read and makes the survey look truly embarrassing (see the full critique below). The most fatal flaw they point to is breathtaking: given the levels of understaffing at Queen’s, the survey will not measure faculty’s dissatisfaction with existing services, but rather their dissatisfaction with understaffing. So given that the purpose of the survey is to make cuts to services, there is a good chance it will end up punishing the most understaffed services – for being understaffed.
Many colleagues have asked us how to respond to the survey. There really is no good answer to this question, because the survey is a foregone conclusion: the administration has already decided that cuts will be made by laying off staff, and it will push this through regardless of how people answer the survey or how poor the response rate is (clearly, since we did not get a survey measuring faculty’s satisfaction with the administration, cuts to upper-admin positions or salaries have been precluded as a way to balance the budget). But since every question of the survey is connected to staff positions at risk of layoffs, our recommendation is to mark all services as “critical” and to answer “very satisfied” – in line with what our colleague says the survey is likely actually measuring: ‘I am very satisfied with the performance of this Service considering how short-staffed they are’.
1. Methodological Critique of the Service Effectiveness Survey
The survey, prepared by an affiliate of Cubane Consulting Group of Companies and distributed to employees of Queen’s University, contains a number of methodological flaws. These flaws have implications for the quality of the data produced and thus for the validity of the conclusions drawn from those data.
The question matrix meant to capture the concept of ‘satisfaction with performance’ is problematic for several reasons. A hiring freeze was implemented in May 2023, and many of the Services listed in the survey question matrix are being run by fewer people than there are positions in those Services. According to representatives of the United Steel Workers Union, Local 2010, unfilled positions since the hiring freeze have increased workloads for existing staff, and short-term sick and long-term disability leaves have risen. Workers must work more than their regular hours to keep up with the workload and feel they cannot request overtime or lieu time pay for fear of termination. As such, in many cases the questions about satisfaction with performance are not measuring respondent satisfaction with the performance of the Service in question, but rather how well the Service appears to be functioning with reduced staffing. The Service might offer more satisfactory services if staffed appropriately.

This issue of construct validity is compounded by the fact that the respondent is unlikely to know whether a Service is properly staffed. What may appear to be dissatisfaction with a Service is actually dissatisfaction with understaffing; had the respondent been aware that a Service is understaffed, their opinion might resemble the following: ‘I am very satisfied with the performance of this Service considering how short-staffed they are’. This issue is especially concerning if the ‘satisfaction with performance’ section will be used as a proxy measure for the larger concept of respondent perception of the expendability of a Service. In essence, the Service has been starved, and the reduction in quality of service that results from that starvation can then be used to justify further staffing reductions. Ultimately, while it is unknown how the data will be interpreted and used, users of these data should be aware that a poor ‘satisfaction’ rating is more likely an indicator that a Service requires additional staffing, not cuts.
The question matrix meant to capture the concept of ‘perception of importance’ is problematic because many employees of Queen’s University have only a limited understanding of the scope and scale of the services offered by, and the responsibilities of, each university Service – even when the respondent has regular contact with a given Service. In other words, there is little value in knowing a respondent’s perception of importance (i.e., “How important to you are these services?”) when the respondent’s experience with the Service encompasses only a fraction of that Service’s workload and responsibilities. Providing a service to, or ‘serving’, another employee of the university typically occurs alongside many other, less visible tasks performed by a given employee.
While the Services listed in both the ‘satisfaction with performance’ and ‘perception of importance’ portions of the survey may be listed under the proper names of the various Services on campus, it is likely unclear to many respondents precisely which service providers a question refers to. For instance, “Financial Services” could be interpreted to mean central Financial Services, or it could be interpreted to mean an employee internal to a unit who provides various ‘financial services’ within that unit. Again, many employees of Queen’s University have a limited understanding of the scope and scale of the services offered by each university Service and may mistake the person they interact with for a representative of the Service in question.
Finally, the use of a cross-sectional design, or at the very least the timing of the survey’s dissemination, makes the resulting data quite unreliable. Queen’s University has featured prominently in the national media for statements made by the provost suggesting the institution will close; announcements have been made indicating likely job losses; and a hiring freeze has been in place long enough to produce staff burnout and frustration. Many survey respondents are likely to feel angry, upset, and/or demoralized, leading to a response orientation that does not reflect their true experiences with or perceptions of university services.
2. Interpretation of results will be very challenging
A significant gap in the PDF of the survey provided is that there do not appear to be any questions asking for characteristics of the respondent that would not identify the respondent but would significantly shape their responses. As multiple people have noted, some services are not relevant to their own positions but are nonetheless, they know, critical to the university. It would be appropriate and much more informative if the data could be analyzed taking into account the position of the respondent within the university (e.g., what is critical to the success of a staff member vs. a teaching faculty member).
The questions that drill down to allow comment on three services you are satisfied with and three you are dissatisfied with largely lead respondents to attribute their satisfaction or dissatisfaction to people and positions within the university. There is no opportunity to attribute dissatisfaction to the lack of support for a particular service.
It would be very reasonable to request that the data be made publicly available so that the many skilled statisticians within the university can conduct independent analyses.
It is also very poor survey practice to force a response, as this survey does (you cannot skip a question).
The survey also took me much longer than 15 minutes to complete.
My recommendation for responding to this survey is to be very honest – if you do not use a service, respond “N/A”. I opted to respond “critical” for all services I use. When probed for satisfaction, I generally selected either “very satisfied” or “very dissatisfied”. When given the option to comment, I attributed my satisfaction to the excellent staff already in place and my dissatisfaction to the lack of adequate support for a particular service.
Bottom line
- The lack of nuanced data characterizing the respondent makes interpretation of the results very challenging.
- The probing questions about satisfaction/dissatisfaction are limited in scope and do not fully explore the diverse reasons for satisfaction or dissatisfaction beyond the people working to support a service.
- A request should be advanced to make the data publicly available.
3. The results can, in the end, be fabricated to show whatever they want to see
Depending on how they prepare the data for interpretation, the results can, in the end, be presented in whatever way they would like to see.
For example, the categories for the importance questions are less important (=1), somewhat important (=2), important (=3), very important (=4), and critical (=5). They will discard the N/A responses, and for the remaining responses they have several options to choose from:
- Aggregate the scores for each service to rank them from lowest (least important) to highest (most important)
- Obtain the average for each service to rank them from lowest (least important) to highest (most important)
- Collapse the categories into two groups: if they present data using this method, there is an even higher chance of manipulating the data to produce the story they would like to disseminate.
In cases 1 and 2, even if all services score highly on the 1-5 scale, ranking still forces some to the bottom: a service with an average score of 3.5 will be ranked below a service with an average score of 3.7, even though both fall in the important-to-very-important range.
Collapsing into two groups has its own issues, such as oversimplification and easy manipulation (e.g., through the choice of cut-point); the sketch below illustrates how much these choices can change the apparent story.
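As a rough illustration only (the ratings below are entirely hypothetical, since the raw data are not public), this short Python sketch shows how the same set of importance responses can be summed, averaged, or collapsed at different cut-points, and how each choice tells a different story about which service looks ‘less important’.

```python
# Hypothetical example only: the same importance ratings summarized three ways.
from statistics import mean

# Made-up ratings on the 1-5 scale above, with N/A responses already removed.
ratings = {
    "Service A": [3, 4, 3, 4, 3, 4, 4, 3],   # mean 3.5
    "Service B": [4, 4, 3, 4, 4, 3, 4, 4],   # mean 3.75
}

for name, r in ratings.items():
    total = sum(r)                               # option 1: sum of scores
    avg = mean(r)                                # option 2: average score
    share_ge4 = sum(x >= 4 for x in r) / len(r)  # option 3a: collapse at 'very important'
    share_ge3 = sum(x >= 3 for x in r) / len(r)  # option 3b: collapse at 'important'
    print(f"{name}: sum={total}, mean={avg:.2f}, "
          f">=4: {share_ge4:.0%}, >=3: {share_ge3:.0%}")

# Both services sit in the important-to-very-important range, yet ranking by sum
# or mean still places one below the other, and moving the collapse cut-point
# turns a 0.25 difference in means into either a 25-point gap or no gap at all.
```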
The same protocol will likely apply to the satisfaction questions. But because these include a “neither satisfied nor dissatisfied” option, that option can be collapsed into either the dissatisfied group or the satisfied group, again depending on the story they want to tell. The “neither” option can also be selected for many different reasons, since no N/A option is available for the satisfaction questions: individuals who have not received or used a specific service cannot answer as satisfied or dissatisfied and will likely choose “neither”, as will those who simply feel indifferent about the service. The satisfaction questionnaire will therefore likely yield useless information. The sketch below shows how far the headline figure can swing depending on where the “neither” responses are placed.
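Again purely as an illustration (the counts below are invented), this sketch shows how folding the “neither” responses into one side or the other changes the headline satisfaction figure for the very same service.

```python
# Hypothetical counts for one service; no N/A option, so 'neither' absorbs
# non-users and the indifferent alike.
responses = {
    "very dissatisfied": 5,
    "dissatisfied": 10,
    "neither": 40,
    "satisfied": 30,
    "very satisfied": 15,
}

total = sum(responses.values())
satisfied = responses["satisfied"] + responses["very satisfied"]
neither = responses["neither"]

# Story 1: fold 'neither' in with the satisfied side -> "85% are not dissatisfied".
print(f"not dissatisfied: {(satisfied + neither) / total:.0%}")
# Story 2: fold 'neither' in with the dissatisfied side -> "only 45% are satisfied".
print(f"satisfied: {satisfied / total:.0%}")
```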
Given that there is a good chance the data can be manipulated in these different ways, making the raw data publicly available will be very important so that they can be analyzed independently.
