Measuring university success
23 September 2024
Dr Jon Renyard, University Secretary, Arts University Bournemouth, finds a way through the jargon jungle to set out some academic assurance basics.
What makes a university ‘successful’? Commercial organisations can find it relatively easy to identify success metrics, but universities often have multiple and competing missions, a wide array of stakeholders and a particular history of academic autonomy. Even so, most boards would probably agree that the provision of a high-quality education to students, leading to credible degrees (both postgraduate and undergraduate), forms part of the bedrock of ‘what the university is for’.
The difficulty for governors is that there’s no single metric with which to measure success in this field. The Office for Students, as the regulator for higher education in England, sets out its expectations in relation to quality and academic standards in its B conditions of registration. The associated guidance is long and detailed, probably of most interest to a very specialist audience, and most of the expectations are described in ways which are not susceptible to simple judgements.
As an example, Condition B1 states that students should receive a high-quality academic experience, including that the course is up-to-date, provides educational challenge, is coherent, is effectively delivered, and requires students to develop relevant skills. This is then supported by more than 2,000 words of detailed guidance as to the kinds of judgements which might apply. The exception is Condition B3, which has explicit numerical thresholds that universities are expected to achieve in relation to students continuing on their course, successfully completing their course, and progressing to professional employment.
Simple solutions to complex problems?
As with so many areas of governance, there is a risk here of seeking simple solutions to complex problems. Few governors are likely to have direct experience of higher education management of the kind that would enable them to see through the inevitable jargon. But that doesn’t mean they should simply take it all on trust—university quality managers, me included, can obfuscate as well as anyone else! There are probably one or two board members who do have experience of higher education, but what about the board as a whole? How should it approach academic assurance? Here’s my take.
While there is no simple proxy measure for high-quality education, or for credible awards, there are plenty of published datasets which can be called on, as well as internal data the university can provide. These include performance against the B3 metrics described above, supplemented by local information about the performance of students from different demographic groups, and the results of national student satisfaction surveys. There is also other public information, such as the outcomes of the Teaching Excellence Framework (TEF) and reports from professional and statutory bodies.
Assurance questions
If I were a governor, the first things I would ask are: can you explain to me how you can be confident that you meet the B conditions? What is your quality management framework, and how does it take account of all these different pieces of evidence? In other words, can we trust the process? Universities might have different ways of doing this – my own university has produced some short videos on the various aspects of the framework that try to demystify it.
Secondly, I would expect to see an annual report or similar which shows the outcome of that process. This report should make explicit comment on the B3 indicators, but should also provide a broader discussion of the evidence that the education provided by the university is sound. I’d expect to see an annual action plan arising from this report, noting the key priorities for the next year. My own board has also agreed a set of annual RAG-rated indicators that give an at-a-glance overview of performance, with an expectation that any indicator which is not green will be discussed in the main annual report.
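To make the RAG idea concrete, here is a minimal, purely illustrative sketch of how a set of indicators might be rated against agreed targets. The indicator names and threshold values are invented for the example; they are not the OfS’s published B3 thresholds, nor any particular board’s agreed targets.

```python
# Illustrative sketch only: indicator names and thresholds are invented,
# not the published OfS B3 thresholds or any specific board's targets.

# Each indicator: (current value %, amber threshold %, green target %)
indicators = {
    "Continuation": (91.5, 80.0, 90.0),
    "Completion": (84.0, 75.0, 85.0),
    "Progression": (72.0, 60.0, 70.0),
    "Student satisfaction": (78.0, 70.0, 80.0),
}

def rag_status(value, amber, green):
    """Return a simple RAG rating: green at or above target, amber above
    the lower threshold, red otherwise."""
    if value >= green:
        return "GREEN"
    if value >= amber:
        return "AMBER"
    return "RED"

# Anything not rated green would be flagged for discussion in the annual report.
for name, (value, amber, green) in indicators.items():
    status = rag_status(value, amber, green)
    flag = "" if status == "GREEN" else "  <- discuss in annual report"
    print(f"{name:22s} {value:5.1f}%  {status}{flag}")
```

The value of an arrangement like this is less the traffic-light colours themselves than the prior agreement about what counts as green, which forces the board and the executive to be explicit about expectations before the results arrive.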
As a corollary, of course, I would expect to see a clear explanation of any evidence that does not fit the neat narrative presented – because no story about academic assurance is ever entirely neat! I would also expect, periodically, to see a review of the thoroughness of the process itself: is it still fit for purpose, and can it be trusted to produce reliable outcomes?
Most universities also produce an annual Degree Outcomes Statement, which gives their degree classification profile (what proportion of students from various demographic groups achieved the respective classifications) over a five-year period, explains in layman’s terms why they are confident that these outcomes are correct, and discusses both any trends in those results, and any action being taken to address areas of concern. The board should also receive and interrogate this statement, ideally alongside the annual report.
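For those who like to see the arithmetic, the classification profile at the heart of a Degree Outcomes Statement is essentially a table of proportions. The sketch below shows the kind of calculation involved, using made-up counts for a single year rather than any real university’s data; a real statement covers five years and breaks the figures down by demographic group.

```python
# Made-up award counts for illustration only; a real Degree Outcomes Statement
# covers five years and splits the figures by demographic group.
awards = {"First": 310, "Upper second": 520, "Lower second": 230, "Third/Pass": 40}

total = sum(awards.values())
for classification, count in awards.items():
    share = 100 * count / total
    print(f"{classification:13s} {count:4d}  {share:5.1f}%")

# The proportion of 'good honours' (Firsts and upper seconds) is the figure
# most often tracked over time when looking for evidence of grade inflation.
good_honours = 100 * (awards["First"] + awards["Upper second"]) / total
print(f"Good honours: {good_honours:.1f}%")
```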
I would ask whether the university understands the reasons for its performance in the TEF, and ask to see the action plan developed in response (or, possibly, in preparation for the next exercise).
And if I were really keen, I might look personally at some of the published data on my university—on student performance, or on student satisfaction—and check that any numbers that look ‘out of kilter’ are covered in those reports.
Additional mechanisms
Some universities have introduced additional mechanisms to support academic assurance—for example, there might be a dedicated committee to look at this area of work in greater detail, which will give a subset of board members more insight into both the processes, and the ways in which they interact. This can be useful, especially if board time on academic assurance is limited.
It’s not that governors need to understand all the arcane detail, or to become experts in higher education quality management, any more than someone on the board of an NHS trust needs to understand brain surgery. Boards need to be assured that the university has a sound process for academic assurance that is comprehensive and will produce reliable outcomes. Despite the unfamiliarity of this area of work, the jargon jungle and the alphabet soup of acronyms, it’s really not that hard to do!