IS THE TEF GOOD PUBLIC POLICY? PROBABLY NOT.
Professor Richard James is Pro Vice-Chancellor (Academic) and Director of the Centre for the Study of Higher Education at the University of Melbourne.
23 February 2017
The Teaching Excellence Framework (TEF) in the United Kingdom seeks to elevate the status of teaching and learning in higher education and provide better information to prospective students.
Can the TEF achieve the goals set for it?
From 2018, the TEF’s assessment and rating of higher education institutions will also be used as the framework for a limited degree of fee differentiation in the UK.
But the reputational effects of TEF ratings could be even more significant than the financial ones, for it is a safe bet that the media will report TEF outcomes intensively.
The UK Government’s stated purposes for the TEF are to:
• better inform students’ choices about what and where to study;
• raise esteem for teaching;
• recognise and reward excellent teaching; and
• better meet the needs of employers, business, industry and the professions.
Yet the policy ambitions for the TEF, laudable in principle, outrun the validity and reliability of the performance data the new system will use.
It is highly unlikely that the TEF assessment process will generate metrics and indicators of teaching quality of sufficient accuracy to warrant institutional comparisons.
Nor will it be reliable enough to justify decisions that will have significant reputational and financial effects.
How the TEF works
Teaching excellence is defined broadly within the TEF to include teaching quality, the learning environment and student outcomes and learning gain.
The six core metrics, spanning the three aspects of quality, will be drawn from the National Student Survey, the Destinations of Leavers from Higher Education survey and Higher Education Statistics Agency data.
Metrics for learning gain do not currently exist. Work is underway to explore the development of suitably robust measures, but there is a long way to go.
Trained TEF assessors will be supplied with contextual data, to inform their understanding of each provider's operating context, and with a 15-page provider submission.
Assessors will be alerted to institutions that are significantly above or below benchmarks based on weighted sector averages.
These benchmarks will be based on the characteristics of the students of each provider – which means that each provider will have its own distinctive benchmarks.
On the basis of both quantitative data and the qualitative adjustments that TEF assessors make for context, institutions will be rated as gold, silver or bronze.
Those assessed as gold or silver will have access to inflation-based indexation of student fees.
Institutions rated bronze will be permitted only half of the indexation.
As a result, the TEF will not provide additional financial resources; rather, it will reduce the revenue, in real terms, of providers rated bronze.
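The fee consequences described above amount to a simple calculation. The sketch below illustrates it; the base fee and inflation rate are hypothetical placeholders, not actual TEF parameters.

```python
# Illustrative sketch of the TEF fee-indexation rules described above.
# The base fee and inflation rate are hypothetical placeholders,
# not actual TEF parameters.

def max_fee_next_year(current_fee: float, inflation: float, rating: str) -> float:
    """Gold and silver providers may index fees by full inflation;
    bronze providers are permitted only half of the indexation."""
    if rating in ("gold", "silver"):
        return current_fee * (1 + inflation)
    if rating == "bronze":
        return current_fee * (1 + inflation / 2)
    raise ValueError(f"unknown rating: {rating}")

fee, inflation = 9250.0, 0.02  # hypothetical fee of £9,250 and 2% inflation
print(max_fee_next_year(fee, inflation, "silver"))  # 9435.0
print(max_fee_next_year(fee, inflation, "bronze"))  # 9342.5
```

Because the bronze uplift is below inflation, a bronze provider's fee income falls in real terms even though its nominal cap still rises.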
Can the TEF work? Issues and concerns
The TEF is a significant policy development. The Australian higher education sector should monitor it closely.
There are significant concerns about the TEF across the UK higher education sector.
Some of these relate to specific methodological issues.
Others are more philosophical or ideological – an in-principle rejection of public policy interventions that, it is claimed, assume a shallow, market-driven or instrumental view of higher education and ignore, or fail to capture, its deeper worth to communities.
The methodological shortcomings of the TEF are not easily dismissed.
The central flaw in the TEF lies in the limitations of the underlying base data on which the framework is constructed.
The six core data elements—predominantly drawn from student or graduate surveys—reflect information currently available.
Yet few would argue that these capture a full or adequate picture of the quality of teaching and learning in a higher education institution.
Over time, of course, the underpinning data might be expanded and their validity and reliability improved.
The inclusion of learning gain data, for example, would be one step forward.
Yet such advances in measuring the quality of teaching and learning are a long way from being fit for significant policy purposes.
A further methodological challenge for the TEF is the process of adjustment of raw data to account for variations in institutional context.
Context certainly must be controlled for before fair institutional comparisons are made.
However, the essential problem is the large number of variables to be taken into account.
The adjustment approach to be taken by the TEF involves a mix of quantitative techniques, via the benchmarking component, and the qualitative judgements of assessors.
Inevitably the TEF adjustments for context will lead to significant alteration of the ‘raw’ metrics.
The effects of adjustments on institutional ratings will be significant and will likely be a source of stakeholder uncertainty and mistrust.
The policy dilemma can be expressed simply: the use of unadjusted raw data would be unscientific and unfair, yet no definitive or uncontestable algorithm for adjustment exists.
A final methodological issue with the TEF is the allocation of institutions to three broad bands, the gold, silver and bronze awards.
These broad bandings exaggerate the apparent differences between institutions and create injustices at the margins of the bands.
Such problems are unavoidable in a policy framework that relies on the conversion of continuous variables to an ordinal scale.
The policy pragmatism of the three award bands is obvious, yet this most prominent and publicly significant of TEF outcomes is difficult to defend methodologically and will be highly contentious.
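The margin problem described above can be shown with a minimal sketch. The numeric thresholds here are invented purely for illustration; the actual TEF combines metrics with assessor judgement rather than a single score and cut-off.

```python
# Toy illustration of converting a continuous quality score into
# ordinal award bands. The thresholds are invented for illustration;
# the real TEF does not use a single numeric cut-off.

def band(score: float) -> str:
    """Assign a hypothetical gold/silver/bronze award from a score in [0, 1]."""
    if score >= 0.70:
        return "gold"
    if score >= 0.40:
        return "silver"
    return "bronze"

# Two providers separated by a trivial 0.002 gap receive different
# awards, while a far larger 0.28 gap within a band goes unmarked.
print(band(0.701), band(0.699))  # gold silver
print(band(0.69), band(0.41))    # silver silver
```

This is the unavoidable cost of mapping a continuous variable onto an ordinal scale: the published award exaggerates small differences at the thresholds and erases large differences inside each band.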
Given the limitations inherent in the data, the TEF's laudable policy objectives and considered design elements should not be allowed to paper over its basic flaws.
The reputational and financial stakes are high for UK higher education institutions.
The stakes are high, too, for prospective higher education students who might be influenced by TEF data and TEF awards.
Is the TEF good public policy? Probably not.
The TEF assessment process is not trustworthy enough to achieve the TEF’s declared purposes.
The TEF dataset and the processes of adjustment of raw scores are an inadequate basis for making institutional comparisons.
Nor are they robust enough to be the basis for information on which to guide prospective student decision-making.
The TEF entrusts judgements that will have consequences for reputations, finances and student decision-making to a data structure that is not up to the task.