The A-level exams were cancelled in March in response to the spread of COVID-19. Subject grades in the UK were originally awarded according to a teacher-assessed ranking of students at each centre and the expected grade distribution of that centre (e.g. school). The expected centre grade distribution was based on the past performance of the centre, adjusted for the GCSE results of the current intake.
The process met the criteria set by parliament, but it bemused and angered many students who had not attained their predicted grades. They found that they had been subject to a mysterious ‘algorithm’ which was informed by the performance of other students who happened to have previously attended their school, and which behaved differently if more than five students had taken the subject at their centre, and differently again if more than 15 had.
The tension between achieving the required top-down distribution from bottom-up inputs led to such hostility that the whole system was abandoned in favour of teacher predictions alone.
The Government’s response for England and Wales was that the objective of the exercise had been met: grades in 2020 match the distribution in previous years, universities can allocate their places as before, the system can move forward, and students can get on with their lives.
As a parent, I thought that the system was cruel. As a commercial real estate researcher, I found unlikely parallels with modelling portfolio performance.
Like the distribution of A-level grades, performance across individual portfolios, despite wide variation at the property level, follows a predictable and fairly narrow distribution. In property investment terms, the power of diversification removes the idiosyncratic property-level variations and leaves (mostly) market return (it is hard to diversify the specific property risk entirely when buying such large assets).
Top-down property modelling (forecasting market return) therefore tends to ignore specific property influences and, quite rightly, market participants often comment that individual property returns can differ significantly from the forecast market average.
Ofqual were charged with creating a distribution of A-level grades similar to previous years’ by distributing the grades to students via an algorithm. This is similar to asking top-down property forecasters to predict individual property performance around the market average. In my experience, this exercise can end in acrimony, with objections mirroring those of students receiving their 2020 A-level grades, unless the drivers of the model and the data used to calibrate it are fully agreed.
The process of generating property performance can be broken down into individual components: rent is received, costs are incurred, units are let, rents are reviewed, tenants make choices at breaks and expiry, managers refurbish buildings and various amendments to planning and sites are achieved.
The historical drivers of these outcomes can then be tabulated: some drivers have their own mean and distribution, such as vacancy periods and time over-runs on development; some are discrete probabilities, such as the probability of tenant renewal. It may be possible to forecast the average outcome in future (e.g. office rental growth), and it may sometimes be possible to improve the property-specific prediction (e.g. using a tenant’s credit rating to estimate probability of default).
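As a minimal sketch of what such a driver table might look like in code, the snippet below mixes the two kinds of driver described above: a continuous one (vacancy periods, drawn from a lognormal distribution) and a discrete one (tenant renewal, a simple probability). All distribution choices and figures here are hypothetical, purely for illustration.

```python
import random

# Hypothetical driver table (all figures illustrative, not calibrated).
# Continuous drivers carry distribution parameters; discrete drivers a probability.
DRIVERS = {
    "void_months": {"kind": "lognormal", "mu": 1.8, "sigma": 0.5},  # vacancy period, months
    "tenant_renews": {"kind": "bernoulli", "p": 0.65},              # renewal at lease expiry
}

def sample(name, rng):
    """Draw one outcome for a named driver from its tabulated distribution."""
    d = DRIVERS[name]
    if d["kind"] == "lognormal":
        return rng.lognormvariate(d["mu"], d["sigma"])
    if d["kind"] == "bernoulli":
        return rng.random() < d["p"]
    raise ValueError(f"unknown driver: {name}")

rng = random.Random(42)
void = sample("void_months", rng)    # a sampled vacancy period in months
renews = sample("tenant_renews", rng)  # True/False renewal outcome
print(round(void, 1), renews)
```

Each property event is then just a draw from the relevant distribution, which makes the model easy to extend as new drivers are identified.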
A model can then create the expected fund performance by applying these individual distributions and probabilities to each property in the portfolio. The likelihood of the portfolio achieving different levels of return can then also be estimated, which can be useful when, for example, calculating the probability of meeting debt covenants.
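The aggregation step described above can be sketched as a simple Monte Carlo simulation. In this illustrative version, each property's annual return is the market mean plus idiosyncratic noise, with an occasional tenant-default hit; the portfolio is equally weighted. Every parameter (market mean, idiosyncratic spread, default probability and hit, the covenant floor) is an assumption chosen for demonstration, not a calibrated input.

```python
import random
import statistics

def simulate_portfolio(n_props=20, n_sims=10_000,
                       market_mean=0.05, idio_sd=0.08,
                       p_default=0.03, default_hit=-0.20,
                       seed=1):
    """Monte Carlo sketch of an equally-weighted portfolio of n_props properties.

    Each property's annual return is the market mean plus idiosyncratic noise;
    with probability p_default the tenant defaults and the return takes an
    additional hit. All parameters are illustrative, not calibrated.
    """
    rng = random.Random(seed)
    results = []
    for _ in range(n_sims):
        rets = []
        for _ in range(n_props):
            r = market_mean + rng.gauss(0.0, idio_sd)
            if rng.random() < p_default:
                r += default_hit
            rets.append(r)
        results.append(sum(rets) / n_props)  # equally weighted portfolio return
    return results

sims = simulate_portfolio()
mean_ret = statistics.mean(sims)
# Probability the portfolio return falls below a hypothetical covenant floor of -2%.
p_breach = sum(r < -0.02 for r in sims) / len(sims)
print(f"mean return {mean_ret:.3f}, P(return < -2%) {p_breach:.4f}")
```

The full distribution of simulated returns, not just the mean, is what allows covenant-style questions ("how likely is the portfolio to return less than x%?") to be answered directly.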
If the distributions and probabilities are not calibrated correctly, or the model is mis-specified, then the portfolio return estimate will be biased. Users are therefore right to treat output from such ‘black boxes’ with scepticism: to overcome these misgivings, users need to understand and agree that the performance drivers have been correctly identified and that the best possible data has been used to drive the results. Much discussion is therefore required before the model/algorithm is used. Indeed, much of the usefulness of a model lies in the discussion of its specification: where is performance generated, and how do we, as a fund management house, generate strong performance?
After all, real uncertainty around the answer will remain even if the model is perfectly specified. Few models, for example, will have included the probability of a pandemic.
The uncertainty around portfolio return will be wider if a portfolio has concentration risks, for example a cluster of lease expiry dates in one year or a small number of properties. A good model recognises these influences and quantifies their impact: how does risk increase as lease length shortens, and what if the property is let on shorter, flexible leases or on inflation-linked leases?
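The small-portfolio concentration effect can be illustrated directly: with only idiosyncratic noise around a market mean, the spread of simulated portfolio returns should shrink roughly with the square root of the number of properties. The parameters below are again purely illustrative assumptions.

```python
import random
import statistics

def portfolio_sd(n_props, n_sims=5_000, market_mean=0.05, idio_sd=0.08, seed=7):
    """Standard deviation of simulated equally-weighted portfolio returns.

    Illustrative only: idiosyncratic risk diversifies away roughly with
    1/sqrt(n_props), so small portfolios carry much wider uncertainty.
    """
    rng = random.Random(seed)
    sims = [
        statistics.mean(market_mean + rng.gauss(0.0, idio_sd) for _ in range(n_props))
        for _ in range(n_sims)
    ]
    return statistics.stdev(sims)

for n in (2, 10, 50):
    print(f"{n:>2} properties: sd of portfolio return {portfolio_sd(n):.4f}")
```

A two-property portfolio shows a markedly wider return distribution than a fifty-property one, which is the diversification argument from earlier made quantitative.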
There are multiple data requirements: how do the mean and distribution of vacancy periods vary across properties by type, region and quality (or you may believe they are influenced by which side of the street)? Empirical data may be hard to source and may challenge conventional thinking. This is the benefit of a model: removing group-think and unconscious bias and replacing them with a research-led investment process (and doing the maths that is too hard to do in our heads).
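The calibration step described above amounts to segmenting observed outcomes and estimating a mean and spread per segment. A minimal sketch, using a handful of explicitly hypothetical vacancy-period observations tagged by use and region:

```python
from collections import defaultdict
import statistics

# Hypothetical observed vacancy periods (months), tagged by segment.
# Real calibration would use a large empirical sample, not six rows.
observations = [
    ("office", "London", 4.0), ("office", "London", 7.5),
    ("office", "North", 9.0), ("retail", "London", 12.0),
    ("retail", "North", 15.0), ("retail", "North", 10.5),
]

by_segment = defaultdict(list)
for use, region, months in observations:
    by_segment[(use, region)].append(months)

for seg, vals in sorted(by_segment.items()):
    mean = statistics.mean(vals)
    sd = statistics.stdev(vals) if len(vals) > 1 else float("nan")
    print(seg, f"mean {mean:.1f}m", f"sd {sd:.1f}m")
```

The same pattern extends to any segmentation the data supports, and the per-segment estimates feed straight back into the driver table.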
To obtain the correct estimate of the expected property returns, and the uncertainty around this estimate, therefore requires a well-specified model, well-calibrated data and the buy-in of the users. The same requirements should have applied to the awarding of the 2020 A-level grades.