In the first of a series of occasional blogs about maths assessment, Andrew Taylor, Head of Maths at AQA, considers the importance of a balanced approach when assessing students.

Last weekend, I read Kris Boulton’s blog on assessment with great interest. The piece makes a number of points that really chimed with me as someone who works in assessment but came into the profession from a maths teaching background.
Kris writes about the importance to teachers of understanding assessment and recommends Daniel Koretz’s book as a starting point. Across AQA, we put a lot of time into trying to demystify assessment for teachers, through videos about assessment design and awarding, and in many of our meetings with teachers. In my experience, maths teachers are usually keen to know not just the what and how of assessment but also the why, and I am always happy to try and explain our approach to a question or an exam paper or a whole qualification.
Kris also talks about the challenges of assessment writing, particularly about questions that make significant reasoning demands on students. He is right in saying that such questions often need more time to answer than straightforward technique questions worth similar marks, and this raises one of the many tensions in exam writing. It really is all about balance, and a good paper has to manage conflicting demands such as:
- testing extended reasoning and covering lots of the specification
- putting questions into realistic contexts without making them too wordy
- wanting all students to have some success whilst challenging the best
- balancing novelty and familiarity so questions cannot be predicted but there is a recognisable style to the exam
- getting the demand right so that all grades can be awarded based on good evidence, but the exam drives better learning and higher standards
It is not easy to get the balance right. Writing is just the first part of the process, and we have to give as much care to marking and to the setting of grade boundaries. As an example, at GCSE level, we have to balance performance across tiers to ensure that a Grade C on the Higher tier is comparable to a Grade C on the Foundation tier. We’re grateful to our colleagues from AQA’s Centre for Education Research and Practice (CERP), who provide statistical analysis to help in the process. This quantitative insight is balanced by the qualitative judgements of our senior examiners, helping us to reach the right standard and to maintain it from one exam to the next.
The difference between recall and reasoning is an important one. GCSEs have defined assessment objectives, and the weighting of these objectives influences the construction of the paper. For the current specification (the one students will sit this summer), each question on the paper is written to target one of these assessment objectives.
The use of the word target is important here. As Kris says in his blog, it is possible for a student to use reasoning skills to address a question that was designed to be one testing recall. When testing recall or technical fluency, the expectation is that students have covered that material as part of their teaching programme. If a student has not covered this (or cannot recall it), but can reason their way through it, then credit will always be awarded for unconventional but valid approaches.
As a consequence of the way we target ‘recall’ questions, a question that makes a demand outside of the direct testing of the specification content is likely to be one testing reasoning skills or problem solving.
The construction of a successful paper goes beyond compliance with the assessment objectives. We must also ensure a number of other factors are considered:
- the subject content weightings (for example, the proportion of geometry marks on a paper) must fall within our range for the qualification
- marks must be attributed to Quality of Written Communication and functional elements
- the overall demand of the paper should be broadly comparable with that set in previous years
- all of the specification content should be assessed within a five-year period
Data and insight from past examinations are critical in helping us set future papers.
We’re fortunate in maths that there is less subjectivity in marking than in other subjects, and teacher confidence in the quality of marking is high. As you’d imagine, rewarding a perfect answer is straightforward, but the skill comes in awarding partial credit fairly to students who have shown some knowledge but not managed to answer the whole question. The development of electronic marking has given us richer data than we have ever had, meaning we can scrutinise how students perform in much more detail.
As an example of how this has changed our thinking, we began to notice that some questions carried “redundant” marks. Imagine a 2 mark question where the whole cohort either scored 0 or 2. When the mark scheme was written, partial credit would have been available for 1 mark, but the reality was that if a student could do the first step, they would always go on to complete the question. In such examples, the award of 2 marks merely exaggerates the gap between students who knew that skill and those who didn’t. As such, we’ve changed these questions over time to remove this redundancy, ensuring all of the marks in the paper help us to differentiate between students.
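The check described above can be sketched in a few lines of code. This is a minimal illustration, not AQA’s actual analysis: the function name, the input format (one integer mark per student) and the 2% cut-off are all assumptions chosen for the example.

```python
from collections import Counter

def redundant_marks(scores, max_mark, threshold=0.02):
    """Flag intermediate marks that almost no students actually receive.

    scores: one awarded mark per student on a single question.
    threshold: hypothetical cut-off; an intermediate mark held by a
    smaller share of the cohort than this is flagged as redundant.
    """
    counts = Counter(scores)
    total = len(scores)
    return [mark for mark in range(1, max_mark)
            if counts.get(mark, 0) / total < threshold]

# A 2-mark question where nearly the whole cohort scores 0 or 2:
cohort = [0] * 480 + [2] * 515 + [1] * 5
print(redundant_marks(cohort, max_mark=2))  # the 1 mark is redundant
```

On data like this, the intermediate mark is flagged because almost nobody lands on it, which is exactly the signal that the question is behaving as an all-or-nothing item.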
In the end we have to be fair to all students so they get the grade they deserve based on showing what they can do. They have a right to expect an exam that can be marked reliably and fairly. But we also have to be fair to the subject of mathematics and ensure we are testing that which is valued in the subject and not just that which is easy to set and mark. Mathematics is probably the most reliably marked subject in the curriculum. As the demand for questions that validly test reasoning, problem solving and modelling increases at all levels, the challenge for examiners will be to retain the reliability whilst giving the right emphasis to the key mathematical skills we want in our young people.
Have you got a question about maths assessment that you’d like us to answer in our next blog? Get in touch – email us at email@example.com or tweet us @AQAMaths.