Having never attended before, we were quite unsure what to expect from AHE 2023. But we were unprepared for the rare combination of massive names in the field of HE (Higher Education) pedagogy (Winstone, Carless, and Boud to name a few) paired with a welcoming, supportive, and dynamic community. There was an infectious and genuine dedication to radically reshaping Higher Education into a system that is more compassionate, more inclusive, and fundamentally about student growth and development.
The first keynote speaker, Sally Everett (Professor of Business Education, King's Business School), had been speaking to people championing assessment redesign across the country. Sally's model of wading through the ‘treacle’ clearly resonated with many people in the room and aligned closely with our experience that many academics feel disempowered when making decisions about their courses. This theme was echoed in other talks espousing the importance of encouraging educators to feel ‘at the centre’ of positive educational change. Top-down processes are most helpful when they decide strategic priorities, supply guidance and training for change-makers, and create constructive collaboration across the institution. However, curricular interventions are most effective when the teaching team has the responsibility and autonomy to address clearly defined challenges in their own way.
A recurring theme throughout #AssessmentConf23 was the purpose of assessment, not least in our own talk discussing our discursive approach to supporting disciplinary assessment reform. The concept of authentic assessment tied many of the talks together, calling for educators to think more broadly about the types of scenarios that can be considered authentic (Elizabeth Goode). While industry and employment are obvious sources of inspiration, purpose can also be derived from changing the way students view the world or themselves, as well as from creating useful artefacts that impact the world outside the course (David Boud). We know that students are more likely to engage with aspects of education that they value, but there is a lot of work to be done to clarify how students construct that value. An interesting idea presented at multiple talks was using the assessment process to ask students to tell us what important things they have learned during the course (seemingly without direct coordination – Paul Kleiman’s keynote as well as Rebecca Rochon’s excellent talk on the missing affective in assessment). Concepts such as portfolios, the ‘Unessay’, and ‘Tell me what you have learned’ play well into both ideas. Ultimately, a key direction for the future of Higher Education is for assessment to create value for students, staff, institutions, communities, people, and our environment, beyond just the allocation of a grade.
As is true of the literature, the AHE23 conference held many lively discussions about which aspects of assessment expectations should be transparent, and to whom. There were multiple concurrent studies on Assignment Briefs, an area some considered to have been under-studied since the work of Gilbert and Maguire (2014). The Assignment Brief is an essential aspect of building staff and student assessment literacy, but as Elaine Walsh and John Knight said during their talk, it requires a challenging balancing act of Clarity, Comprehensiveness, Conciseness, and Consistency (Walsh, 2021). However, the most crucial element of effective assignment briefing is commonly not implemented – Dialogue (Chahna Gonsalves). No amount of static content alone will maximise the transparency of an assignment brief; it must be supported by meaningful opportunities for lecturers and students to discuss expectations in detail. Subjects that wish to find a simple, single approach to improving assessment on their course would do well to consider implementing some form of standardised assignment brief across their teaching portfolio and incorporating teaching time within each module to discuss it.
Finally, there was something of a spectre that loomed over the whole of the proceedings – ‘Chatty G’. While this has caused widespread panic throughout the sector, speakers were keen to point out that there are important opportunities to be had. In line with the points made above on the purpose of assessment, educators need to take the time to consider how we can shape student engagement with AI (Artificial Intelligence). If the sole purpose of assessment is to supply a grade, then we can expect that students will ‘cheat’. However, if we adopt the mindset that graduates will be expected to use Generative AI in the same way we use email (to increase productivity), then can we use engagement with this technology to improve students' development of future-proof competencies (such as critical evaluation of AI-generated content)? If the purpose of assessment were to be useful, then whatever is created will have value irrespective of how it has been generated!