Setting up a cross institutional comparative evaluation study of first year assessment practices in law

Publication details

Collins, P, Schilmoller, A & Shircore, M 2011, 'Setting up a cross institutional comparative evaluation study of first year assessment practices in law', paper presented to My lawyer rules: assuring legal and education standards: the Australasian Law Teachers Association 66th annual conference, Brisbane, Qld., 3-6 July.


For academic staff working in smaller non-metropolitan universities, opportunities to converse and collaborate about assessment and student learning, whether with colleagues in their own schools and faculties or with colleagues at other universities, may be limited. This can result in uncertainty, or even complacency, about whether students' learning outcomes meet the threshold standards established for a discipline. With the formation of the new higher education quality agency, TEQSA, there will be a renewed emphasis on transparent standards and evidence of quality. A key question will relate to standards: in particular, what evidence do we have that our graduating students are of a comparable standard to others in the sector?

The Council of Australian Law Deans (CALD) has provided some answers to this question by adopting the Threshold Learning Outcomes in Law (TLOs), which set threshold standards of knowledge and skills that graduating law students will be expected to attain in a Bachelor of Laws. The TLOs represent what a graduate is expected to know, understand and be able to do as a result of learning, as required by the Australian Qualifications Framework (‘AQF’) 2010. While the TLOs provide a broad framework for the degree as a whole, they provide limited guidance for teachers at the coalface who struggle daily with questions concerning the appropriate level of attainment required in individual subjects. For example: Are our subject requirements sufficiently rigorous? Are we marking too hard or too soft? Are we expecting too much of our students, or too little? How do our students' learning outcomes compare to those in other universities? What can we do to improve our practices and processes?

These questions are all the more complex when one considers the diversity of students being taught across the sector. Such diversity includes institutional peculiarities due (for example) to demographics, equity issues and students' relative preparedness for university studies. Without the level of collaboration and collegial support that may be present in metropolitan universities, academics in non-metropolitan universities can struggle to find answers to these questions, compounding the experience of isolation.

Mindful of these concerns, academic staff from three non-metropolitan law schools agreed to engage in a cross-institutional comparative evaluation project to ascertain the level or ‘standard’ of our first year Bachelor of Laws courses (a course being a single unit of study or subject). The project is informed by extensive scholarship which suggests that:

• the way in which students are assessed fundamentally affects their learning;
• good assessment practice should be designed to ensure that students are able to demonstrate they have achieved the intended learning outcomes (teaching intentions); and
• well-designed assessment instruments, criteria and feedback provide an effective measure of academic standards.

This approach adopts the idea of constructive alignment, which concerns the importance of aligning learning objectives, teaching and the structures of assessment. With constructive alignment, Biggs argued, “the students are entrapped in this web of consistency, optimising the likelihood that they will engage the appropriate learning activities”. The burden rested on academics to determine whether they ‘can operationalise desirably high levels of understanding in ways that denote performances that can be elicited by teaching and learning activities, and that can be assessed authentically.’ This paper describes the rationale for the project, its development and implementation phase, the context informing analysis of the data, and the findings to date.