Learnership Management System

How do I evaluate my assessment practices?

Design and Use

Assessment instruments are designed to support specific kinds of inference. For example, a summative assessment at the end of a course is designed to support an inference about the level of student attainment at the time of testing. If the assessment has been designed so that it is valid and reliable, then the inference about the level of student attainment can be said to be justified. In this case we would say that the assessment is fit for purpose. The reason for this is that we have confidence in the assessment instrument and in the accuracy of the inferences made; the instrument is measuring what it is supposed to measure (valid) and the instrument is free of features (poorly written questions, ambiguity, confusing instructions) that would adversely affect student performance (reliable).
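Reliability can also be estimated statistically. As a minimal sketch (not part of the original guidance), the following Python computes Cronbach's alpha, a widely used internal-consistency estimate of reliability; the function name and the item scores below are illustrative only:

```python
# Cronbach's alpha: an internal-consistency estimate of reliability.
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)

def cronbach_alpha(item_scores):
    """item_scores: one inner list per item, each inner list holding
    every student's score on that item."""
    k = len(item_scores)            # number of items
    n = len(item_scores[0])         # number of students

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var = sum(variance(scores) for scores in item_scores)
    totals = [sum(item_scores[i][s] for i in range(k)) for s in range(n)]
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical 0/1 marks: 3 items, 4 students.
items = [
    [1, 0, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 1, 0],
]
print(round(cronbach_alpha(items), 2))
```

Values near 1 suggest the items are measuring a common construct consistently; low or negative values suggest noisy or conflicting items.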

We have described what is known as the design inference for this assessment instrument; the design inference is the inference that the assessment instrument was designed to support, for example an inference about the level of student attainment at the time of testing. Assessment results can be put to other uses (see below). When this happens, use inferences may be made. For example, the results may be used to stream students or to guide students in a career choice. When this happens, it is important to know that the use inference is justified. Does a particular result (or combination of results) warrant allocating a student to a particular stream? Does a particular result (or series of results) warrant inferences about suitable career directions for students?

Identify the Purpose of Assessment

In order to evaluate assessment practices, a teacher must work out how to determine whether the assessment practice is fit for purpose. This means that teachers must, in the first instance, be clear about the purpose of assessment. If you think about any particular assessment that you deliver, then you might identify a number of purposes.

An assessment might serve a formative purpose (assessment for learning) or a summative purpose (assessment of learning). However, assessments can serve other purposes.

    Diagnosis (to clarify the type and extent of learners’ learning difficulties in light of well-established criteria, for intervention);
    Screening (to identify learners who differ significantly from their peers, for further assessment);
    Qualification (to decide whether learners are sufficiently qualified for a job, course or role in life – that is, whether they are equipped to succeed in it – and whether to enroll them or to appoint them to it);
    Licensing (to provide legal evidence – the license – of minimum competence to practice a specialist activity, to warrant stakeholder trust in the practitioner);
    Certification (to provide evidence – the certificate – of higher competence to practice a specialist activity, or subset thereof, to warrant stakeholder trust in the practitioner);
    Programme evaluation (to evaluate the success of educational programmes or initiatives, nationally or locally);
    Comparability (to guide decisions on comparability of examination standards for later assessments on the basis of cohort performance in earlier ones).

Therefore, the first step in evaluating your assessment practices is to identify the purpose(s) of each assessment practice.

Evaluating an Assessment

One way (although by no means the only way) to evaluate an assessment is to examine student performance on an individual assessment task. Here, the main approach is to look at the assessment results (i.e. student performance) in detail, asking questions such as the following.

Are the results as anticipated in terms of the expected pass and higher-level scores? With a valid and reliable assessment, you would typically expect the scores of a normal distribution of students to approximate a bell curve.
How does the performance of students on this assessment compare with their performance on other assessments?
Are there any question items that are outliers or did not produce the expected results?
How did students and staff feel about the assessment? Were there any practical issues that need to be addressed in future?
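Spotting outlier question items can be made concrete with simple item statistics. A minimal sketch, assuming per-question marks are available as 0/1 scores (the function names and data below are illustrative, not a prescribed method): the facility index flags questions that were unexpectedly easy or hard, and the discrimination index flags items where performance does not track overall attainment.

```python
# Simple item analysis: facility (proportion correct) and discrimination
# (correlation between scores on one item and students' total scores).

def facility(item):
    # Proportion of students who answered this item correctly.
    return sum(item) / len(item)

def discrimination(item, totals):
    # Pearson correlation between item scores and total scores.
    n = len(item)
    mi, mt = sum(item) / n, sum(totals) / n
    cov = sum((i - mi) * (t - mt) for i, t in zip(item, totals)) / n
    sd_i = (sum((i - mi) ** 2 for i in item) / n) ** 0.5
    sd_t = (sum((t - mt) ** 2 for t in totals) / n) ** 0.5
    return cov / (sd_i * sd_t)

# Hypothetical 0/1 marks: one row per question, one column per student.
marks = [
    [1, 1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1, 0],
]
totals = [sum(q[s] for q in marks) for s in range(len(marks[0]))]
for q in marks:
    print(round(facility(q), 2), round(discrimination(q, totals), 2))
```

A facility close to 0 or 1, or a discrimination near zero or negative, would prompt a closer look at the wording of that question.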

Based on the evaluation, you may then need to improve the reliability of the assessment, e.g. by rewriting questions or rewording instructions.

You might also need to reconsider the alignment of the assessment with the teaching and learning activities and the learning outcomes. Whenever students perform in an unexpected way, there is usually a misalignment between these areas. Misalignment may lead to a revision of the learning outcomes, to redesigning teaching and learning activities, or to changing the assessment to ensure validity. Changes of this sort are made at stage 5 in Figure One. Planning instruction can include rewriting learning outcomes and/or redesigning teaching and learning activities. Revising an assessment can include rewriting assessment instructions and/or questions to improve reliability. Revising an assessment can also include removing questions and/or adding new questions to ensure validity.