What makes for effective assessment?
An assessment can be considered effective to the extent that it is valid and reliable. Instructors should also create and administer authentic assessments, to give students practice and experience performing real‐world tasks.
A valid assessment is one that measures what it is supposed to measure and provides accurate information about students’ mastery of the intended learning objectives or outcomes. An assessment that includes vaguely written questions may not be valid because the questions would measure students’ ability to guess the instructor’s intent as much as their mastery of the content. Validity is the most critical characteristic because the decisions that are made based on assessments (e.g., grades, instruction, curriculum) rely on their being true measurements of achievement.
A reliable assessment consistently yields the same results under identical circumstances. When designing an assessment, it is important to establish explicit performance criteria so that students' performances are evaluated the same way on different occasions and by different evaluators (of similar qualifications). For example, essays that are graded without explicit performance criteria may be evaluated differently by different evaluators, and essays of similar quality may be judged inconsistently by the same evaluator. In this case, the assessment would not be reliable.
An authentic assessment challenges students to synthesize the knowledge and skills they have acquired to perform a task that closely resembles actual, real‐world situations in which those abilities are used. Authentic assessments measure student achievement in the most direct, relevant means possible, and they promote the integration of factual knowledge, higher level thinking, and relevant skills in a meaningful context. For example, students in a contemporary social issues course that seeks to prepare students to educate themselves about issues and make policy position judgments could be given an assessment that asks them to do exactly that.
Types of Assignments
Open Polytechnic University. (n.d.). Types of assignments. Retrieved from https://www.openpolytechnic.ac.nz/current-students/study-tips-and-techniques/assignments/types-of-assignments/
Writing Good Multiple Choice Test Questions
Multiple choice test questions, also known as items, can be an effective and efficient way to assess learning outcomes. Multiple choice test items have several potential advantages:
Brame, C. (2013). Writing good multiple choice test questions. Retrieved from https://cft.vanderbilt.edu/guides-sub-pages/writing-good-multiple-choice-test-questions/
Versatility: Multiple choice test items can be written to assess various levels of learning outcomes, from basic recall to application, analysis, and evaluation. Because students are choosing from a set of potential answers, however, there are obvious limits on what can be tested with multiple choice items. For example, they are not an effective way to test students’ ability to organize thoughts or articulate explanations or creative ideas.
Reliability: Reliability is defined as the degree to which a test consistently measures a learning outcome. Multiple choice test items are less susceptible to guessing than true/false questions, making them a more reliable means of assessment. Reliability is enhanced when the number of multiple choice items focused on a single learning objective is increased. In addition, the objective scoring associated with multiple choice test items frees them from the problems with scorer inconsistency that can plague the scoring of essay questions.
Validity: Validity is the degree to which a test measures the learning outcomes it purports to measure. Because students can typically answer a multiple choice item much more quickly than an essay question, tests based on multiple choice items can typically focus on a relatively broad representation of course material, thus increasing the validity of the assessment.
The key to taking advantage of these strengths, however, is construction of good multiple choice items.
A multiple choice item consists of a problem, known as the stem, and a list of suggested solutions, known as alternatives. The alternatives consist of one correct or best alternative, which is the answer, and incorrect or inferior alternatives, known as distractors.
Writing Short Answer, Sentence Completion, and Extended Response Items
Short Answer: generally require one to three sentences to complete and are assessed by a rubric
- Make sure that the item can be answered with a number, symbol, word or brief phrase
- Use a direct question. This guideline is the same as the recommendation for writing other item types, and for the same reason: a direct question is the most common form of communication.
- Structure the item so that the response will be concise. As with selected response items, make sure the central idea is stated completely in the question.
- If the answer is numerical, make clear the type of answer you want (e.g., the units or degree of precision)
- Make sure the items are free of clues
- Avoid ambiguous, confusing, or vague wording. This rule applies to all item writing.
- Make sure the items are free of spelling and grammatical errors.
Extended Response: usually essays or other written responses (e.g., lab reports), normally one page (300 words) or longer. In arts and design areas, they can include musical compositions, improvisations, blueprints, architectural drawings, interior designs, arrangements, dance choreographies, art works, etc.
Most extended response items contain a prompt, also called a stimulus. Prompts can include excerpts from written texts, videos, or audio recordings that contain information that is needed in order to respond to the question.
- Limit the response to measuring the specified course or program SLO.
- If you are trying to address more SLOs than one prompt can accommodate, prioritize the benchmarks in terms of importance or write an additional question.
- Give enough information in the prompt (stimulus) to make clear the nature of the desired answer.
- Make sure the prompt contains all the information the student needs in order to understand the task. Don’t assume that the student will be able to read between the lines or figure out what might be missing.
- Avoid questions that are so broad that a knowledgeable student could write several pages on the subject.
- Make sure the question (or questions if more than one is needed) contains all the information the student needs. If you expect some type of graphic, then make sure that is clearly asked for.
- Avoid asking students to tell how they feel about personal things or to relate personal experiences.
- Use action verbs in the question that encourage extended responses, such as explain, discuss, illustrate, compare, show, describe. Avoid using verbs like name, list, and identify, as these words are likely to encourage the student to make lists or give short answers.
- It is best to write the scoring rubric at the same time that you write the extended response item. This will allow you to align the item with the rubric levels. As you write the prompt and the question, think about what you expect to see in a high-scoring paper and how these expectations are stated in the performance criteria that you are measuring against.
Brophy, T. S. (n.d.). A practical guide to assessment. Retrieved from https://assessment.aa.ufl.edu/faculty-resources/a-practical-guide-to-assessment/
This page is licensed under a Creative Commons Attribution-Noncommercial 4.0 International License.