Problems with Institutional Assessment

Assessment dominates education from K-12 through college. There are two broad types: formative assessment, which helps students improve, and summative assessment, which grades them. Institutional assessment, however, addresses the bigger picture of how well an institution or a department is doing academically.

In institutional assessment, teachers enter data into a mega-database. For example, teachers may enter their students' grades on each section of the final exam. Then someone, often a department head, analyzes the overall results using the online data to assess student learning across specific courses and across the department.
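To make that concrete, here is a minimal sketch of the kind of roll-up a department head might run over such a database export. The file name and column names ("course", "exam_section", "score") are hypothetical stand-ins for whatever the institution's system actually produces:

```python
import csv
from collections import defaultdict

# Hypothetical export: one row per student per final-exam section,
# with columns "course", "exam_section", and "score" (0-100).
section_scores = defaultdict(list)
with open("final_exam_scores.csv", newline="") as f:
    for row in csv.DictReader(f):
        key = (row["course"], row["exam_section"])
        section_scores[key].append(float(row["score"]))

# Report the average score for each exam section within each course.
for (course, section), scores in sorted(section_scores.items()):
    print(f"{course} / {section}: mean {sum(scores) / len(scores):.1f}")
```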

Institutional assessment has some basic flaws:

1) Most institutions have not identified a curriculum specific enough to be assessed. Many curricula contain only very general statements of learning. For example, an English department might state that students will write a well-written essay. Has the department specified what constitutes a well-written essay? Likewise, a Modern Language department may have the curriculum statement "The student should speak in sentences that have relatively simple structures and concrete vocabulary." What does "speak" mean? Does it mean being able to talk about one's life, to hold a conversation, to repeat from memory? When there are only general learning statements, there can be no meaningful assessment.

2) If departments have identified specific learning goals, what is the priority of those goals? For example, in English the purpose of writing is to communicate ideas or feelings. Shouldn't the organization of ideas count for more than spelling? Or do spelling and grammar carry the same assessment weight as organization? Likewise, in Modern Languages, are all four skills (listening, speaking, reading, and writing) weighted equally in assessment even though, both in class and in the real world, people listen and speak almost twice as much as they read and write? Have the specific learning goals and their priorities been communicated to teachers and students through a department website or wiki? (One way to make such priorities explicit is sketched in the first example after this list.)

3) The departments do not have exemplars that show the quality they expect of students. Does the English department share electronically with all English teachers essays that show what constitutes a high-level paper, an acceptable paper, and an unacceptable paper? Again, are these exemplars on the department website for each course? Does the Modern Language department share audio files of a good ten-sentence conversation through its website or its department app?

4) They have vague assessment tools. The English department has a generic rubric (has good organization, conveys ideas, etc.) that can be interpreted differently by different people. What type of essay will be written? An autobiographical essay requires a very different approach than a contrast essay. In Modern Languages, how will writing be assessed – holistically or analytically? If different educators can come up with different scores for the same student, then the assessment tool does not accurately measure learning (a simple rater-consistency check is sketched in the second example after this list). Teachers can receive a digital copy of the rubric along with samples of work assessed using it. How well does the assessment tool match how the material was taught in class? Is the assessment tool, such as the final, developed at the competency level or at the highly competent level? Students may be competent but not highly competent.

5) The departments do not do a thorough analysis to get at the root problem once they have discovered a gap. If the students do not achieve well, was it due to the students' lack of effort, a misunderstanding of how to answer the assessment question, a specific word in the question, the thinking level of the question, the structure of the assessment item, the textbook, the textbook's PowerPoints, the teacher's explanation, the homework, or the online work? Usually much additional exploration is needed to determine the real reason for the gap. Once the department identifies the gap, what specific strategy will help the students overcome it? Will the department suggest technology-based strategies that appeal to students, such as YouTube videos, interactive websites, and interactive apps, and that help the students directly overcome the gap?

6) Most important of all, how does the institutional assessment help students improve in the course right now? Most institutions assess once a semester. After the analysis, the department focuses on what changes will happen in the following year. Unless assessment is done at small intervals throughout the year and changes are made almost instantly, the assessment does not benefit the present students. Next year's students may be very different from the students who took this assessment. Classroom teachers need access to the online data and analysis so they can take class time to give students new learning strategies. Then students can be successful learners!
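Picking up on point 2 above, here is a minimal sketch of turning stated priorities into explicit scoring weights; all criterion names and weight values are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical analytic weights reflecting a department's stated priorities:
# organization of ideas counts more than surface mechanics.
ESSAY_WEIGHTS = {"organization": 0.5, "ideas": 0.3, "spelling_grammar": 0.2}

# Listening and speaking weighted roughly double reading and writing,
# mirroring how often each skill is used in class and in the real world.
LANGUAGE_WEIGHTS = {"listening": 0.33, "speaking": 0.33,
                    "reading": 0.17, "writing": 0.17}

def weighted_score(subscores, weights):
    """Combine 0-100 subscores into one composite using explicit weights."""
    return sum(subscores[name] * w for name, w in weights.items())

# A student strong on organization but weak on mechanics:
print(weighted_score({"organization": 90, "ideas": 85, "spelling_grammar": 60},
                     ESSAY_WEIGHTS))  # 82.5
```

Published on the department website or wiki, weights like these tell teachers and students exactly how much each skill counts.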
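And for point 4, a quick check of whether a rubric is reliable: if several raters score the same paper and the spread is wide, the rubric, not the student, is the problem. The scores below are hypothetical:

```python
# Hypothetical scores from four teachers grading the same essays
# with the department's generic rubric (0-100 scale).
ratings = {"essay_017": [92, 74, 85, 68], "essay_042": [81, 84, 80, 83]}

for essay, scores in ratings.items():
    spread = max(scores) - min(scores)
    mean = sum(scores) / len(scores)
    verdict = "rubric too vague?" if spread > 10 else "raters agree"
    print(f"{essay}: mean {mean:.1f}, spread {spread} -> {verdict}")
```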

How does your institution assess student learning?

Harry Grover Tuttle teaches English and Spanish college courses at Onondaga Community College and blogs at Education with Technology. He is also the author of several books on formative assessment.


