Coming to Terms with Assessment

Developing and implementing meaningful but manageable procedures for assessing learning outcomes has become one of the key challenges for both faculty and administrators in higher education. During his tenure as President of the MLA, Jerry wrote a piece entitled "Assessment Changes Everything" that stirred a lot of debate, and we are reposting it here in hopes it can foster meaningful dialogue about assessment. We're including below selections from comments Greg and Paul made about Jerry's article based on their own recent experiences. We hope these reports, and Jerry's article, will help foster some discussion here about the vexing issues we cover. Please feel free to add your point of view, or your experience, in a comment.

I've actually been knee-deep in this debate at my own university during my four-year tenure on the university's Academic Program and Curriculum Committee (where I chair the subcommittee on general education) and as a member of our campus Reaccreditation Task Force, which is charged with coordinating the university's next reaccreditation by the Higher Learning Commission (the HLC). My experience validates many of your observations, but I have found that resistance to outcomes assessment takes less an ideological route (such as that of Bennett, whom you cite) than a practical one, especially in reference to the humanities.
The pressure to do outcomes assessment, in our case, is driven by the reaccreditation process, which now demands such data. Every department must produce an outcomes assessment for its program and enter the data (whether statistical, narrative, or a combination) into a campus-wide computer program that stores the information and makes it available for reaccreditation purposes.
What's this like in practice? Departments and their faculty have the authority to determine outcomes, stipulate rubrics, choose measures, and shape reporting formats. For faculty in the humanities, where assessment has largely meant giving grades and writing comments on student papers, there is a steep learning curve in adopting and adapting these mandates.
A learning outcome needs to be more specific than “critical thinking,” at least at the program level. For example, it might be: “The student will be able to analyze the particular elements of an author’s style and its relation to genre and tradition.” Then we need a measure, which in this case would be a close-reading assignment (described in the syllabus as approved by the university curriculum committee).
Then we need a rubric: "The student accurately identified the stylistic devices used by the author and was able to discuss the significance of the text's use of generic conventions." Sometimes the measurement is as simple as yes or no, and the data reported are as general as "78% of students met the rubric standard."
The department is also expected to report back on steps taken when the numbers indicate problems with student outcomes. Departments are not expected to measure outcomes in every course every semester, but to track key outcomes in representative courses over a period of years.
Once the howling and protesting die down, faculty and departments have found, as you suggest, that this process can be useful and informative for everyone. It does help departments focus on articulating what their programs aim to do for students, often in much more detail than they have done before. It does communicate this to students because it is on the syllabus, though quite often students don't read the syllabus or understand what they are being told. Faculty are prompted to make their assignment expectations clear and to be intentional in the design of assignments. And outcomes tend to get reinforced across a number of classes in the department. While all of this represents more change than overworked humanities faculty wish for, I think it will ultimately benefit us all.
I’ve had experiences at my institution similar to those Greg reports. We’re currently rolling out new rubrics for course assessment across the college of arts and sciences. Departments at Loyola have been given the authority to determine their own discipline-specific outcomes, and we are requiring assessments of just a few courses per semester.
I've heard a lot of the same complaints here that Jerry's article and Greg's comment retail: that assessment is just another needless bureaucratic hurdle thrown up by the administration, that we already assess our students with grades. But assessment can be beneficial in all the ways Jerry suggests if it's done in a way that doesn't just translate grades into learning-outcomes assessments.
There's the rub, of course. In my department, I think we've come up with some very good rubrics (along the lines Greg's department has, with, by the way, no coordination whatsoever). However, those implementing the actual assessment made it too easy for faculty to simply translate the grades they gave students into an assessment of their ability to perform relative to the rubric.
Faculty were asked simply to rate 20% of the students in a class as excellent, above average, average, or below average relative to two of eight rubrics. Nothing was to be included in the syllabus about assessment, and no specific assignments were to be developed to test students relative to the outcomes we were looking for. No portfolio of student work was required. The result was a mediocre process, one that went through the motions of assessment but really just translated grades into outcomes data.
The rubrics themselves, I would reiterate, were quite sound. They included: the ability to perform a close reading of literature and other kinds of texts, paying careful attention to textual detail; the ability to communicate ideas clearly in writing, developing well-supported arguments and analyses; demonstrating an understanding of, and an ability to properly use, literary and cultural theories; demonstrating knowledge of textual genres, histories, movements, traditions, and innovations; and being able to discuss how texts reflect and affect a diversity of societies and cultures. The challenge, it seems to me, is not only to come up with appropriate rubrics like these but to find a process for assessing learning outcomes that meets professional standards without swamping faculty with a significant burden of extra work. Where that seems to be the threat, the compromise may all too often produce instruments that do not provide a meaningful measure of the excellent learning outcomes we have developed.
