The questions we ask our students (and the ones they answer)

The accreditors are coming around to our campus again soon, so assessment is on the march. We held a two-day writing assessment workshop on campus over the summer, and I participated in scoring essays written by first-year students the previous fall. I came away just as skeptical about the quantitative assessment of college writing as I have always been, but I nonetheless found myself shaken by how much the exercise showed me about the pedagogy of college writing.

Recognizing the limitations of giving everybody the same prompt, detached from any connection to course content, the framers of our assessment project—a group of skilled and thoughtful people—gave the teaching faculty some directions about framing their writing prompts but left room for tailoring them to each class. This approach represented our effort to avoid the Scylla and Charybdis of writing assessment: the distorting artificiality of standard exercises, on the one hand, and, on the other, the inability of standardized questions to capture the kind of context-specific scholarship that we most want our students to practice. I was on my first committee trying to navigate those waters in about 2002; I haven’t yet seen anyone find safe passage.

In this latest assessment exercise, the variation among the faculty-written prompts was dizzying. Some were detailed, to the extent that they sounded like guidance for writing full-length scholarly articles. Some consisted of a single sentence inviting the student to analyze two writers, period. Some asked for summary followed by analysis. Some asked students to respond to passages that we faculty had trouble understanding out of context. My point is not that the prompts were bad but that they were so varied that it would be hard to imagine them producing writing that we could assess with a consistent set of criteria.

The real surprise came from reading the students’ essays. In crucial ways, their writing revealed that the students often had not read the prompts carefully, and they were right not to do so. The prompts asked for different kinds of writing, but the students responded in largely uniform ways. They understood the assessment exercise. Most of them had done similar things throughout their elementary and secondary educations: they knew they were supposed to write a short essay, conventionally structured, with some quoted evidence sprinkled in.

And indeed, that’s exactly what we assessed. With our rubrics and inter-rater reliability training in place, we were almost always able to score the essays in a straightforward way because the students knew to rely on the skills that had been praised and rewarded so often in their educations, no matter what their teachers tried to tell them on a given assignment.

The students’ ability to perform assessment-ready writing humbled me in two ways. First, it reminded me that students have often deduced my expectations when I have not explained everything that they need, even though I tend to explain a lot. The assessment exercise showed me how much we all lean on unstated expectations. Second, I gained a new way of thinking about how difficult I have found it to try new kinds of assignments, even with students who are curious, creative, and ambitious. Now I see such assignments in this light: every time I take a step away from an assignment that boils down to “Write an essay of length X on topic Y,” I remove some of my students’ confidence that they know what implicitly earns rewards in academic writing, even if the explicit requirements are incomplete or difficult to understand.

I still want to push my students and myself to break away from conventional essay assignments. I want them to become capable editors as well as readers, to give presentations that deploy ironic as well as explanatory slides, to work productively as members of creative teams that must evaluate their own work and choose how to share it. As I ask them to learn these skills, however, I will do so with a renewed awareness of how much I am requiring them to leave behind the techniques and assumptions that have gotten them to this college in the first place, and I need a similar sense of humility as I encourage colleagues to try new techniques and assignments. I have been thinking especially about the dynamics of classroom authority, race, gender, sexuality, class, and disability: it is easier for some of us than others to ask students to step away from expectations they know they can meet.

I am just beginning to turn from these thoughts to building a structured sense of how to respond constructively to them. From conversations I have had so far, I suspect that my thinking will draw heavily on the methods of my colleagues in the creative arts, for whom it is nothing new to ask students to express vulnerability, to judge one another’s work constructively, and to work in teams whose members have complementary skills. More to come.

Research after writing

Here’s a question that has bugged me for a long time: how can we teach research skills at the introductory level? Or, even trickier, how can we teach research in a non-disciplinary skills course at the introductory level? This semester, I’m trying out a new answer: teaching research by having students research papers they’ve already written.

Every first-semester Grinnell student takes a class we call the Tutorial: a content-based introduction to college-level skills in writing, reading, discussion, presentation, information literacy, and more. (The course is famously overloaded with priorities.) My versions of the course emphasize writing skills, and in the past, I have chosen not to do much with research beyond quotation and citation skills and an introduction to our library facilities; that is, I have covered information literacy rather than independent research skills, leaving the latter to upper-level courses. In thinking about adding a research component for Tutorial, I have always gotten stuck on the problem of assigning research when students cannot read enough to get a strong sense of a research field. Under such circumstances, how can I avoid turning the “research” into the reading of a few semi-random sources, chosen for their vague relationship to a developing paper topic?

This semester, I will try a new approach: building research into the revision of papers. The students will assemble annotated bibliographies of secondary sources for the course’s final portfolios, and they will choose the readings based on issues that arise in my initial responses to their papers. Because the course is portfolio-based, we can identify areas in which secondary sources would help amplify and refine a given argument. The students’ research will thus have a sense of purpose often lacking in preliminary bibliographies: they will go to secondary sources to solve specific problems. Here is the assignment. Comments are most welcome. If this approach works well, I will work to generalize its application to other introductory courses.