Song of MOOCs and Grades

In January, I signed up for a MOOC. Given the extreme burdens of my recent work, the timing was lousy. What was I thinking?

I have followed MOOC news for the past two years and been active in Twitter discussions of venture capital’s latest attempt to abolish the professoriate. (I strongly recommend the blogs by Jonathan Rees and Tressie McMillan Cottom on this and related topics.) But I felt that my ability to critique was limited without direct acquaintance with the beast – a scruple that doesn’t seem to trouble billionaires who don’t know which end of a university is up. My one previous attempt, last spring, foundered on the shoals of professional duties. This time, I meant to finish.

So I picked the Songwriting course offered by Pat Pattison of Berklee on one of the major platforms. It fit the bill: a humanities-arts course for which I had some background but which would also be challenging. I’m a poet with extensive formal training in English and an amateur pianist. I have not attempted to write a song since I was a guitar-playing 12-year-old with no knowledge of music theory, so I didn’t have a head start on songwriting itself. Also, the course focused strongly on lyric-writing; musical composition wasn’t even required. Old skills, new form. A good combination.

I viewed every lecture, consulted recommended resources, participated in the boards, aced all of the quizzes, and completed five out of six assignments for peer review (lowest score dropped, so the 0 disappears). I enjoyed Pattison’s presentation and yes, learned from it. He avoided stumping for MOOCs themselves, offered useful nuts-and-bolts approaches to lyric-writing, and drew some insightful parallels between poetics and musical structure. In short, I worked. I had fun.

I am probably going to fail. But I get ahead of myself.

I was especially curious about how the peer-grading worked – I wanted to feel that from the student’s end. As a professor, I find it frustrating how much students focus on the grade instead of the learning, to the point of seeing learning as incidental. I can remember my growing, dumbstruck awareness, early in my career, that what students wanted from me were grades. Good ones, ideally, but often just lots of them.

This MOOC’s peer review took the following form. Each week, we had a writing assignment that focused on specific elements of lyric prosody – e.g. developing a structure that goes somewhere, use of line lengths, types and use of rhyme, and similar. The rubrics were clearly written and directly correlated with the assignment instructions. Did the work use the strategies taught to create stability? Were important ideas highlighted by structural elements? And so on. Each week, after submitting my work, I then received five anonymous assignments to score. Every scoring rubric had a field in which one could write comments to the other student. A few hours after those scores were due, everyone’s results would be available.

On the first two assignments, my peers, whoever they were, scored me very high. This, I felt, demonstrated my ability to write for a rubric. Not that it was that mechanical. In the first week, I made a list of song ideas – I could only come up with three – and then, each week, I used whichever idea best lent itself to that week’s rubric. In my reviews of others, five per week, I saw a wide range of ability to follow instructions. I’ve been teaching too long for that to surprise me.

Anyone I reviewed got way more than they paid for.

By the second week, people who were mad about their peer grades started posting complaints. Some would post their whole assignment, with the reviewer’s scoring and comments, and then append point-by-point rebuttals of the grade. I hadn’t thought about how the online format would make it easier to complain about grades. The comments to such posts contained a mixture of encouragement and agreement with the refuted reviewers.

These had me laughing out loud – inadvertent entertainment! – until I began to feel voyeuristic: I was watching students gripe about grades. Except they weren’t complaining about or to a professor, they were debating with anonymous “peers.” I use the scare quotes because the concept of peer came under attack. Some people felt they were being graded by people who knew less than they did. This is, at least some of the time, true. But this is what we signed up for – didn’t people know this going in?

What did they want?

Many of the complainers wanted the professor or teaching staff to grade them. You know, ‘cause those people know more about the subject than the rest of us do. This desire both heartened and appalled me – heartened because it showed an understanding that an expert can assess work far better than another random student. Appalled because, evidently, expert attention should be free to any and all comers.

The grade-complaint threads amused me until I started getting failing scores on my weekly assignments. Oh, I was still writing to the rubrics. I’m not sure why my scores were suddenly so low – and then I got to feel the temptation to make a score-refutation post. Of course, I resisted, since doing that would have suggested that something was at stake. Nothing was.

I’ll indulge slightly here, to a point. In one assignment, we had to mark the natural spoken stresses in our lyrics. Now I can out-scan my thousands and tens of thousands. Yet one reviewer’s sole comment was that I should have marked stressed syllables with / and unstressed with – . I had done that. Did my marks not appear in his/her/their browser? Did the “peer” get the natural stresses backwards? Or what? I don’t know, and I can’t find out. Most of my reviewers left no comments.

Since my plummet from As to Fs – I never received any intermediate score on a peer reviewed assignment – occurred when I chose to write more complex language, and continued as we had to record ourselves singing our work, I suspect that my combination of linguistic ability and truly abysmal singing turned people off. (We were instructed not to judge the lyrics by singing quality, but that’s another post.)

The point: when grading is de-coupled not just from expertise, but also from the possibility of dialogue, the student has little means for understanding why a score was low or high. Posting work and reviews to the forums doesn’t solve this problem because the forum commenters didn’t give the scores. They are trying to read the minds of anonymous others.

Pattison and the staff responded on forums to some of this grousing. The purpose of peer review, said the staff, was to help the student doing the reviewing, not the student reviewed. Reviewing others’ work makes us see our own differently. OK. But why then give me the score? If we’re really trying to learn to perceive elements of lyrics, then we need to practice on that – and be assessed on our ability to do so by someone who knows how. That is, if the assessment matters for anything. But maybe it doesn’t.

Last year, I remarked in a Tweet that you can’t mass-scale human attention. I’ll riff on this from other angles in future posts, but here I want to apply it to that activity called grading, marking, assessment. Whatever. In a credentialing system, the letter or number is the end, and any performances (student’s or instructor’s) simply a means to it. Pretty soon software will be able to do a plausible simulation of grading at far faster rates than I can. Hell, if the point is grades, we could write a very simple program that just spits out a number at the push of a button. So if producing grades is the real point, let’s ‘fess up to that and write that little program.
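In that sarcastic spirit, the “little program” really would be this short – a hypothetical sketch, not anyone’s actual grading system, that produces a credential-shaped output without reading a word of the submission:

```python
import random

def grade(submission: str) -> str:
    """Return a plausible-looking grade without attending to the work at all."""
    # The submission is accepted but never examined -- that's the joke.
    return random.choice(["A", "B", "C", "D", "F"])

print(grade("my heartfelt lyric about stability and rhyme"))
```

Which is the point: if the output is all that matters, the attention can be zero.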

But if we want to teach and learn, we have to respond to student work. Responding is a broad activity; it requires time, attention, and is inherently dialogic. I respond to students when I speak to what they have spoken in class, when I ask for justification or consideration of alternatives. I respond when I write comments on their writing, addressing their arguments, organization, style, ideas – their sense of what’s important and why. Sometimes, in consultations, students realize for the first time what they actually said when I read a sentence or two back to them – because when they were writing, they did not attend to their own thoughts and language. Most of them don’t know how. That’s one of the things I try to teach them, and the effort embeds the deeper lesson that thoughts and language – including their own – are worth somebody’s attention. The whole testing regime does not send, indeed suppresses, that message: your thoughts and language and character, even, are worth another human being’s attention.

My piano teacher doesn’t grade me. She responds in ways that help me improve as a musician – if I attend to her attention. To work, this must be reciprocal. That cannot be mass scaled, unbundled, or outsourced to software, without first abandoning the purpose of instruction. I wouldn’t trade her response for a piece of paper with a letter or number on it.

The real work of teaching lies in the response, after the student has attempted something. The opening move, the delivery – the rhetoric of which is highly variable and grossly undervalued – is not the whole of it, or even the main thing. Part of the work of learning lies in the response to the response, something that the finality of the grade doesn’t easily allow except in the form of complaint.

Overall, I enjoyed the songwriting course. For a well-educated person who wants to dabble, to refresh, or to supplement private training, MOOCs can be fun. I have had a hell of a year since this one started (long story), and the songwriting course was my one little oasis for six weeks. But let’s not pretend that what I was able to learn is unrelated to decades of formal instruction in English and classics; reams of reading, memorization, recitation, and performance; and the attention of human beings who were more advanced in these arts than I.

Let’s also not pretend that what I learned has anything to do with my grade. I aced the online quizzes, and flunked the peer reviews as soon as we had to write more than simple sentences. I expect to fail this course.

See also:

Jonathan Rees on Peer Grading

Rees’s IHE article on same

and while I was preparing the post, Slate ran this article about who takes MOOCs

One thought on “Song of MOOCs and Grades”

  1. Loved this post, Rebecca. The key thing you said which resonated with me most strongly is the dialogic possibilities lost by anonymous peer review on MOOCs. I am frustrated by this, too. In #futureed, sometimes I wanted to have a dialogue about my assignment or about someone’s work I was reviewing. Once, in #edcmooc, I was reviewing an assignment done in a foreign language with no subtitles or anything! I wanted to contact the person and ask if they linked to the wrong file. I asked another MOOC participant who spoke the language to help me review it. Yeah, a bit funny for a MOOC, where you correctly say not much is really at stake.
    And that last point is important. I wonder why your peer reviewers gave you F grades without justification when they could have just not graded you at all?
    Back to anonymous peer review: I also feel that it (including or maybe especially for academic journals) opens the door for people to be harsh rather than helpful. And that’s one of the reasons I like openness in peer review that allows for back-and-forth, for the reviewer to discuss with the reviewed, to help them think of how to make things better. But that is another issue. Thanks for this post.
