Contexts and Criteria for Evaluating Student Writing by Jane Hindman
Of all the responsibilities you have as a composition instructor, evaluating student writing occupies the most time and has the furthest-reaching material effects. Though you may spend many hours preparing for class, conferencing with your students, and actually teaching, chances are you’ll spend many more grading. Though we instructors often place the highest value on the content and methods of our classrooms–be they critical pedagogy and Marxist interpretations of Clinton’s impeachment trials or traditional grammar drills and a New Critical reading of Paradise Lost–the grades that we assign our students are the only concrete, as well as the most valuable, cultural capital that our teaching creates. As Evan Watkins says, in his analysis of what transformative effect our teaching actually has in our culture, “you don’t report to the registrar that [your student learned] . . . a revolutionary fusion of contradictory ethical claims. . . . You report that 60239 got a 3.8 in Engl 322, which in turn, in a couple of years, is then circulated to the personnel office at [for instance] Boeing as 60239’s prospective employer” (18).
As a general rule though, new instructors spend much less time training to be graders than training to be facilitators in the classroom. In fact, you may be wondering why you need to learn how to grade at all, for you may think you know how already. In the teacher training courses I’ve taught, most fledgling teachers have initially imagined that grading is a skill they already have, that–as former English majors and/or good writers themselves–they can “naturally” evaluate essays. After all, they reason, they’ve received enough comments on their own papers, right? They know how the process goes; and besides, good writing is obvious: we all know it when we see it, so it should be pretty easy to figure out how to evaluate it.
Would that it were that simple. In actuality, as a visit to any norming session composed of instructors from across campus will demonstrate, few university faculty agree on what good writing looks like. In fact, it’s highly unusual when faculty from other departments do agree with the criteria for good writing that we in composition espouse. Even within our own departments (and perhaps within your group of new writing instructors), contentious debates flourish about issues like which factors should have the highest priority in determining a grade–grammar or content; about whether a five-paragraph essay signals proficiency or a lack of critical thinking skills that necessitates developmental writing work; about how many aural/oral confusions should be “allowed” in a passing essay; about how much “credit” a student should get for taking risks in her argument. These sometimes heated discussions are the rule rather than the exception.
Why do people disagree so much about what constitutes good writing? And considering that no department yet has been able to find the definitive resolution to these debates, what are you going to do to be a consistent and fair evaluator, especially if students try to argue with you over grades? (And believe me, they will.) How can you feel confident that your grades and/or your guidance to students about how to improve the quality of their writing are not “just” subjective interpretations? If your supervisor(s) and/or department chair review your grading practices, how can you be sure that your evaluations will be sanctioned, that you are fulfilling the goals of the writing program, the department, and the institution that employs you?
My advice is to integrate the following tenet into your composition theory and practice: “good” is a rhetorical term whose application and definition depend on context. In other words, evaluations of writing are always relative because they’re contextually determined. As a matter of fact, the power of any specific use of language depends on its context, regardless of whether that “power” is judged to be sublime (to use literary terms), “felicitous” (to borrow from speech act theory), appropriate, pornographic, persuasive, humorous, disgusting, satisfactory, bland, or “awesome, Dude.”
What does this tenet mean with respect to your efforts to learn how to evaluate student writing? The perhaps bad news is that you’ll have to disabuse yourself of the notion that there are universal standards for good writing, that if we just look hard enough and argue long enough, we’ll uncover those standards once and for all. The necessity to let go of that notion may seem commonplace to you, especially if you’re a proponent of post-modern theory and/or Foucault’s discussions of the order of discourse. On the other hand, you may be someone who thinks that not supporting the belief in inherent qualities of good writing is virtual heresy. Regardless of your predisposition, in practice understanding and internalizing the context-dependent nature of writing evaluation can be difficult.
Imagine, for instance, that the person sitting beside you in a holistic grading exam session believes that spelling errors are the mark of illiteracy and so wants to give the lowest score possible to the very same essay that you found outstanding because of its well-developed discussion of the sexist implications in the weapons imagery of Die Hard 2. It often seems “obvious” to us writing instructors that idea development is more important than spelling, that organization supersedes mechanics as a criterion for quality writing. But, like the definition and arrangement of all criteria for evaluation, the privileging of those characteristics depends on their context.
Lest you think we’re dangerously close to the slippery slope of solipsism, let me reassure you that there is some good news: mediating the context(s) within which you evaluate will ensure that your practices are fair, consistent, and authorized. In other words, if you understand and internalize the purpose(s) of each specific evaluation process you participate in, as well as the criteria developed for judging that specific writing task, then you will have sufficiently evaluated the context. As a result, your applications of “good” (or “mediocre” or “excellent” and so on) will be neither haphazard nor merely “subjective” (to you alone); rather, your scores and grades will be systematic, consistent (with the purpose and criteria of that context), and systemic (i.e., relative to the system within which you’re evaluating).
To return to our example of the “spelling = illiteracy” person sitting beside you in the norming session: if the leader of the session has provided graders with a rubric–a description of the criteria for evaluation–then you can refer to that description to adjudicate the disagreement between you about which to privilege in the Die Hard 2 essay–the development of the claims about sexism or the spelling. Chances are that the discussion that ensues will expand not just in topic (from spelling to “grammar” and from idea development to “content” and “thinking”) but also in number of speakers. Such discussions are an integral aspect of the process by which grading session participants come to agreement about the purposes and criteria specific to each grading context that confronts them. If the leader of an evaluation session does not provide a rubric, then the debate about spelling and development provides you and the other graders with the opportunity to decide among yourselves what criteria you should consider when you read the papers that you’re charged with scoring.
On the other hand, if and when you yourself are the person with sole responsibility for grading students’ writing (as you probably will be when you teach your own section(s) of composition), then your fairness and consistency depend in large part on your careful determination of your purpose(s) and criteria for evaluating. Many writing programs assist instructors by prescribing general purposes for writing (and therefore for evaluating) individual assignments, along with criteria for grading. These clarifications maintain consistency across different sections of the same course and supply individual teachers with the written description of “what you want in this paper” that students often ask for. In addition, programmatic criteria for grading offer new and experienced instructors the materials they need to best understand the institutional goals that inform their specific classroom contexts. In the context of individual assignments (or sometimes in lieu of any stated programmatic goals for composition courses), many instructors negotiate with their students the criteria for grading essays. Such a process makes explicit for students and for the instructor what expectations and standards will govern their evaluations.
But enough talk. It’s in the actual doing anyway that these complicated, contextually dependent meanings become more clear. So not to worry if what I’ve said so far seems abstract and/or confusing. The exercises that follow are intended to illustrate in practice what I’ve just theorized. If you have a leader directing your training, she should be able to supply the sample student papers you need and oversee the group discussions you have. But even if you are not a member of an organized teacher-training program, if you and at least two or three other new teachers can find some sample student essays and complete these activities, you will develop a good understanding of the following:
· Assumptions about good writing you and others currently have
· Instances of specific writing practices that demonstrate your (and others’) assumptions
· Revisions to your assumptions you want to make
· Assumptions about good writing explicitly or implicitly required in your institutional context(s)
· Descriptions of the criteria for good writing that facilitate students’ understanding
· The variety of purposes for evaluating student writing
· The variety of methods of writing evaluation and the purposes they best serve
· The variety of contexts within which composition instructors evaluate student writing.
Good luck, and happy grading.
Part One — Taking Placement Exams, Defining Criteria
1. Diagnostic Writing — 30 minutes. Write a response to the prompt you are given. (Trainers or Groups—see Appendix I if your program can’t supply a sample essay placement exam prompt.) Be sure to save your essay, as you will refer to it again after several other activities.
2. Metacognitive Writing about Diagnostic Writing — 30 minutes. Write about the process of writing the in-class diagnostic essay. What strategies did you use to be successful on the exam? For instance, what aspects of the writing process (invention/free-writing, planning, organizing, drafting, revising, proof-reading) did you most attend to? Which did you ignore? Did you make conscious decisions about how to divide your time? If so, on what basis did you make those decisions? Did you maintain those decisions or change your mind later? If you didn’t consciously make such decisions, why not? What choices would you make again during an in-class writing situation? Which choices would you change?
3. Defining Good Writing. Make a list or write a description of what you think constitutes good writing. What are the characteristics of mediocre writing? Of definitely “bad” writing? Now write about how you have formulated these opinions about writing. Whose attitudes have you adopted and whose are you rejecting? In what circumstances would you change your mind about what constitutes good writing? What characteristics of good writing are immutable?
1. First, share your criteria for good writing with each other. On what points (if any) do you agree? On which do you most forcefully disagree? Decide among group members which criteria you want to represent the group’s consensus. If applicable, repeat this process within the large group. When all groups have reached an agreement, make an official list of those criteria.
2. Now, evaluate the usefulness of this set of criteria:
· What type(s) of writing will it best measure? (For instance, what writing tasks does it best assess: proficiency writing exams that receive a holistic pass/fail grade, placement exams that determine the appropriate level of composition instruction for individual students, formal research papers that receive a letter grade, rough drafts that will be revised later, journal writing that demonstrates students’ engagement with their reading assignments, in-class timed writing essays, take-home essay tests? Is it intended to assess developmental writing rather than advanced composition and/or any other level(s) of writing? Should it be?)
· How specifically does it articulate the characteristics of good writing? Will students be able to understand the terminology it uses? For instance, does it rely on a COIK (clear only if known) explanation of development or organization?
· Is each level of quality uniquely defined? Are highest, middle, and lowest levels demarcated with specific descriptions? Are the middle and lower categories explained in their own right, or are they defined only in opposition to the highest category?
· What other aspects of the criteria need to be considered?
3. Discuss what you have learned about the most effective methods for constructing and evaluating criteria for grading.
Part Two — Holistically Scoring Placement Exams
1. Holistically score student placement exams
· Read several examples of essays that incoming first-year students wrote—preferably in response to the same prompt that you wrote to.
· Read “A Rubric for Freshman Placement Essay Evaluations.”
· Re-read and assign a score to 6-8 different student essays. (Trainers or Groups—see Appendix II for how to choose these essays.) Make very brief notes to remind you of what characteristics of each essay evoked the score you assigned so you can discuss your choices with the group.
2. Compare and contrast your group’s collaborative criteria for “good” with “A Rubric for Freshman Placement Essay Evaluations.” In what ways are your group’s views ignored or undermined? What specific purpose are this evaluation of placement exams and its rubric meant to serve? In what contexts might the exam and the criteria given in this rubric not apply? In what ways are the rubric’s descriptions vague or fuzzy? What difficulties might a new English instructor have with internalizing the criteria assigned by the rubric? Would your group’s criteria improve the instructor’s ability to internalize them or not?
1. Referring to the 6-8 essays that you scored on your own, stage a norming session in which you discuss your scores with each other. (Refer to Appendix II if you don’t have a supervisor who can conduct the session.) Keep track of how much fluctuation you see between your scores and others’.
2. After the norming session, discuss these issues: Which readers are usually high or low? What seems to be the explanation for that tendency? Which persuasive points during the norming session most convince you about another evaluator’s point of view? What points of your own seem the most persuasive? What most annoys you about other people’s view of writing? Why do you suppose that particular thing annoys you?
3. In a large group, grade 10-20 more essays after the norming discussion. How consistent were you as a group? How “normalized” were you as an individual?
4. Discuss as a group what the session has taught you about evaluating placement exams.
· Anonymously grade two of your peers’ essays (also anonymous) that they wrote to the prompt. Ask the Trainer or your fellow students to return all essays to their original owners. Review your essay and reflect on the one(s) you evaluated. Did you and/or your colleagues perform as well as you expected? What (if anything) does your performance and/or theirs tell you about the effectiveness of timed-writing assignments in assessing writing proficiency?
· Review together the metacognitive writing you each did about your process of writing to the prompt. What clues does that writing provide about the strategies most effective in timed-writing settings? How do those “clues” translate into strategies you’ll teach your students?
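If your group wants a quick way to quantify the score fluctuation mentioned above, a short script can tally each essay’s score spread. The sketch below is purely illustrative: the essay labels, the recorded scores, the 4-point scale, and the spread threshold are all invented for the example, not part of any particular program’s procedure.

```python
# Illustrative tally of norming-session scores (all data invented).
# Each list holds one score per reader for that essay.
scores = {
    "essay_A": [4, 4, 3, 4],
    "essay_B": [1, 4, 2, 1],  # wide spread: a "split" worth discussing
    "essay_C": [2, 2, 3, 2],
}

for essay, readings in sorted(scores.items()):
    spread = max(readings) - min(readings)
    label = "split: discuss" if spread >= 2 else "close agreement"
    print(f"{essay}: scores={readings}, spread={spread} ({label})")
```

On a 4-point scale, essays whose spread is 2 or more are the “controversial” ones that a norming discussion most needs to dwell on.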
Part Three — Responding in Writing to Student Essays
1. Read about various ways to evaluate student writing. For instance, look at the chapter entitled “Responding to Student Writing” in Erika Lindemann’s A Rhetoric for Writing Teachers.
2. Write about what you see as the differences between holistic grading and responding in writing to student essays. What different strategies must an instructor employ in each context? How would a wise writer’s strategies change according to the method by which she’ll be evaluated?
3. Letter-grade students’ final drafts for a first year composition class.
· Read the 3-5 student essays provided.
· Assign a letter grade and comment on each as if you were responding to the student who wrote the essay.
· Be prepared to share your comments in class.
1. Individually record in writing the criteria you used for assigning a letter grade to the student essays you read. Then discuss the criteria you each used. Is there any agreement? Where? What criteria are most contested among you?
2. Compare your grades on the 3-5 essays you evaluated.
· Consider the agreement/disagreement in assessments and then compare that disparity with the agreement/disagreement on criteria. What, if anything, is significant about the relationship between the two?
· Now review the written comments that the different instructors-to-be make. What different purposes inform their comments? For instance, are instructors writing to improve the student’s next essay, to justify the grade they assign, to motivate the student to keep trying, to engage the student in further thinking about the ideas discussed in the essay, and so on? Which of these reasons for commenting seem most useful to you? Why?
3. If your Trainer and/or the Writing Program offers criteria for evaluating student essays, refer now to those. If not, decide among yourselves and record what criteria you’ll be using to comment on the student essays written for the first formal assignment you’ll be grading.
4. Using the criteria you’ve been given or that your group has developed, grade and respond to 3 more samples of student essays written in a context similar to that of the first formal assignment you’ll actually be grading. Compare your grades and comments this time around to the ones you made before you discussed the criteria. Have the grades and/or the flavor of the comments changed? How? What purposes do the changes serve? Do they make grading more effective and/or more consistent? In what ways?
Part Four — Evaluating Rubrics
1. In addition to the one supplied by the writing program you’ll be teaching in or the one created by your group, collect at least two rubrics for grading first-year composition essays. If your writing program cannot supply you with these additional examples, you could check web sites or books about writing assessment. (Some large writing programs make public their criteria for evaluating students’ compositions; the University of Arizona, for instance, has published many editions of A Student’s Guide to First-Year Composition, which usually includes a rubric for evaluating students’ essays.)
2. Write about the differing ways that these rubrics operate. Do the criteria contribute to your ease of grading and/or to the students’ understanding of what and how they should write? In what ways are the rubrics’ descriptions vague or fuzzy? Which rubric most undermines and/or supports your views of what constitutes good writing? Which of the rubrics you’ve encountered would best accommodate a new English instructor’s efforts to internalize its criteria? Why? Which system would you most like to be graded under? Why?
1. Discuss each of your individual appraisals of these various rubrics. (Refer again to the list for evaluating criteria in Part One.) Compare them to the one you’ll actually be using when you grade students’ essays. Is yours explained as clearly as it could be? If not—and if it is a version required by your program—how will you explain or supplement it so that your students will best understand how to shape their writing? What views of writing inform the rubric you will use? Which specific features of the rubric(s) reveal its perspective on writing and the writing process?
Part Five — Writing Responses to Drafts, Evaluating Response Methods
1. Read Ed White’s article “Post-structural Literary Criticism and the Response to Student Writing” and Peter Elbow’s article “Ranking, Evaluating, Liking.”
2. Respond in writing to these sets of questions:
· White claims that ETS developed holistic scoring as a way to produce consistent test scores and thus to remedy the unfairness inherent in previous grading situations. Do you agree with White’s belief that the holistic scoring method improves fairness and promotes a sense of community among English teachers? Why or why not?
· Summarize (or cite) one point Elbow makes that you support wholeheartedly and explain why you agree with his view, OR do the same for a point that you disagree with strongly. Describe how you might successfully apply one of Elbow’s ideas or suggestions in a classroom, OR describe what disaster (or mere problem) you think would probably result from your using another of Elbow’s suggestions.
· White claims that “[t]he simple fact is that the definition of textuality and the reader’s role in developing the meaning of a text that we find in recent literary theory happens to describe with uncanny accuracy our experience of responding with professional care to the writing our students produce for us.” Elbow advises us to learn to “see potential goodness underneath badness,” to “read closely and carefully enough to show the student little bits of proto-organization or sort of clarity in what they’ve already written.” Analyze and explain how this suggestion supports or contradicts (or both) White’s viewpoints about the ways we read student writing.
3. Review your comments on the papers you read for Part Three—Responding in Writing to Student Essays. Imagine now that the student essays are rough drafts that you will return and from which students will develop their final, graded versions. Considering that new context and what you’ve read in Elbow’s and White’s essays, re-examine and revise your earlier comments on those essays. How and why have you changed (or not changed) those earlier comments? How does responding to a student’s draft differ from responding to a final version?
1. Discuss your responses to the two essays, in particular your perspectives on the similarities (or lack thereof) in White’s notions of teachers’ “developing the meaning of a text” and Elbow’s notions of “proto-organization or sort of clarity.”
2. Discuss the ways that each of you in the group changed your comments when you were responding to a draft rather than to the final version of a student’s paper. Draft a list of the differences in strategies you find most useful for responding to rough drafts and for responding to final versions of student essays. If applicable, share your list with the large group and then compile a combined list that reflects all groups’ perspectives.
Part Six — Synthesizing Possibilities
1. Read Horvath’s article “The Components of Written Response.” Write in response to the following:
· Describe at least two ways that reading Horvath’s article motivates you to revise (or shape for the first time) your beliefs about responding to student writing.
· List at least four different purposes for evaluating student writing and four different methods of evaluating. Now write about which methods work best in conjunction with which purposes. Be sure to explain your reasons.
Discuss and come to some consensus about matching methods for responding to student writing with the purposes of evaluating individual assignments. Also discuss these other important issues related to evaluating student writing:
· What other aspects (besides criteria and purpose for assessment) of the context for evaluating student writing are salient? How do those other aspects affect the criteria for grading and/or the purpose(s) for evaluating an assignment?
· What strategies can an individual instructor use to align pre-determined and prescribed criteria with her purpose(s) for evaluation in a specific writing context? How can she align a pre-determined purpose for evaluating student writing with her own (or with a group’s negotiated) criteria for the assignment? How can she teach students to strategize in these same ways? Should she teach them such strategies?
WRITING ASSIGNMENT– Evaluating Student Writing
The purpose of this paper is to facilitate your synthesis and critique of the various methods you’ve considered for assessing and evaluating student writing. In some way or another, you should demonstrate that you’ve read, analyzed, and thought about the materials. You might use this paper to formulate and defend your philosophy about grading papers or to analyze the ramifications of using a particular system. Whatever the claim you want to make, the argument of your paper should be based on your response to the different modes of assessment and evaluation of student writing that you’ve examined.
What you’re being asked to do is construct a context–a writing assignment and purpose–as well as a method for evaluating that written product. Explain how the method you chose to evaluate the writing is the most effective for the context, task, and goal that you’ve constructed.
Your essay could be a response to at least two different modes of assessment and evaluation of student writing. You could compare two of the grading systems you’ve practiced, or compare the benefits and drawbacks of holistic grading with those of other ways to respond to student writing. You could also compare two or more of the published writers’ viewpoints on assessing student writing.
Regardless of the approach you choose, you’ll need to do more than simply summarize the method(s) or view(s) of evaluation. Take a position about a preference for a particular method of evaluation. Based on what you’ve read and experienced during this unit and from your other experiences as a writer and student (and teacher), what method of evaluation do you promote? What makes that method preferable? In what context is that particular method effective, and why?
Suggested criteria for evaluation:
Content: Does your essay demonstrate that you have read and used the materials that you’ve discussed as a class? Are you contributing additional insight and reflection to the body of knowledge that you’ve built in group activities? Do you rely on overgeneralizations or personal declarations (e.g. “Students learn better if they get feedback.”) to support your points or do you use specific examples from the texts or from other research as support? Do you use enough examples from the student essays and/or the other texts you’ve read to support your argument?
Organization: Are the details (examples) of your essay arranged in the order that will most convince the reader that your claim is true and sustain her interest? Are like ideas chunked together? Have you logically connected one idea to the next AND explicitly signaled just what those logical connections are? Have you given enough signposts such that the reader can easily see the “map” of your essay?
Expression: Is the language of the essay easily accessible to the readers, concrete, and appropriate to your purpose? Do you avoid unnecessary formality, mixed metaphors, and stilted sentence structures and phrasing?
Mechanics: Are grammar problems infrequent and minor enough that they don’t impede the reader’s understanding of your text?
Appendix I: Prompts for 30 minute, timed-writing essays
These samples are similar to those often used to place students in writing programs or to assess their writing skills.
1. Choose a specific event or situation from your elementary school years. It might involve school, home, or some other aspect of your life that you remember. It might be a single moment or crisis or an event that happened over time. The event or situation should be one that was very important to you at the time. Discuss it, and then put it in perspective through mature reflection.
2. Certain things are not taught in the classroom, such as how to get along with others, how to rely on yourself, or how to manage money. Describe something you learned outside of school and how you learned it, and discuss its importance in your life.
3. “Don’t ever slam the door; you might want to go back.”
This quote addresses the issue of “burning one’s bridges.” Have you ever left a situation unpleasantly and then later wished you had handled things differently? Discuss the result and explain how it affected you later.
If, on the other hand, you have managed to keep all your “doors open,” discuss how you accomplished this and explain how it has affected your life.
4. Begin your essay with the following sentence (copy it into your essay):
The women’s rights movement has made great strides toward the goal of equal treatment for men and women.
Select one of the following sentences as the second sentence of your essay (copy the sentence of your choice into your essay immediately following the first sentence):
a) But we still have important work to do before our society can be considered non-sexist.
b) In fact, we must be careful not to infringe upon the rights of men in our attempt to compensate women.
c) Unless we make changes in language, however, our culture will remain biased in favor of men.
Complete your essay.
Appendix II: Norming Session
If you’re new to norming, the best results will come if you can get someone experienced to run the norming session for you. If your particular program doesn’t give placement exams or upper-division writing exams and thus doesn’t have people practiced in running norming sessions, perhaps someone in the testing office at your university is familiar with holistic grading and norming sessions. If, however, you have no choice but to run the session yourself, try these procedures.
To prepare for the session:
1. Choose at least seven or eight student essays as samples. The samples should have been written to the same prompt or for the same assignment that your graders will be evaluating. (The best option is for the sample essays and the essays to be graded to be written to the same prompt that the graders and you wrote to in the first activity, “Taking Placement Exams,” above.) The number of sample student essays you will need varies, depending on the grading system you’re using. You’ll need at least one representative essay for the highest, the lowest, and the middle scores of the rubric you’ll be using during your grading session. For instance, if you have a 4-point system, choose an essay that is without a doubt (or as close to certainty as you can get) a “1,” another that’s a clear “4,” and then a “3” or a “2.” (One way to be sure about scoring is to get experienced graders to help you decide which essay is an “absolute” 4 and so on.) You also need at least one essay that stirs up controversy, an essay that evokes a wide range of scores from different readers. If you can find what’s called a “1/4 split” (meaning that the same essay received a score of “4” from one reader and a “1” from another), then you’ve got a great example of a “controversial” essay; a “3/1” split is the next best bet. And finally, you need a couple of essays that mark the middle range of your rubric; these I’ll refer to later as your “neutral” essays.
2. Make copies of all essays for all your graders. (They’re assigned to read these in Part 2, Individual Work, #1.)
3. Make copies of the prompt or the writing assignment for all your graders.
4. Make copies of the criteria for evaluation for all your graders.
To conduct the session:
1. Ask all graders to read the criteria for grading and the prompt or writing assignment carefully.
2. Ask graders to re-read the “high,” “middle,” and “low” essays. Don’t tell them which one is which. Don’t even announce that you’re presenting a range of essays. Just ask them to read and score “these three essays.”
3. When they’ve finished, decide which of the three essays you’ll discuss first (probably the “high” one) and ask each person to announce her score to the group. Don’t discuss these scores yet; just record them on the board or make a note to yourself and ask the graders to do the same.
4. Ask the most experienced grader (and/or the person(s) whose score coincided with the one you intended to represent) to explain her reasons for assigning that particular score. Require the grader to use specific aspects of the student text and of the rubric to support her reasons for assigning a particular score.
5. Ask for discussion among the group members about their various scores. If the person whose score is most “off” from the one you intended to represent is willing, ask her to explain her score. At this point, all group members can and should discuss their individual explanations for their scores. Require the graders to use specific aspects of the student texts and of the rubric to explain their reasons for assigning a particular score.
6. You (especially if you are or seem to be an authority figure to the other group members) would probably do best not to offer an opinion about which score is “right.” If, however, group members’ conversation gets overly heated or their debates cannot be resolved, you can mediate their discussion by calling on the people with the most experience and/or whose scores seem most reasonable to you. Don’t let individuals over-generalize about writing or criteria; insist that they refer to the specific rubric for this context and to some specific features of the student text(s) they’re discussing. If none of these plans work to mediate debates, or if you don’t know for sure just who the experienced people are, then simply move on to another essay. The point of this session is for group members to norm themselves, not for you to get them to conform to what you or one other group member thinks.
7. Repeat this process (#3–6) with the “lowest” essay and then with the “middle” one. If necessary, offer other essays for the group to read which you think are examples of the score(s) about which the group members seem to have the most trouble agreeing.
8. Repeat this reading/discussing process with one of the “neutral” essays. If a relative consensus is reached (say, for instance, only two or three of ten people continue to disagree with a score, and then only by one point), then ask the group to read and score the sample essay(s) that you chose as examples of “split” scores. Again, don’t tell graders why you’re giving them this particular example; just ask them to read and score it. Again, begin the discussion by asking for scores from all graders and then for comments by graders you deem reliable. Permit the group to discuss their variances as they may.
9. When graders and you feel satisfied that you have discussed their individual points of view, give them the last “neutral” essay(s) to read and score. If, after the graders share their scores, you have no “splits,” you’re now “normalized”; that is, you’re ready to begin an actual grading process and should have relatively reliable consistency among the scores graders assign. If, however, you still have drastic splits (the widest possible range of scores assigned to the same essay), then my directions have probably been pretty worthless and you’re probably going to need the assistance of a trained professional. Sorry.
 Jane Hindman is a professor in the Rhetoric and Writing Studies Department at San Diego State University. Her email is: email@example.com.
A norming session is a preparatory portion of a group’s process of evaluating writing. In the session, graders review the criteria for scoring, often called a rubric; they then read samples of the type of writing they’ll be evaluating, individually assign a score to each sample, and finally discuss their individual scores with each other. The purpose of the norming session is to calibrate scores among the group members so that as much consensus as possible results when these group members individually evaluate the essays they’ll be scoring. The evaluators and their trainers (if trainers are leading the norming session) use the group discussion about the sample scores to inform members about and persuade each other of specific interpretations of the rubric and of the characteristics of particular writing samples.
Elbow, Peter. “Ranking, Evaluating, Liking.” College English 55 (February 1993): 187-206.
Horvath, Brook K. “The Components of Written Response: A Practical Synthesis of Current Views.”
Reprinted in The Writing Teacher’s Sourcebook, 2nd Ed. Article originally appeared in
Rhetoric Review 2 (January 1984): 136–56.
Lindemann, Erika. A Rhetoric for Writing Teachers, 3rd Ed. New York, NY: Oxford UP, 1995.
White, Edward M. “Post-Structural Literary Criticism and the Response to Student Writing.”
The Writing Teacher’s Sourcebook, 2nd Ed. New York: Oxford UP, 1988. 285-293.
Article originally appeared in CCC 35 (May 1984): 186-95.
Watkins, Evan. Work Time: English Departments and the Circulation of Cultural Value. Stanford,
CA: Stanford UP, 1989.