‘Gross Errors’ or ‘Good English’?: The Historical Legacy of California’s Writing Placement Exam by Holly Bauer
May 10, 2001
(This paper is based on a panel presentation given at CCCC 2001 in Denver, Colorado. The panel was entitled “Mandates for Composition: Rereading Public Policy Documents in California” and was chaired by Linda Brodkey.)
One of the reasons we decided to form this panel was to argue that a stronger understanding of the role of university policies and state mandates is an important part of composition’s historical legacy. In the past decade, much important research in composition studies has examined classroom practices, curriculum, composing processes, and student writing. But larger institutional policies also shape “what literacy is” and “who is literate” at the university. Such policies define and characterize literacy, writing, and students; they affect admissions and curriculum; and they structure the segments of a state’s public institutions in varied, even conflicting, ways. Our first panelist (Carrie Wastal) described the three-tier system in California that was instituted through the Master Plan. The next two panelists (Ellen Quandahl and Glen McClish) discussed a recent mandate and study in the California State University system that clearly affect literacy education today. I will discuss an 1898 University of California policy in order to argue that history has something to teach us.
More than 100 years ago, the University of California first instituted its Examination in Subject A. The Examination in Subject A has a long and complicated history that is linked to the University of California’s development as a major research university, to the relationships among the various segments of public education in California, and to larger cultural anxieties about the “mother tongue.” Pieces of this history are instructive, for particular UC policies that concern the Subject A still shape the treatment of student writing and literacy education in California public schools at all levels. The exam ushered out the 19th century, ushered in the 20th century, and remains with us at the beginning of the 21st century. It has served a gatekeeping function and a disciplinary role. And its history has much to teach us about how student writing has been tested, evaluated, and represented.
The Subject A requirement is described in the 1897-98 Register as an examination of “oral and written expression.” The description reads, “Training in this subject enters into the proper treatment of all topics of study taken up in the school course, and extends to speaking and oral reading as well as writing. Its aim is to secure to the student the ability to use his mother tongue correctly, clearly, and pertinently on all the lines upon which his thought is exercised.” About 25 years later, in 1922, the Examination was described as one “designed to test [undergraduate entering students’] ability to write English without gross errors in spelling, grammar, diction, sentence structure and punctuation.” A 1938 dissertation on the evolution of the Subject A concluded that “Subject A has been considered by the University as a phase of English essential to the preparation of students for the successful pursuit of knowledge at the University level” (8). A desire for “proficiency in English” was the overriding justification given for the examination; such proficiency was understood to be the domain of all subjects, and thus the result of reading and writing in various courses, not just English.
Since its inception, the Subject A requirement has been a source of much discussion and controversy. University faculty and administrators have debated whether the exam should be an admissions requirement or a prerequisite for enrollment. They have debated whether the exam should test written expression or knowledge of grammar. It has generated myriad policy debates, reviews, and status reports, most of which address why the test is not working as intended, how it can be improved, or how it can be made fairer. In its current formulation, a system-wide examination is given each May in which students write in response to a prompt that asks them to read and interpret a prose passage of up to about 1,000 words. A month later, high school and college writing teachers from across the state of California meet at UC Berkeley to read and score thousands of Subject A examinations using holistic scoring criteria. The essay examinations are written by students who have been admitted to the University of California. Satisfying the Subject A requirement is a prerequisite for enrollment in first-year writing courses and other university coursework that requires substantial writing.
The UC first instituted the Subject A requirement as an admissions requirement. In 1919, it was changed from an admissions requirement to a prerequisite for enrollment. In both cases, however, it is clear that historically the state has considered literacy education to be the responsibility of secondary schools and a prerequisite for success at the university. The test, then, affects high school, college, and university students and teachers. And it serves as a literacy “gatekeeper” for the university: a linguistic prerequisite for access to public higher education. It is important, then, to better understand what role the test plays in the politics of gatekeeping. It is also necessary to examine the definitions of literacy from which standardized tests like the Subject A operate.
Examinations like the Subject A appear to make clear definitions of literacy possible, as they rely on a standard of what counts as literacy. They operate, then, like common sense notions in that they oversimplify something that is actually quite complex. But if, as many poststructural theories suggest and as many composition scholars have argued, literacy is always already contingent and historically situated, then literacy at home and literacy in high school are possibly, even probably, quite different from literacy at community colleges and at the University of California. It is only when we accept a definition of literacy as a set of universal, isolable technical skills that the differences among these literacy sites collapse. One way to better understand the discursive effects of this collapse would be to study how an examination like Subject A interacts with literate practices in other segments of the system.
It would also be useful to better understand how the test is used to evaluate and categorize students. I would argue that acceptance of an oversimplified definition of literacy is at least part of what makes possible the treatment of literacy as an individual problem rather than a social one. That is, students are labeled and disciplined based on their performance on the test, with little recognition of the discursive effects of this process on the various people involved.
It is important not only for people working in composition to be concerned with the contents of such tests; it is also important to examine “talk” about the tests, that is, the representations of student writing and student literacy that surround them. We must better understand who is served, and to what ends, when literacy is reduced to things that are testable.