
Educators' beliefs about and approaches to the evaluation of student writing


Material Information

Title:
Educators' beliefs about and approaches to the evaluation of student writing
Physical Description:
Book
Language:
English
Creator:
Minick, Vanessa
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla
Publication Date:
2010

Subjects

Subjects / Keywords:
Assessment, validity, authentic, rubrics, response
Dissertations, Academic -- Curriculum and Instruction -- Masters -- USF   ( lcsh )
Genre:
non-fiction   ( marcgt )

Notes

Abstract:
ABSTRACT: The overarching purpose of this study was to describe educators' beliefs about the evaluation of student writing. The inquiry was guided by the following research questions: (a) what are the differences in the ways in which educators approach evaluating student writing? (b) how do educators evaluate the effectiveness of their evaluation methods for judging the quality of students' writing samples? and (c) what factors impact the evaluation decisions of educators? The following variables were considered: public and private school settings, evaluation methods, and educators' beliefs about evaluating writing. In order to gain perspective of the current status of the methods utilized by educators in their evaluation of and response to student writing, it is helpful to observe them during the teaching of writing and to talk with them about their process for evaluating samples of student writing. A mixed methods approach was undertaken during this study and included the collection of questionnaire responses, educator interviews, a classroom observation, and the collection of student writing samples. Interesting points in the findings included the noticeable absence of the notions of validity and reliability in the decision-making process of educators, the apparent impact of educators' self-efficacies on their selection of evaluation methods, and a focus by educators on writing factors perceived as impacting readability. Implications and future directions for research are discussed.
Thesis:
Dissertation (Ph.D.)--University of South Florida, 2010.
Bibliography:
Includes bibliographical references.
System Details:
Mode of access: World Wide Web.
System Details:
System requirements: World Wide Web browser and PDF reader.
Statement of Responsibility:
by Vanessa Minick.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains X pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
usfldc doi - E14-SFE0004798
usfldc handle - e14.4798
System ID:
SFS0028079:00001




Full Text

PAGE 1

Educators' Beliefs About and Approaches to the Evaluation of Student Writing

by

Vanessa Minick

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy
Department of Childhood Education and Literacy Studies
College of Education
University of South Florida

Major Professor: Jenifer Jasinski Schneider, Ph.D.
Susan Homan, Ph.D.
James King, Ed.D.
Jolyn Blank, Ph.D.

Date of Approval: November 12, 2010

Keywords: assessment, validity, authentic, rubrics, response

Copyright 2010, Vanessa Minick

PAGE 2

Dedication

This dissertation is dedicated to my family. I would not have been able to complete my program without their support and love. My husband, Matt, endured many nights alone with the children while I attended classes and patiently took them out of the house on weekends when I needed to write. My children, Brendan, Tristan, and Madison, have only known me as a doctoral student and have been filled with curiosity about what it is that Mom does while she works. Their smiles and hugs have kept me going when I was ready to take a break, and I am so happy that I had them with me throughout this journey. Thanks must also go to my parents, in-laws, and friends. During the many evenings and on weekends when Matt worked or during the day when I needed time to write and study, they all came to my rescue and found fun activities to use to entertain my children. Without their help, I would still be writing!

PAGE 3

Acknowledgements

I would like to extend my deepest gratitude to all of my committee members for their help and encouragement throughout the dissertation process. Dr. Jenifer Schneider refused to give up on me and helped me along by providing suggestions when I felt stuck in place. She patiently read through many drafts and helped me to stay on the right path. Dr. Susan Homan showed enthusiasm for my progress at every step of the way and was always ready with a few helpful comments. Dr. Jolyn Blank kindly joined my committee when I was nearly ready to defend my proposal and I will always be grateful for her careful attention to detail and specific feedback. Dr. James King stepped in at the last minute and saved the day for me by rounding out my committee. As always, his suggestions and comments were greatly appreciated and helped to improve the quality of my writing. He made me think twice about familiar topics and exposed me to new ideas. I am also greatly appreciative of Dr. Diane Yendol-Hoppey who willingly stepped in as a substitute for Dr. Homan during my final defense. A final thank you must go to Dr. Brenda Walker who agreed to join us as the final defense chair. Her willingness to help allowed me to graduate this semester! Thanks to all of you for your time, patience, and encouragement!

PAGE 4

Table of Contents

List of Tables v
List of Figures xi
Abstract xii
Chapter 1 Introduction 1
Introduction to the Study 1
Background of the Study 3
Statement of the Problem 5
Rationale for the Study 7
Theoretical Framework 10
Purpose 12
Research Questions 12
Delimitations 12
Limitations 13
Operational Definition of Terms 15
Assessment 15
Authentic assessment 15
Authentic audience 15
Belief 15
Educator 15
Evaluation 15
Grading 15
Narrative writing 16
Private school 16
Public school 16
Rubric 16
Standardized test 16
Writers' conference 16
Writing sample 16
Importance of the Study 16
Chapter 2 A Review of the Literature 18
Introduction 18
Evaluation Defined 19
History of the Evaluation of Writing 20
Validity and Reliability in the Evaluation of Writing 23

PAGE 5

Teacher Beliefs About Writing 26
Review of Pertinent Evaluation Research 34
Subjectivity of Raters 36
Assessment Options 37
Authentic Assessment 38
Authentic Writing Tasks 39
Feedback to Student Writers 42
Options for Assessing Writing 45
Holistic Scoring 46
Rubrics 47
Teacher Conferences 48
Peer Conferences 50
Portfolios 51
Standardized Writing Assessments 53
Strengths and Limitations of Available Literature 56
Chapter 3 Method 58
Research Design 58
Context of the Study 60
Phase One: Questionnaire 61
Writing and Evaluation Questionnaire 61
Development of the questionnaire 62
Pilot study 65
Questionnaire distribution 68
Participant demographics 69
Questionnaire analysis procedures 72
Phase Two: Observations 75
Observation data 76
Observation data analysis 77
Phase Three: Interviews 78
The Educator Interview 79
Interview data 81
Interview data analysis 82
Phase Four: Writing Samples 83
Writing sample data 83
Writing sample data analysis 84
Role of the Researcher 85
Triangulation 87
Conclusion 87
Chapter 4 Results 92
Results by Phase of Data Collection 92
Phase one: Writing and evaluation questionnaire 92
Research question one 93
Lesson objectives 93
Frequency of assessment of writing 94

PAGE 6

Most important aspect of writing during assessment 97
Drafts and evaluation 100
Frequency of use of evaluation methods 102
Use of formal vs. informal assessments 107
FCAT rubric use 110
Summary of findings related to research question one 114
Research question two 115
Perceived effectiveness of most frequent methods of writing evaluation 115
Overall effectiveness of all writing evaluation methods 119
Perceived helpfulness of FCAT rubric 119
Research question three 122
Time spent evaluating individual student papers 122
Feelings about the assessment of writing 124
Source of evaluation methods 126
Mandated methods of evaluation 127
Reasons for choosing FCAT rubric 128
Phases two-four: Interviews, observation, and student writing samples 129
Categorical Analysis 148
Instruction and Planning 152
Student Skills 154
Growth and Development 155
Feedback 157
Limitations of Writing Evaluation 158
Summary 159
Chapter 5 Discussion 160
Current Evaluation Climate 161
Contributions to the Field 163
Teacher Beliefs 164
Approaches to the Evaluation of Student Writing 165
Evaluation as an instructional guide 166
Mandated evaluation methods 167
Effectiveness of Evaluation Methods 168
Validity and reliability of writing evaluation methods 169
Importance of feedback 170
Selection of evaluation methods 172
Factors Which Influence the Evaluation Decisions of Educators 173
Standardized testing and writing evaluation 173
Additional factors influencing evaluation decisions 175
Beliefs into Practice 176
Implications for Future Research 179
References 182

PAGE 7

Appendices 197
Appendix A: Writing and Evaluation Questionnaire 198
Appendix B: Final Version of Survey Monkey: Writing and Evaluation Questionnaire 204
Appendix C: Interview Protocol for the Educator Interview 223
Appendix D: Additional Quantitative Data Result Tables 225

PAGE 8

List of Tables

Table 1: The relationship between research questions and their data source(s) 88
Table 2: Responses to question two 96
Table 3: Responses to question 11: Do you ever have assignments in which your students write more than one draft for you? 102
Table 4: Distribution of responses to the question, "Please mark how often you use each of the following methods of assessment while assessing the writing of your students." 103
Table 5: Percentage of time spent on informal assignments 108
Table 6: Percentage of time spent on formal assignments 109
Table 7: Relationship of the grade being taught and the use of standardized assignments 112
Table 8: Grade teaching and frequency of giving written feedback 114
Table 9: Effectiveness of most frequently used method of writing by county 116
Table 10: Responses by county of teachers' beliefs about the effectiveness of their evaluation methods 118
Table 11: Responses by county of teachers evaluating the effectiveness of feedback from the FCAT rubric 121
Table 12: Feelings of teachers with regards to the assessment of writing 125
Table 13: Rates of mandated assessments of writing 127
Table 14: Sample sentences from SYAC participant writings 132
Table 15: Teacher comments on student 3.8 paragraphs 139
Table 16: Visual representation of the categories and patterns found across all responses 149
Table D1: Frequency of assessing student writing 224
Table D2: County and frequency of assessing cross tabulation 224
Table D3: Experience and frequency of assessing cross tabulation 225
Table D4: Grade teaching and frequency of assessing cross tabulation 226
Table D5: Age and frequency of assessing cross tabulation 227
Table D6: Percentage of total work involved in assessing writing assessments 227
Table D7: Percentage of time assessing work by county 228
Table D8: Experience and percentage of total work in assessing writing cross tabulation 228
Table D9: Grade teaching and percentage of total work in assessing writing cross tabulation 229

PAGE 9

Table D10: Age of teachers and percentage of total work in assessing writing cross tabulation 230
Table D11: Spend more time in assessing some students' writing than others 230
Table D12: Experience and whether more time is spent in assessing some assignments than others 231
Table D13: Grade teaching and whether more time is spent in assessing some assignments than others 231
Table D14: Age teaching and whether more time is spent in assessing some assignments than others 232
Table D15: Feelings about assessing student writing 232
Table D16: Feelings about assessing writing by county 233
Table D17: Experience and feelings about writing assessment 233
Table D18: Grade teaching and feelings about assessing writing 234
Table D19: Age teaching and feelings about assessing writing 235
Table D20: Most important aspect that teachers look for in student writing 235
Table D21: Experience and most important aspect that teachers look for in a writing assignment 236
Table D22: Most important aspect of writing by county 236
Table D23: Grade teaching and most important aspect that teachers look for in a writing assignment 237
Table D24: Age and most important aspect that teachers look for in a writing assignment 238
Table D25: Sources of learning different methods of assessment 238
Table D26: Places where methods learned by county 239
Table D27: Assignments with more than one draft 239
Table D28: More than one draft by county 239
Table D29: Experience and assessing assignments with more than one draft 240
Table D30: Grade teaching and assessing assignments with more than one draft 240
Table D31: Age and assessing assignments with more than one draft 241
Table D32: Method of receiving grades by students 241
Table D33: Method of receiving grades by county 242
Table D34: Frequency of using checklists 242
Table D35: Experience and frequency of using checklists cross tabulation 243
Table D36: Frequency of checklists by county 243
Table D37: Grade teaching and frequency of using checklists cross tabulations 244
Table D38: Age and frequency of using checklists cross tabulation 245
Table D39: Teacher conferences 245
Table D40: Frequency of teacher conferences by county 246
Table D41: Experience and frequency of using teacher conferences 246
Table D42: Grade teaching and frequency of using teacher conferences cross tabulations 247
Table D43: Age and frequency of using teacher conferences cross tabulation 248
Table D44: Frequency of peer conferences 248
Table D45: Frequency of peer conferences by county 249

PAGE 10

Table D46: Experience and frequency of using peer conferences 249
Table D47: Grade teaching and frequency of using peer conferences cross tabulations 250
Table D48: Age and frequency of using peer conferences cross tabulation 251
Table D49: Frequency of holistic scoring 251
Table D50: Frequency of holistic scoring by county 252
Table D51: Experience and frequency of using holistic scoring 252
Table D52: Grade teaching and frequency of using holistic scoring cross tabulations 253
Table D53: Age and frequency of using checklists cross tabulation 254
Table D54: Frequency of portfolios 254
Table D55: Frequency of portfolio by county 255
Table D56: Experience and frequency of using portfolio 255
Table D57: Grade teaching and frequency of using portfolios cross tabulations 256
Table D58: Age and frequency of using portfolios cross tabulation 257
Table D59: Frequency of observations 257
Table D60: Frequency of observations by county 258
Table D61: Experience and frequency of using observations 258
Table D62: Grade teaching and frequency of using observations cross tabulations 259
Table D63: Age and frequency of using observations cross tabulation 260
Table D64: Frequency of rubrics 260
Table D65: Frequency of using rubrics by county 261
Table D66: Experience and frequency of using rubrics 261
Table D67: Grade teaching and frequency of using rubrics cross tabulations 262
Table D68: Age and frequency of using rubrics cross tabulation 263
Table D69: Frequency of FCAT scoring rubric 263
Table D70: Frequency of FCAT rubric use by county 264
Table D71: Experience and frequency of using FCAT scoring rubric 264
Table D72: Grade teaching and frequency of using FCAT scoring rubric cross tabulations 265
Table D73: Age and frequency of using FCAT scoring rubric cross tabulation 265
Table D74: Primary traits scoring 266
Table D75: Frequency of primary traits scoring by county 267
Table D76: Experience and frequency of using primary traits scoring 267
Table D77: Grade teaching and frequency of using primary traits scoring cross tabulations 268
Table D78: Age and frequency of using primary traits scoring cross tabulation 269
Table D79: Frequency of self assessment 269
Table D80: Frequency of self assessment by county 270
Table D81: Experience and frequency of using self-assessment 270
Table D82: Grade teaching and frequency of using self assessment cross tabulations 271

PAGE 11

Table D83: Age and frequency of using self-assessment cross tabulation 272
Table D84: Most frequently used method of assessment 272
Table D85: Experience and most frequently used method of assessment 273
Table D86: Grade teaching and most frequently used method of assessment 274
Table D87: Age and most frequently used method of assessment 275
Table D88: Rating of the assessment method used most frequently 276
Table D89: Effectiveness of frequent method by county 276
Table D90: Experience and perception of effectiveness of assessment method 277
Table D91: Grade teaching and perception of effectiveness of assessment method 278
Table D92: Age and perception of effectiveness of assessment method 279
Table D93: Second most frequently used method of assessment 280
Table D94: Rating of the assessment method used second most frequently 280
Table D95: Rating of second most frequent method by county 281
Table D96: Third most frequently used method of assessment 281
Table D97: Rating of the third most frequent method of assessment 282
Table D98: Effectiveness of third most frequent method by county 282
Table D99: Effectiveness of evaluation methods overall 282
Table D100: Use of both formal and informal methods of assessing writing 283
Table D101: Experience and use of both formal and informal methods of assessment 283
Table D102: Grade teaching and use of both formal and informal methods of assessment 284
Table D103: Age and use of both formal and informal methods of assessment 285
Table D104: Percentage of time of total evaluation spent on informal assessments 286
Table D105: Experience wise summary statistics of percentage of time of total evaluation spent on informal assessments 287
Table D106: ANOVA for percentage of time of total evaluation spent on informal assessments for experience 287
Table D107: Teaching grade wise summary statistics of percentage of time of total evaluation spent on informal assessments 288
Table D108: ANOVA for percentage of time of total evaluation spent on informal assessments for different grades taught 288
Table D109: Age wise summary statistics of percentage of time of total evaluation spent on informal assignments 289
Table D110: ANOVA for percentage of time of total evaluation spent on informal assessments for different age groups 289
Table D111: Informal assessment frequency by county 290
Table D112: Percentage of time of total evaluation spent on formal assignments 290
Table D113: Experience wise summary statistics of percentage of time of total evaluation spent on formal assessments 291
Table D114: ANOVA for percentage of time of total evaluation spent on formal assessments for experience 292

PAGE 12

Table D115: Teaching grade wise summary statistics of percentage of time of total evaluation spent on formal assignments 292
Table D116: ANOVA for percentage of time of total evaluation spent on formal assessments for different grades taught 292
Table D117: Age wise summary statistics of percentage of time of total evaluation spent on formal assignments 293
Table D118: ANOVA for percentage of time of total evaluation spent on formal assessments for different age groups 293
Table D119: Frequency of formal assessments by county 294
Table D120: Use of FCAT writing rubric 294
Table D121: Frequency of using FCAT writing rubric 294
Table D122: Use of FCAT rubric by county 295
Table D123: Experience and use of FCAT writing rubric 295
Table D124: Grade teaching and use of FCAT writing rubric 296
Table D125: Age and use of FCAT writing rubric 297
Table D126: Experience and frequency of using FCAT writing rubric 297
Table D127: Grade and frequency of using FCAT writing rubric 298
Table D128: Age and frequency of using FCAT writing rubric 298
Table D129: Frequency of FCAT rubric use by county 299
Table D130: Helpfulness of feedback on FCAT rubric to students 299
Table D131: Helpfulness of FCAT rubric feedback by county 299
Table D132: Experience and helpfulness of FCAT writing assessment to students 300
Table D133: Grade teaching and helpfulness of FCAT writing assessment to students 300
Table D134: Age and helpfulness of FCAT writing assessment to students 301
Table D135: Use of standardized writing assignments 301
Table D136: Use of standardized assessments by county 302
Table D137: Experience and use of standardized assignments 302
Table D138: Grade teaching and use of standardized assignments 303
Table D139: Age and use of standardized assignments 304
Table D140: Frequency of written feedback on writing assignments 304
Table D141: Experience and frequency of giving written feedback 305
Table D142: Grade teaching and frequency of giving written feedback 306
Table D143: Age and frequency of giving written feedback 307
Table D144: Current position 307
Table D145: Position distribution by county 308
Table D146: Ever taught writing to students 308
Table D147: Number of years teaching writing to students 308
Table D148: Currently teaching writing to students 309
Table D149: Grade level teaching 309
Table D150: Highest level of education 309
Table D151: Highest degree distribution by county 310
Table D152: Affiliation 310
Table D153: County in which school is located 310
Table D154: Name of school 311

PAGE 13

Table D155: Gender of the respondent 311
Table D156: Age of the respondent 312

PAGE 14

List of Figures

Figure 1: Responses to question 33: What is your current position? 70
Figure 2: Responses to question 35: How many years have you taught writing to students? Please include all years spent teaching writing to students in this response even if you are not currently teaching. 71
Figure 3: Responses to question 37: What grade level do you currently teach? 72
Figure 4: Responses to question two: How often do you assess student writing? 95
Figure 5: Responses to question three: What percentage of the time that you spend assessing all of your students' work would you say is spent assessing writing assignments? 97
Figure 6: Responses to question eight: What is the most important aspect of writing that you are looking for when you assess a student's writing? 98
Figure 7: Responses to question 12: How do the students receive grades for those papers? 101
Figure 8: Responses to question 14: Select the method of assessing student writing listed in the previous question that you use most frequently. 105
Figure 9: Responses to question 28: How often do you use the FCAT Writing Assessment? 111
Figure 10: Responses to question 32: How often do you provide your students with written feedback on their writing assignments? 113
Figure 11: Responses to question 17: Please rate the method of assessing student writing that you use second most frequently (as designated in the previous question) on a scale of 1-4 with 1 being minimally effective and 4 being extremely effective. 117
Figure 12: Responses to question 30: How helpful do you feel that the feedback from the FCAT rubric is to your students? 120
Figure 13: Responses to question seven: How do you feel about assessing student writing? 125
Figure 14: Responses to question 10: Where did you learn the different assessment methods that you use to assess student writing? 126

PAGE 15

Abstract

The overarching purpose of this study was to describe educators' beliefs about the evaluation of student writing. The inquiry was guided by the following research questions: (a) what are the differences in the ways in which educators approach evaluating student writing? (b) how do educators evaluate the effectiveness of their evaluation methods for judging the quality of students' writing samples? and (c) what factors impact the evaluation decisions of educators? The following variables were considered: public and private school settings, evaluation methods, and educators' beliefs about evaluating writing. In order to gain perspective of the current status of the methods utilized by educators in their evaluation of and response to student writing, it is helpful to observe them during the teaching of writing and to talk with them about their process for evaluating samples of student writing. A mixed methods approach was undertaken during this study and included the collection of questionnaire responses, educator interviews, a classroom observation, and the collection of student writing samples. Interesting points in the findings included the noticeable absence of the notions of validity and reliability in the decision-making process of educators, the apparent impact of educators' self-efficacies on their selection of evaluation methods, and a focus by educators on writing factors perceived as impacting readability. Implications and future directions for research are discussed.

PAGE 16

Chapter 1
Introduction

Introduction to the Study

The path to becoming an author begins early. Emergent literacy appears as children interact with their environment and come to understand that the symbols around them have meaning. That understanding evolves into attempts to communicate through scribbles, symbols, and pictures (Koenig, 1992; McGee & Purcell-Gates, 1997; Teale & Sulzby, 1986; Yaden, Rowe, & MacGillivray, 2000). Studies have focused on children's understanding of the functions of print and other symbols (Eeds, 1988; Goodman, 1986; Holdaway, 1979; McGee & Richgels, 1996; Snow, Burns, & Griffin, 1998), knowledge of book handling (Clay, 1966, 1985, 1991; Pinnell, 1996), familiarity with formal, written language structures (Bigge & Stump, 1999; Clay, 1985; Martin & Brogam, 1971; Sipe, 2000), and the identification of letters and numerals (Clay, 1985; McGee & Richgels, 1996). Such abilities are no longer viewed as precursors to reading; rather, they are seen as true literacy behaviors evident in young children (i.e., emergent literacy) (Crawford, 1995; Hiebert & Raphael, 1998; Teale & Sulzby, 1986; Whitehurst & Lonigan, 1998). The scribbles evolve into letters and/or pictures that represent people or things, then into combinations of letters, and finally into words first formed with invented spelling and then, finally, into conventionally spelled words to express the thoughts of the young authors (Dyson, 1985). Ideally, as children mature, they will learn that they have

PAGE 17

something to say and that making their voices heard through their writing comes first and that attention to grammar and conventions come later (Thomas, 2000). Writing is learned in this manner by all children regardless of socioeconomic status or race, and the only difference that may appear between populations of children is the speed at which they move through the writing development continuum (Mavrogenes, 1986). That speed is impacted by the differences that exist between the opportunities that children have to be immersed in communication and to engage in making meaning, a task limited by the materials that may or may not be available to children in their specific environments (Kress, 1997). In a review of the research available on early writing development and behaviors, Rowe (2009) discovered that current research in the field of early writing is shifting to focus on those differences in order to provide a better understanding of the link between writing development and environment.

Once children have learned how to write, attention turns to how to help them learn to write well and how to do so effectively. Students' ability to write and to communicate their thoughts and ideas through writing is critical to their success in school and in life. This ability is fostered through authentic writing tasks in the classroom (Black, Helton, & Sommers, 1994; Thomas, 2000) and through helpful feedback from the teachers who evaluate their writing (Atkinson & Connor, 2008; Murphy & Yancey, 2008). There are a variety of other factors that influence the writing proficiency of children. In order to learn to become writers, children profitably observe writers in action through the modeling of their teacher, their caretaker, their parents, and their peers (Temple, Nathan, Temple, & Burns, 1993). Modeling includes a teacher sharing thoughts aloud and demonstrating steps taken by writers to complete a piece of writing while a

PAGE 18

novice writer observes the process (Tompkins, 2008). Morrow (1997) and Temple et al. (1993) agree that some other factors that help children learn to write include exposure to a print-rich environment, being encouraged to try new things in writing and to take risks in terms of spelling and conventions, being given many opportunities to write, having opportunities to share and to talk about their writing, being allowed to worry only about their handwriting being legible rather than perfect, and being exposed to many examples of good writing. Another best practice in the teaching of writing involves scaffolding and differentiating instruction to meet the needs of every student at whatever level they are at any particular time (Berry, 2006; Vygotsky, 1978). These are all practices that can be used by teachers to help students become good writers.

Background of the Study

As a former high school English teacher, I readily admit that I love to teach reading and writing just as much as I love to participate in those activities in my personal life. I also admit that I was much more likely to see a love of reading among my students than to see them with a love for writing. Instead, many of my students shared that writing was a scary chore that they knew that they had to do but that they just wanted to finish as quickly as possible with a passing score. Even those students who loved to write and who shared their personal written narratives with me were nervous to do so because they dreaded my reaction to seeing their words on the paper. How could I celebrate their work while, at the same time, help them to improve their skills? That was the main question that often plagued me when I sat down to grade my stack of 140 student essays and narratives. I believe this question haunts most teachers of writing, and it is only further complicated by the introduction of standardized testing rubrics into our

PAGE 19

classrooms. Calkins (2005) concurs that such worries are on the minds of many teachers, even those with much experience, who sit with a student or with their papers and must decide how to evaluate their writing or their thoughts about writing. Calkins (2005) wondered if those observing teachers in action while conferencing or engaging in other evaluation practices thought that the process of evaluating writing was "No Big Deal" (p. 3). I share her curiosity and wonder if that was a view shared by my professors who, therefore, decided to focus their teaching more on writing instruction than on evaluation.

The teachers' quandary becomes one of a tug-of-war. Should teachers respond as they would like to or as they are supposed to? In some schools, teachers are mandated to use the rubric that accompanies the scoring on the state's standardized writing test for all writing assignments in their classrooms. In other schools, the use of such rubrics may not be mandatory, but it makes sense to the teachers to help the students get used to the scoring mechanism that will be used to determine their eligibility to be promoted to the next grade. In still other schools, the teachers attempt to standardize their authentic assessment evaluation methods with the hope that those methods will be accepted as a suitable alternative by the administrators who require the standardized methods to be used (Calkins, 1994). Through my own experiences with teaching and through my experiences with teaching undergraduates who grapple with assessment questions before stepping foot into a teaching position, I have wondered about the evaluation practices of current teachers of writing and whether those methods of evaluation are the result of best practices of writing evaluation as found in research, if they are the methods mandated by the schools where they teach, or if they are methods created by the individual teachers. This study aimed to answer some of my questions and to help foster a better

PAGE 20

understanding of the choices teachers make when faced with the task of responding to and evaluating student writing.

Statement of the Problem

Once students have navigated all of these learning experiences and have produced writing, a new dilemma is formed. Teachers must then decide how to evaluate and respond to that writing, which would seem like an ordinary everyday task for teachers, but there is a wide array of evaluation methods available to educators. For example, educators can choose to respond orally or in writing (Beach & Friedrich, 2006) and can use process or product measures to formulate those responses (Asker-Amason, Wengelin, & Sahlen, 2008). One measure of assessment should not be used as the lone method in evaluating a student's writing because it is important for the chosen evaluation to be an appropriate match to the assigned writing task (Morrow, 1997). It is, therefore, important for teachers to be familiar and comfortable with a variety of different assessment techniques (Morrow, 1997). In order to determine the best way to approach evaluation, it is essential to review the goals of assessment. Rhodes and Shanklin (1993) share that assessment should guide and improve learning, it should guide and improve instruction, and it should help to monitor the outcomes of instruction. If an educator's goal is to meet each of these requirements of assessment, then it is necessary to build a solid repertoire of evaluation techniques.

Research in the field of the evaluation of composition (Cooper & Odell, 1999; Huot, 1990; Odell, 1980) reveals that there are many challenges facing educators even when they are able to select an assessment method. Which methods they choose also depends, at least to some degree, on their orientation to teaching. In examining the

PAGE 21

research on the evaluation of writing, I found my orientation to teacher preparation is of the academic/personal nature (Feiman-Nemser, 1990). I believe that teachers must first know the academic area that they are teaching before they can be effective teachers of that material, but I also believe that it is important for teachers to create an environment in which their students are able to learn and grow independently (Feiman-Nemser, 1990). For example, if teachers decide to use written feedback as a method of evaluation, they must be careful to contain their remarks to only the most important areas needing attention so as not to hinder their students in their attempts to practice self-evaluation skills (Graves, 1983). In order to practice a more holistic view of evaluation, teachers may choose to have their students complete writing portfolios (Camp, 1985; Elbow, 1986), or they may utilize informal assessment measures using observations and anecdotal notes as a record of student progress in writing (Newkirk & Atwell, 1988). Regardless of the selection that educators make when evaluating a writing selection, it is important that the assessment be valid, meaning it measures what it intends to measure, and that it be reliable, meaning it produces the same results upon retesting (Murphy & Yancey, 2008). Some researchers (Huot, 2002) focus on reliability, but others (Williamson, 1993) stress that validity is a more important construct to uphold in writing assessment. Teachers must decide for themselves what they hold to be more important when selecting the methods of evaluation that they will use in their classrooms.

Another way to guide the decision-making process of an educator who is attempting to select a method to use in evaluating writing would be to follow the principles of authentic assessment (Ruddell & Ruddell, 1995), as that framework for evaluation encompasses all types of assessment methods and helps to give the teacher a

PAGE 22

full picture of the student's abilities through frequent assessment using a variety of methods that are deemed to be appropriate to the tasks at hand. Whether current educators choose to follow the advice of one researcher or another, they have many options of different evaluation tools (Beach & Friedrich, 2006; Newkirk & Atwell, 1988; Ruddell & Ruddell, 1995). Essentially, these researchers have shown educators what to do in terms of the evaluation of writing and have supported those suggestions with research showing that the methods are useful. What is lacking, then, is knowledge of whether or not practicing educators are actually implementing these research-supported evaluation methods. This study will examine whether the educator participants are putting this assessment research into practice during their evaluation of student writing samples.

Rationale for the Study

Standardized writing assessments are part of the classroom and often influence the instructional decisions of the teacher (Hillocks, 2003; No Child Left Behind, 2003). Within this context, a clear picture of the evaluation of writing and of how teachers approach the task of evaluating and responding to student writing samples is needed. Additionally, the validity of writing assessments is often questioned (Huot, 2002; Yancey, 1999), and there is a need to identify if teachers are choosing methods of writing assessment that are considered to be valid measures. Huot (2002) stresses the importance of everyone in the field of education coming to agree on a definition for the term validity that extends beyond the one cited by other researchers (Yancey, 1999) who simply state that if an evaluation measures what it is supposed to measure, then it is valid. Huot maintains that we need to broaden our requirements used in determining validity to

PAGE 23

include a look at the methods and theories used to guide the creation of the measurement. His argument is that any measure can be valid, but that the designation of being so does not mean that the information gained from the use of the measure is actually valid or useful (Huot, 2002). If it is important to educators that their evaluation of students' writing be valid, then they should be searching for effective methods of evaluating writing that are valid. What, then, can help to determine whether or not a measure is actually valid?

While reliability, the agreement of independent readers, is another indication of a measure's validity, that in and of itself cannot establish validity (Cherry & Meyer, 1993). Williamson (1993) insists that an instrument may be a valid measure, but the results do not necessarily provide an accurate reflection of the knowledge being tested. Moss (1994) believes that reliability is a necessary component of validity. In an effort to create a better test of validity, Guion (1980) established a tri-fold test of validity involving criterion validity (the relationship of a measure to an outside criterion), content validity (the domain of knowledge or ability being measured), and construct validity (the construct of the skill that is being measured). The idea was that having to meet three tenets of validity would ensure more valid measurements in education. Huot (2002) shares that a problem became apparent, however, when measures were being called valid even though they displayed only one of the three types of validity. Such claims were often touted in the justification of the use of multiple choice tests covering grammar and mechanics (Camp, 1993). A measurement of the actual construct of writing is missing in such tests, and that makes it difficult to say that the measure actually tells us anything about the students' writing abilities (Huot, 2002). Moving towards another view of validity, Messick (1989) required

PAGE 24

proof that the development of a measure included the consideration of theory, and evidence that the instrument was valid needed to be provided. Moss (1998) suggests that educators can ensure the validity of measures only when they constantly and consistently monitor the results and revisions of those evaluative tools.

While the validity and reliability of writing evaluation methods are certainly important, teachers seem to be most concerned about finding the methods of evaluation that effectively allow them to identify their students' areas of proficiency and areas where improvement is needed (Beach & Friederich, 2006; Cooper & Odell, 1999). I expected with this study to learn more about the conflict between choosing methods designated as being reliable and valid and those perceived to be the ones most effective for identifying the strengths and weaknesses of their students. There is a myriad of options in evaluation methods available to teachers of writing, so their primary goal should be to select those methods that best match the goals of their curriculum and that allow them to further individualize the evaluation process for each of their students (Beach & Friederich, 2006).

In order to gain perspective of the current status of the methods utilized by educators in their evaluation of and response to student writing, it is helpful to question and observe them during the teaching of writing and to talk with them about their process for evaluating samples of student writing. It is also helpful to examine their responses from the interview and their actions during the observations with respect to their responses on the questionnaire to see if their shared beliefs were consistent across all of the phases of data. The cooperation of a pair of fifth grade teachers at a private school provided a context in which I was able to gain a snapshot view of the evaluation practices

PAGE 25

currently in use by local teachers of writing, and I was also able to learn more about the beliefs regarding the evaluation of writing as shared by those educators.

Theoretical Framework

In order to assess students' writing, it is important to understand the ways in which students become literate. A sociocultural view of literacy is used to guide this study. Stemming from Vygotsky's (1962) social development theory and his belief that students' interactions with their environment and with the people around them shape their learning, sociocultural theorists believe that students do not learn and grow in isolation. Nor do they learn simply by receiving sets of rules and guidelines that govern the way that they should learn, read, or write (Prior, 2006). Students come to school with a wealth of knowledge and experiences already in their repertoire from which they draw as they negotiate (or mediate) their way through the school day and through their assignments and relationships (Englert, Mariage, & Dunsmore, 2006; Prior, 2006). Students are learning socialization skills as they grow and are also learning to assimilate their cultural influences with each new experience that they encounter, and the key is to recognize that while all children are having a similar growth experience as they navigate school and life, they are "individuated as they process their experiences in their own ways" (Prior, 2006, p. 55). Essentially, students can process their learning and living in such a way that allows them to grow as individuals even when they are learning in a group setting.

According to Vygotsky (1978), in order to be an effective teacher of children, it is important for teachers to share their knowledge with the learners. To put his views into perspective with relation to writing, he did not mean for teachers to simply tell students

PAGE 26

how to write with lists of rules and conventions to follow. Instead, he contended that the best way to foster the act of writing with children was by showing them how to write through explicit and extensive modeling practices, from brainstorming all the way to revision. Englert et al. (2006) and Hillocks (1984) share the view that teachers are charged with the responsibility of sharing their knowledge as experts with their students in order to assist their development as young writers. In order to effectively share their knowledge of writing with their students, educators need to be able to focus on meeting their students' needs in terms of writing proficiency and to challenge them to grow on an individual basis (Beach & Friederich, 2006). This study will explore the evaluation options that are in use and available to help educators identify students' points of need.

Teachers, however, are not the only source of information for children. Therein lies the challenge for educators. For assessment to be effective, sociocultural theory suggests that it cannot occur in isolation. Assessment of writing is optimal when it occurs in situations within each student's Zone of Proximal Development (ZPD) (Vygotsky, 1978) and when students are allowed to write with access to tools (i.e., related texts, peers, language clues, etc.) (Englert et al., 2006; Gee, 1992). Another view within the sociocultural perspective suggests that the assessment process is further complicated by the fact that teachers are often the dominant authors of students' writing assignments in that they are the ones in charge of telling the students what, when, and how to write, yet the students are the ones who are held wholly accountable for the resulting writing (Prior, 2006). While there are challenges when it comes to implementing school practices that are responsive to the sociocultural philosophy, it is important to value the everyday

PAGE 27

life-worlds of students and to find ways to take advantage of any connections between those worlds and the school environment in order to fully support students in their literacy development (Prior, 2006).

Purpose

The overarching purpose of this study was to describe educators' beliefs about the evaluation of student writing. The following variables were considered: public and private school settings, evaluation methods, and educators' beliefs regarding the evaluation of writing.

Research Questions

Specifically, the following research questions were addressed:
1. What are the differences in the ways in which educators approach evaluating student writing?
2. How do educators evaluate the effectiveness of their evaluation methods for judging the quality of students' writing samples?
3. What factors impact the evaluation decisions of educators?

Delimitations

The sample used in this study was a convenience sample that utilized volunteer educators who attended a local university-sponsored writers' conference with their students or who helped select their school's student participants for the conference. Because of this factor, the results of this study will not be generalizable to a larger population. However, by focusing in-depth on a couple of teachers, I was able to provide rich descriptions of the enactment of evaluation that may be transferable to other studies.

PAGE 28

Limitations

There are several threats to the internal validity of this study that may have occurred during the course of the project. The History Effect (Johnson & Turner, 2003) may have occurred during the study if the educators were exposed to any professional development workshops, university classes, etc. between the time that they received the questionnaire about writing and when they actually completed it or between receipt of the questionnaire and the interview and observations. The exposure to any new information about the evaluation of writing between the beginning and end of the study could have caused the teachers to answer the questions on the questionnaire or in the interviews in the way that their educational background and professional development experiences taught them that they should respond rather than responding in a manner that reflected their true behaviors. They may also have had difficulty articulating their personal beliefs, so they may have, then, resorted to giving the answer that they believed was correct. It is, therefore, possible that the self-reported data may not be accurate, but at the same time, the answers that they chose to give may allow a glimpse of what those individuals see as being valued in the educational setting even if that varies from their personal preferences (Johnson & Turner, 2003). I reassured the participants that their responses would remain anonymous to everyone except for me. This is a slight threat as it is expected that the professionals involved in the study answered truthfully as guided by the questionnaire's instructions relating to their current practices. However, because it is possible that the data provided may not be accurate, that possibility was considered during data analysis of the results.

PAGE 29

Another possible threat to internal validity would be related to instrumentation. Because a portion of the data relies on my remaining consistent over time and across all observations and all participants, it is possible that I, and therefore the instrument, could have changed slightly from one observation to the next. I closely monitored this threat to be sure that each stage of the study and each observation and interview was as close to identical to each other as possible. Video/audiotaping the observations and interviews allowed for additional reassurance of whether or not my goal of keeping the instrumentation the same was met. I may also have added to the threats to internal validity with expectancy effects. If I saw something because I expected to see it rather than actually seeing it, then that false data could impact the results of the study. Again, careful field notes as well as voice and video recorded observations and interviews clearly showed the events as they occurred. A review of those notes and recordings allowed me to double check the accuracy of the data used for interpretation.

Additionally, there are other possible threats to the internal validity of the study. The Hawthorne effect could be an issue as the educators in the observed lessons and the educator respondents to the questionnaire may have acted or responded differently knowing that they were part of a study than they would normally (Hunter & Brewer, 2003). Additionally, mortality became an issue, which threatened internal validity because not all of the participants who were selected to participate in the observations and interviews along with the questionnaire completed all components of the study. Any threats to internal validity will be addressed in the analysis of the results.

The external validity of the study was also threatened. Because of the convenience sample and the specificity of the topic studied, the results of this study are

PAGE 30

not generalizable to other populations or situations. Only replication of the results of this study can resolve the threat to external validity. This threat will be addressed again in the results section.

Operational Definition of Terms

Assessment: The process of gathering and discussing information from multiple and diverse sources in order to develop an understanding of what students know, understand, and can do with their knowledge as a result of their educational experiences.

Authentic assessment: A type of assessment in which students must perform real-world tasks (in this case, writing-related tasks) in order to demonstrate the meaningful application of essential knowledge and skills.

Authentic audience: The readers of a writing selection that are actually invested in the piece in some way. Having an authentic audience in mind during the writing allows students to have a purpose for the task.

Belief: Something believed or accepted as being true.

Educator: For this study, the term educator will encompass all teachers, administrators, reading coaches, curriculum specialists, etc. who are either involved in the selection of the participants for the authors' conference or who attend the conference with the children as chaperones and are, therefore, eligible to complete the questionnaire.

Evaluation: The way in which a reader responds to a piece of writing with the intent to give the writing an assessment of quality on its own or as compared to other selections of writing.

Grading: The process of assigning a numerical or letter score, based on a pre-determined scale, to a student's work.

PAGE 31

Narrative writing: Writing that tells a story with a beginning, middle, and end.

Private school: Parents pay tuition costs for their children to attend these schools, and the schools are not bound by law to administer the FCAT.

Public school: Schools that receive funding from the state of Florida to provide a free education for children. These schools may include charter schools.

Rubric: A form of evaluation used for writing that lists characteristics sought by the evaluator and presents a scoring system for each of those characteristics.

Standardized test: A test given to a population that is administered and scored in a consistent manner.

Writers' conference: A local, university-sponsored authors' conference where children's authors and illustrators offer presentations on writing to the children who attend. All local schools are invited to attend and to bring their students. A fee is charged for each participant.

Writing sample: For this study, the writing sample will be the narrative story that the students submitted to their teachers. The samples were requested so that the educators could evaluate them and choose the best ones for those authors to attend the authors' conference.

Importance of the Study

The evaluation of writing is a complex task. The current educational climate is one in which all students are required to write for standardized tests. With the increased use of assessment measures to monitor the progress of students and their writing ability, teachers are being asked to make instructional and evaluative decisions that are responsive to the current assessment-driven climate (Conca, Schechter, & Castle, 2004).

PAGE 32

Conversely, it is also necessary for students to be able to write in the real world for authentic audiences and authentic purposes. With these dual goals being present for our students, it is important that educators be fully aware of all of the evaluation methods at their disposal and for them to be able to select the method of evaluation that best fits the writing task at hand. Having a large repertoire of evaluation methods at their disposal means that teachers will be able to evaluate all types of writing done by the students and will, therefore, be better-equipped to show their students how to make improvements in all of the different genres of writing that they do while also learning where their instruction can be altered in order to reach all of their students at their point of need.

Unfortunately, much of the current available research centers on evaluating writing that results from a standardized test, and there is, therefore, a gap in our understanding of the best way in which to meld the techniques of evaluation of writing done for a standardized test with those used to evaluate writing done for authentic purposes. More research is needed to help us understand the struggle that educators face in attempting to wrestle with the variety of evaluation methods for writing that are available to them and to help us understand the factors that influence the evaluation decisions that they make. The overarching purpose of this study was to describe educators' beliefs about the evaluation of student writing. The inquiry was guided by the following research questions: (a) what are the differences in the ways in which educators approach evaluating student writing? (b) how do educators evaluate the effectiveness of their evaluation methods for judging the quality of students' writing samples? and (c) what factors impact the evaluation decisions of educators?

PAGE 33

Chapter 2
A Review of the Literature

In order to better understand how teachers feel about the myriad methods available to them to use in the evaluation of their students' writing, it is necessary to explore the current and seminal research related to that field. The overarching purpose of this study was to describe educators' beliefs about the evaluation of student writing. In reviewing the available literature, my inquiry was guided by the following research questions: (a) what are the differences in the ways in which educators approach evaluating student writing? (b) how do educators evaluate the effectiveness of their evaluation methods for judging the quality of students' writing samples? and (c) what factors impact the evaluation decisions of educators?

Introduction

This chapter begins with clarification of the concept of evaluation and moves to explore the history of the evaluation of writing as well as the important constructs of validity and reliability, both generally and as they apply to writing. Next, I examine the available research on teachers' beliefs about the evaluation of student writing, as those beliefs are the skeleton of this study. Understanding the research on teacher beliefs (Pajares, 2003) will assist in the analysis of the related data for this study. I then move on to review empirical research across the field of writing assessment in order to come to an increased understanding of the current state of educators' practices in the field of writing

This review was conducted over a four-month period and was limited to research which addressed teachers' beliefs about the evaluation of writing, the research that detailed the evaluative options available to teachers of writing, and that which explored different factors that may influence the evaluation process of educators. Seminal works referenced by many (more than ten) researchers or those that were recommended by university professionals were included in this review along with current research in order to gain an understanding of where the fields of writing evaluation and teacher beliefs began as well as of where they stand in today's educational settings.

Evaluation Defined

In reviewing the research related to the evaluation of writing, it becomes apparent that a definition of what the evaluation process involves is a necessary component of this endeavor. Evaluation should not be confused with grading. Grading assigns a specific number or letter to a completed selection of writing when the assignment comes to an end, while evaluation can be an ongoing process that may or may not result in a letter or numerical grade (Cooper & Odell, 1999). The assessment of writing, then, is a multifaceted process. It is one that should be done authentically and which should include a myriad of practices, including observations and collaboration, while being responsive to both the needs of the students and the goals of the curriculum (Ruddell & Ruddell, 1995). According to Cooper and Odell (1999), the aim of evaluation is to pinpoint the strengths and weaknesses of a writing sample, and in order to share the perceived strengths and weaknesses with the students, teachers need to be comfortable describing their response to the writing and need to be able to do so before a final draft is written.

Once teachers are comfortable with responding to writing, they are better equipped to assist their students in improving the writing. In order to determine whether or not their current evaluative practices are effective, teachers and others involved with education must consider three questions. First, "What assumptions are implicit or explicit in our evaluation procedures?" Next, "Are those assumptions consistent with current discourse theory?" Finally, "Will the result of using these procedures help us with the problem of improving students' writing?" (Odell & Cooper, 1980, p. 35). Unfortunately, many teacher candidates do not feel prepared to answer those questions because they feel that they are not strong enough writers themselves to be able to effectively teach and evaluate the writing of their future students (Gallavan, Bowles, & Young, 2007). How, then, do they go about the process of evaluation once they are teachers in their own classrooms? This review of the literature in the field of writing evaluation seeks to establish an increased understanding of the field through a look at its history as well as the present status of those who are involved in the evaluation of writing on a regular basis. Knowing what options today's teachers have available to use in the evaluation of writing and what the available research can tell us about the current state of writing evaluation in the schools will help increase understanding of the educators participating in this study and the evaluative decisions that they make when evaluating the writing of their students.

History of the Evaluation of Writing

The root of writing assessment in the United States appears to be a written examination that Harvard University implemented in 1873 in order to gauge the writing ability of the university candidates (Lunsford, 1986).

That writing test was the catalyst for other schools to design their own writing assessments, and those efforts led to the establishment and first meeting of the National Conference on Uniform Entrance Requirements in English, which was followed by the creation of the College Entrance Examination Board in 1901 (Lunsford, 1986). A further examination of the history of research on the evaluation of writing finds that assessment followed three trends. From 1950-1970, writing evaluation focused on objective tests, while the focus then shifted to holistically-scored essays from 1970-1986 and then again shifted to portfolio and programmatic assessment from 1986 through Yancey's review of evaluation research in 1999. Before the 1950s, there was little research available on the evaluation of writing because the majority of research focused on the teaching of literature with little regard given to how to assess the learning that occurred as students responded, in writing, to their lessons (Cooper & Odell, 1977). Even though some research was available (see Starch & Eliot, 1912), it was not a full body of research. A seminal work in the field of the evaluation of writing was published by Godshalk, Swineford, and Coffman (1966) and described a method that could be used in order to achieve agreement among independent raters as the evaluators were, at that point in time, attempting to make a move from being concerned with reliability to increasing the validity of writing assessments (Murphy & Yancey, 2008). After that point, most publications that focused on evaluation in writing were concerned with ways to maintain the current methods of the time (Huot, 2002). When the evaluation of writing was discussed, it was done so from a "practical stylist rhetoric" in which the focus was on the grammar, conventions, and style (Cooper & Odell, 1999, p. xii), or it was in the field of educational measurement, which conceptualizes writing as one small area of all educational evaluation (Huot, 2002).

The practical discussions matched the methods of the time, which were mostly multiple-choice tests focused on conventions (Yancey, 1999). Later, in the 1990s, there was an increased interest in writing evaluation, and two new journals, Assessing Writing and The Journal of Writing Assessment, were born, with much of the writing in the journals focusing on how to design evaluation methods for the assessment of writing (Huot, 2002). Huot (2002) believes that the field of writing evaluation still suffers from the negative impression that was formed when the field first appeared during the late 1800s as a way to determine not who was a skilled writer but rather who was taught by an ineffective teacher and was unable to write coherently. Instead of focusing on that negative impression, it is possible to move forward by finding a common ground upon which a majority of writing teachers can agree. One possible foundation can be found in the seven beliefs about the teaching of writing that Cooper and Odell (1999) view as being necessarily shared by writing teachers, and they believe that these concepts must be taken into consideration during the planning of writing instruction and of writing evaluation. Those beliefs are:

- Writing occurs in recursive stages that are different from writer to writer.

- Students should be allowed to do real writing (in paragraph form) from the beginning rather than being limited to sentences or phrases and working up to longer writings. Work can be done on the smaller segments of writing within the paragraph form.

- Writing assignments should have an authentic purpose and audience, which are clearly shared with students at the outset of the assignment.

- We do not write in the way we speak, and we must pay close attention to the differences between speech and the written language when we learn about writing.

- It is important to involve speech in the process of writing, and that can be done through conferences, discussions, workshops, etc.

- Writing is both open to interpretation and bound by rules in that students can write about anything and play with language within the conventions and formats that they are taught.

- One way to help students have better retention of the lessons they have learned about writing is to engage them in the practices of self-evaluation and self-reflection.

Of course, not all educators will agree with all of these points. The difficulty in finding a common ground stems from the variety of philosophies and beliefs held by educators (e.g., a belief in the importance of teaching writing as a process, valuing the use of the writing workshop, electing to not assign grades to student writing, etc.). Those who share my post-positivist views say that educators are free to agree or disagree with any of the points on the list and that they may even agree and disagree with a single item in the group or with the need to evaluate writing in any manner.

Validity and Reliability in the Evaluation of Writing

An important consideration for educators to acknowledge in their selection of an evaluation method is whether or not that method is valid and/or reliable.

According to Cooper and Odell (1977), reliability is an elusive concept in the assessment of writing. For example, they agree that educators can gather a reliable sample of writing from a student by asking them to write single pieces of writing in several different sittings that will then be scored by a group of raters. However, they believe that such a process only yields reliability of the student's writing ability in one genre (Cooper & Odell, 1977). That result is problematic because the success of a student's writing in one genre does not transfer to automatic success in another style of writing, and another reliable assessment would be needed for each genre of writing (Breland, Camp, Jones, Morris, & Rock, 1987). There is also an issue with the reliability of the scorers in such an assessment. While it is possible for groups of raters to come to a consensus on a score for a writing sample, they are all approaching the paper with different backgrounds, presumptions, and biases, which are all aspects that could influence the resulting score for better or for worse (Diederich, French, & Carlton, 1961). Validity is also difficult to obtain through writing assessments (Cooper & Odell, 1977). For example, Burgin and Hughes (2009) suggest that assessments which utilize a single sample of a student's writing as the source of evaluation are inherently dealing with a lack of content validity which results from a one-time snapshot of a child's writing. The dilemma, then, is whether or not that method of assessing writing should be used at all. This dilemma is one that has existed for decades, and it will likely continue to exist as long as such assessments are utilized. Three types of validity were originally established as being important to Cooper and Odell (1977). Predictive validity is the ability to predict the performance of the person being assessed at another time or the ability to show that the student's performance on a particular assessment matches other indications of their achievement in that area (i.e., grades from their teacher).

A second type of validity is known as content validity, and that is found when it is shown that a particular assessment is appropriate for measuring the writing that results from a specific program, curriculum, etc. Finally, construct validity, the degree to which the assessment actually measures writing ability, is also an important aspect of all methods of writing evaluation. Williamson (1993) later identified four areas which he believed to be necessary for teachers to consider in choosing an assessment method. Those were construct validity, contextual validation, authenticity, and the notion of consequence as a facet of validity. Over time, many new tests were developed, and the argument about their validity was inevitable. Simply asking whether or not an assessment is valid based on the different types of validity outlined by Williamson (1993) or Cooper and Odell (1977) is insufficient. Researchers must also ask themselves whether the methods they choose are direct or indirect in their approach to assessment (Murphy & Yancey, 2008). Indirect methods of assessment involve the estimation of probable writing ability "through the observations of specific kinds of knowledge and skills associated with writing" (Murphy & Yancey, 2008, p. 367) and have been criticized on the basis that consequential and predictive (Hughes & Nelson, 1991) validity can be lacking with such methods. Validity later came to be viewed as a single, unified concept rather than as having separate types of validity needing to be met (Camp, 1996, p. 136), but the pressure to ascertain the validity and reliability of all methods of evaluation for writing remains. Such pressure is good in that educators and researchers will always be looking for better ways to evaluate student writing.

The important thing is for them to be mindful of their time and to use their resources effectively while working to improve their evaluation methods so that too much time is not spent looking for a perfect method that may not exist. While there is, likely, no such thing as a perfect evaluation method for writing, the quest to find one will encourage conversations and the learning of new concepts and methods, which would not occur without the impetus to keep looking for a better method.

Teacher Beliefs About Writing

The beliefs held by teachers inform their instructional decisions and influence their actions (Ashton, 1990; Wilson, 1990). Pajares (1992) believes that the area of teachers' beliefs is lacking in the literature because it is a daunting task for researchers to determine a way to study a mental construct. Bandura (1986) linked self-efficacy (i.e., beliefs about yourself and what you can do) to what people feel comfortable and confident doing in their day-to-day lives. For example, if a person has a high self-efficacy as a teacher, then she is more likely to feel better about what she does in the classroom, and she can reinforce those positive feelings by watching the successes of her students or by comparing herself to her peers (Pajares, 2003). While much of the research in the field (see Bandura, 1997; Graham & Weiner, 1996; Pajares, 2003; Pajares & Johnson, 1996; Pintrich & Schunk, 1995) focuses on the influence self-efficacy has on the performance of students, there is some research (Berry, 2006) that examines the effect of those beliefs on the writing teaching practices of educators. It is important to note that even though research on teachers' beliefs as they pertain to writing instruction is slim, researchers (Nespor, 1997; Pajares, 1992) have consistently found that teachers generalize their beliefs across the subject areas and use their general beliefs about learning to guide their selection of instructional processes, materials, and even the conversations that they have with their students.

One area of interest when dealing with teachers' beliefs, then, is how their beliefs can shape their instructional practices. With respect to writing, Berry (2006) observed teachers working as a team in a classroom of learning disabled children. The teachers used their underlying beliefs about the abilities of their students, based on their specific disabilities and backgrounds, to determine how to respond to the students during writing instruction. For example, when some students offered a sentence during a shared writing, the teachers would accept the sentence as it was without requiring the students to do any revision to make it a complete and correct sentence. When other students offered a sentence, however, the teachers would encourage them to revise the sentence or would scaffold them in a process of verbal revision based on their belief that those particular children had the ability to be successful in that process (Berry, 2006). Similarly, the reason that the teachers approached writing as a group or in a shared format was to include everyone in a process where they could all feel successful in the creation of a class writing; instructional practice in writing was, therefore, moderated by the teachers' need for community. The teachers believed that a feeling of community and belonging was important, and therefore, they made instructional decisions during their planning for writing time that would be sure to address that belief (Berry, 2006). Even though these beliefs did not pertain specifically to the practice of writing, knowing how their beliefs affected their instructional practices in any way helps us understand other decisions that they make in other subject areas (Pajares, 1992).

In a separate study by Lee (1998), it was determined that not all teachers' instructional practices in writing are as close of a match to their beliefs as were the ones in Berry's (2006) study. Lee asked teachers about their beliefs with regards to writing and then asked them to share their current teaching practices. She found that teachers believed that discourse and learning about large concepts such as main idea, style, and structure were important but that their instructional practices focused on grammar and vocabulary, which displays a sharp dichotomy between what they believed to be effective in the teaching of writing and what they were actually doing in the classroom. Lee (1998) suggests two possible reasons for the split between what the teachers shared that they believed to be important and what they actually taught to their students. She suggested that perhaps the students' grammar needed much improvement simply to make their pieces readable, so the teachers chose to work on that aspect of their writing first. She also posits that it is possible that the teachers' skills in writing were not sufficient to teach the students how to improve their writing even though the teachers knew that was what they should be doing with them. This lapse between beliefs and practice is in alignment with Bandura's (1986) supposition that having a belief and knowledge of how to accomplish something does not ensure that it will be done in a successful manner. Another interesting finding was that some of the teachers' responses on the questionnaire differed greatly from their actual practices, which led Lee to believe that the instructors were choosing the answer that they knew would be correct in the eyes of the researcher, even though it was not an accurate representation of what they were doing in their classrooms. For example, she found that a majority of the teachers in the study believed that the teaching of writing should be explicit through such practices as shared writing and modeling.

Those teachers, however, did not actually institute those practices during their writing lessons. Further analysis of the data revealed that many of the teachers who did not attend to grammar and vocabulary at a higher level held a belief that those areas of writing were topics to be covered by teachers in the younger grades, so their teaching practices focused on the areas of writing that they believed to be appropriate for their students' grade level despite any apparent needs in the students' writing that might suggest otherwise (Lee, 1998). Graham, Harris, Fink, and MacArthur (2001) took a similar approach in another study in which they asked teachers to fill out questionnaires covering both their beliefs about teaching writing and their instructional practices. Their findings show that teachers who have a higher self-efficacy (i.e., who believe that they are proficient teachers of writing) feel more confident in their ability to teach writing, that their students spend more time actually composing, and that they are able to incorporate the teaching of grammar into their writing times without it becoming the focus of their lessons. Conversely, teachers with low self-efficacy are more likely to avoid teaching grammar, and their students spend less time engaged in writing activities because those teachers do not feel confident in their abilities to teach, assist, and guide their students through writing (Graham et al., 2001). A positive finding was that 94% of the teachers responding to the questionnaire felt confident in their ability to teach writing and to cause improvement in their students' writing. Unfortunately, however, on another question that asked whether or not the teachers believed that they could help students improve their writing when there were factors in place, such as a lack of discipline or the lack of a good home experience, that could be viewed as impediments to their progress, 42% of the teachers felt that they would not be able to effectively teach writing to those students (Graham et al., 2001).

The researchers concluded by suggesting that more research be done in an effort to see if teacher beliefs are "causally related to their ability to affect students' improvement in the area of writing" (Graham et al., 2001, p. 199). When a teacher comes across factual information that repudiates knowledge that she thought she had, she will revise that knowledge (Nespor, 1987). Beliefs, on the other hand, seem to be more permanent and resistant to changes based on new knowledge or experiences and, at the same time, beliefs have more power over the decisions that are made in day-to-day life than simple knowledge (Nespor, 1987). However, that exact theory causes other researchers to believe that beliefs are less important to teachers than is knowledge, as they are convinced that knowledge is more objective and likely to evolve to match new situations (Roehler, Duffy, Herrmann, Conley, & Johnson, 1988). Pintrich (1990) and Berry (2006) offer that perhaps knowledge and beliefs work together to inform teachers' practices. Another view, offered by Raudenbush, Rowen, and Cheong (1992), is that teachers' beliefs and feelings of efficacy may change based on the subject area as well as the perceived ability level of the students who they are currently teaching. Regardless of how knowledge and beliefs work together, or separately, it is generally understood that beliefs are formed and shaped by a person's life experiences. The earlier those beliefs are shaped, the firmer they hold, with little chance of being changed (Pajares, 1992). In the field of education, beliefs can be troubling as they color every action and every memory that teachers have (Pajares, 1992).

Beliefs within attitudes have connections to one another and to other beliefs in other attitudes, so that a teacher's attitude about a particular educational issue may include beliefs connected to attitudes about the nature of society, the community, race, and even family. These connections create values that guide one's life, develop and maintain other attitudes, interpret information, and determine behavior. (Pajares, 1992, p. 319)

When teachers face a situation in the classroom for which there is no clear-cut answer, they will rely on their beliefs to guide them into action (Kagan, 1992). That connection between beliefs and instructional action means that one teacher's classroom practices will be naturally different from another's, with each classroom environment and its lessons reflecting the personal beliefs of each particular teacher (Berry, 2006). Posner, Strike, Hewson, and Gertzog (1982) posit that if someone is unsure about her beliefs, then it is possible that new and plausible information can help encourage the creation of new beliefs as long as those would be in alignment with her current belief structure. However, it is also true that even those teachers who are open to the change in their beliefs may experience feelings of discomfort and frustration as the new, and conflicting, knowledge is first introduced to them (Florio-Ruane & Lensmire, 1989). Dempsey, Pytlikzillig, and Bruning (2009) agree that a transformation of beliefs is possible. In a study in which they worked with pre-service teachers specifically to raise their self-efficacies with regards to writing and writing assessment, they found that walking the teachers through practice evaluations of student writing, where they could receive feedback from experts on their assessment performance, led to increases in their self-efficacies and beliefs about assessing writing. Obviously, it is not an easy process to shake people's confidence in their beliefs enough to force them to make a change.

It follows, then, that Guskey (1986) found that professional development workshops were generally ineffective in bringing about changes in the belief structures of teachers unless the teachers had the ability to utilize whatever technique was being taught and then had the opportunity to see that it would have a positive impact on the students' achievement. Some belief changes will be welcomed while others will be resisted, but the important point is that change is possible (Florio-Ruane & Lensmire, 1989). Unfortunately, assessment has often been viewed as a negative, disruptive feature for the teaching of writing, and that view has transferred as a belief held by many teachers, and that type of belief will be difficult to alter (Huot, 2002, p. 9). Such a view is most likely made more difficult to ignore when reading reports like the one by Chait (2010) in which the author reveals that it is hypothesized that the removal of the bottom six to ten percent of teachers would lead to increases in student achievement (p. 2). Teachers may find it difficult to think positively about assessment when they are being personally assessed with the threat of losing their jobs. The questionnaire administered for this study asks teachers to identify their beliefs regarding assessment, and those responses will show if the participants have an unshakeable, negative belief towards assessment or if they have a more positive view. Knowing that the amount of writing instruction that their students will have is limited, teacher educators must be sure to provide quality instruction in writing when they have the opportunity to do so, and it is also helpful to link the teaching to theory (Norman & Spencer, 2005). The connection of their learning to research and theory enables the pre-service teachers to examine their personal beliefs with respect to theory and to then be aware of any differences that may exist (Berry, 2006; Pajares, 1992). Norman and Spencer (2005) completed a study with 59 pre-service teachers who were enrolled in two semesters of literacy coursework.

The pre-service teachers began by writing an autobiography about their life experiences with writing, and throughout the course, they continually examined their beliefs about writing. Norman and Spencer (2005) found that 80% of the pre-service teachers credited their former teachers with having an impact on their perceptions of themselves as writers. Generally, they had positive feelings about their elementary teachers but less positive feelings about their secondary and college-level instructors. In fact, those teachers who were perceived as having had a negative impact on the pre-service teachers' self-perceptions were labeled as "insensitive, critical, uncaring, and ineffective" (Norman & Spencer, 2005, p. 31). Of the 59 teachers involved in the study, 91% of them held a view that characterized writing either as a skill that a person is simply born to be good at or as a skill that could be improved with practice and effective instruction (Norman & Spencer, 2005). When questioned regarding how they believed that writing should be taught in the classroom, some common themes emerged. They believed that the writing should be connected to the experiences of their future students, they supported choice for students' writing topics and assignments, and they felt that it was important for students to receive positive comments and feedback on their writing (Norman & Spencer, 2005). Because the writing performance of students can be linked to their self-efficacy (Pajares, 2003), it is unfortunate that a large number of pre-service teachers have a low self-efficacy with regards to writing (Norman & Spencer, 2005). It is important for teachers to work on improving both the competence and confidence of their students because those two constructs are linked and necessary for success in writing (Pajares, 2003, p. 153).

Review of Pertinent Evaluation Research

In the beginning of the evaluation research movement, it was not recognized that different strategies were needed in order to effectively evaluate different genres of writing (Cooper & Odell, 1999). The predecessor to all current methods of writing evaluation came in the study completed by Diederich, French, and Carlton (1961), who were in search of a common approach that could be taken towards the assessment of writing. They found that 94% of the papers involved in their study received vastly different scores from multiple scorers who had the same background. This disparity in grades raised concerns about the validity of the grading method, so Diederich et al. looked for a new method of writing evaluation that could bring some agreement among graders (i.e., reliability). The result of their endeavor brought about the creation of the original five-point rubric containing the factors of ideas, form, flavor/style, mechanics, and wording, by which nearly every large-scale assessment of writing since 1961 has been strictly guided (Broad, 2003, p. 6). In 1980, Odell and Cooper examined four prominent methods of writing evaluation to gauge their effectiveness in assessing student writing. The four methods examined included the General Impression Scoring technique utilized by the Educational Testing Service, the Analytic Scale developed by Diederich et al. (1961), the assessment of relative readability developed by Hirsch (1977), and the Primary Trait scoring procedure developed by Lloyd-Jones (1977). The General Impression process allows writers to write on any topic of their choice. Then raters are required to compare written papers to one another rather than against a predetermined scoring guide (Charney, 1984). The Analytic Scale assigns a score to different elements of a writing, such as spelling, style, and grammar, which can then be added together to get an overall score for the paper (Diederich et al., 1961).

The Primary Trait scoring procedure adapts the rubric to fit the type of writing being completed as well as to reflect the specific topic of the paper. Relative readability (Hirsch, 1977) is a holistic scoring technique that examines how well the writer presents ideas on paper. This technique never gained acceptance and was not fully developed (Charney, 1984). It was determined that the Primary Trait scoring procedure, although not perfect, was by far the most useful of the four methods of evaluation and that it was the only one that was based on research in its creation (Odell & Cooper, 1980). The Educational Testing Service laid the groundwork for the creation of the modern techniques in the evaluation of writing (Broad, 2003), as it was the predecessor to holistic scoring and helped to pave the way for assessment to move from solely looking at grammar and conventions to also evaluating the content of essays (Yancey, 1999). While the shift to evaluating whole pieces of writing rather than only multiple-choice tests was promising, Huot (2002) noted that no scholars related to the field of English had a hand in the development of holistic scoring. That concern transfers over to many of the evaluation methods in use today. If the methods being used by teachers of writing and English were not developed by people with the same background and understanding of the intricacies of student learning and development in writing, then the methods were more likely created by professionals in the measurement field who are more concerned with the validity of a particular instrument than they are with the best way to measure student growth and understanding (Broad, 2003; Huot, 2002). Additionally, as found by Diederich et al. (1961), there would not be agreement between the raters in such a case, as the English and educational professionals would likely not come to the same conclusions about writing as would the measurement professionals.

Subjectivity of Raters

Some researchers (Cooksey, Freebody, & Wyatt-Smith, 2007; Wyatt-Smith & Castleton, 2005) have examined the different ways in which teachers implement a given set of standards for evaluating writing from an outside source when evaluating the writing of their own classroom students as compared to when they look at the writing of students who are unknown to them. They determined that instructors find it difficult to separate their personal knowledge of the students from the evaluation of their work as they want to include their insights into the students' abilities, effort, and growth in their final score. This research shows that if the goal is to have a score based solely on the criteria set forth for the evaluation (on a rubric or other form), then it is important that the evaluation be done without knowledge of the author. If the rater is allowed to know the students whose work is being assessed, then subjectivity must be accounted for in that evaluation, which would be especially important if that rater was responsible for evaluating work from unknown students as well as students who were known to him or her (Cooksey et al., 2007; Wyatt-Smith & Castleton, 2005). This type of summative assessment, which is based on a score, is completely different from the continual assessment that is done by classroom teachers, which is why it is difficult for those teacher-raters to give a score that would be greatly disparate from the students' normal writings. While all teachers should practice objectivity in an attempt to see their students' writing through the eyes of others, subjectivity (i.e., personal knowledge) during the writing evaluation process can be beneficial to both the rater and the writer.
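
To make the notions of analytic scoring and rater agreement discussed in this section concrete, the sketch below works through a small, purely illustrative example in Python; the scores, student identifiers, and the assumed 1-5 point range are invented for the illustration and are not drawn from this study or from the cited sources. It sums factor scores of the kind used in the Diederich, French, and Carlton (1961) analytic scale into a total for each paper for two hypothetical raters and then computes simple exact and adjacent agreement rates between them.

```python
# Illustrative sketch only: invented scores and hypothetical student IDs.
# Each rater scores five analytic factors (after Diederich et al., 1961)
# on an assumed 1-5 scale; factor scores are summed into a paper total,
# and the two raters' totals are compared for exact agreement and for
# agreement within one point ("adjacent" agreement).

FACTORS = ["ideas", "form", "flavor_style", "mechanics", "wording"]

# scores[rater][student] = {factor: points}
scores = {
    "rater_a": {
        "s1": {"ideas": 4, "form": 3, "flavor_style": 4, "mechanics": 2, "wording": 3},
        "s2": {"ideas": 2, "form": 2, "flavor_style": 3, "mechanics": 3, "wording": 2},
    },
    "rater_b": {
        "s1": {"ideas": 5, "form": 3, "flavor_style": 3, "mechanics": 2, "wording": 3},
        "s2": {"ideas": 2, "form": 3, "flavor_style": 2, "mechanics": 3, "wording": 3},
    },
}

def analytic_total(factor_scores):
    """Sum the five factor scores into a single analytic-scale total."""
    return sum(factor_scores[f] for f in FACTORS)

def agreement(rater1, rater2, tolerance=0):
    """Proportion of papers whose two totals differ by at most `tolerance` points."""
    students = list(scores[rater1])
    agree = sum(
        abs(analytic_total(scores[rater1][s]) - analytic_total(scores[rater2][s])) <= tolerance
        for s in students
    )
    return agree / len(students)

if __name__ == "__main__":
    for s in scores["rater_a"]:
        print(s, analytic_total(scores["rater_a"][s]), analytic_total(scores["rater_b"][s]))
    print("exact agreement:", agreement("rater_a", "rater_b", tolerance=0))
    print("adjacent agreement:", agreement("rater_a", "rater_b", tolerance=1))
```

Simple percent agreement of this kind does not account for agreement that could occur by chance, which is one reason the literature reviewed here ties reliability claims to carefully trained raters and to multiple samples of a student's writing rather than to a single scoring event.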

Assessment Options

In an effort to determine what goes through the minds of student writers, Emig (1971) worked with 12th grade students. Through think-aloud protocols, interviews, and an analysis of writing products, she attempted to gain insight into what the student writers thought about their writing assignments, what they thought about while they did their writing, and what thoughts they gave to revising those works. She determined that any writings that stemmed from a teacher's assignment received extremely little thought and that the time spent writing was very minimal, with virtually no revision being done by the student writers. Any writings that the students did outside of class on their own time for their own personal pleasure, however, captured their interest and were given more attention in the planning, drafting, and revision stages than were the writings done during class (Emig, 1971). Going outside of the individual classroom writings, Broad (2003) embarked on a study to establish a model for Dynamic Criteria Mapping (DCM) at a university where the instructors were required to evaluate the writing of their students in an English course utilizing a portfolio system. He observed meetings of the instructors and analyzed their thoughts and struggles as related to how to decide what to include in the portfolio, how much (if any) revision to allow their students to do, and what criteria to use in the assessment of the portfolio. Included in the study were conversations in which the instructors shared their Teachers' Special Knowledge (TSK), which Broad (2003) defines as "direct and exclusive knowledge of the student-author shared by an instructor with his or her trio-mates" [teachers shared their knowledge in groups of three] (p. 84).

This knowledge was used by the instructors to help one another make decisions about whether or not particular students would pass the class, and they shared advice with one another across their classes to help complete the evaluation of students both in their own classes and in the classes of their trio-mates. Broad's (2003) assessment of the use of the DCM model is that it provides great benefit for the instructors, but it leaves the students without a clear picture of how their writing was evaluated or what they can do to make improvements as they are not privy to the content of the conversations between the teachers. If the goal of the writing teacher is to see improvement in the students' writing, then it would be important to find a way to show the students how their writing is evaluated as well as how they can go about improving their skills. Without that information, it would be difficult for the students to grow as writers.

Authentic Assessment

A search for an exact definition of authentic assessment comes up empty, as it appears to be a term that is known by all and whose definition everyone assumes is agreed upon without it being stated (Petraglia, 1998). Because of that ambiguity, Gulikers, Bastiaens, and Kirschner (2004) set out to establish a concrete definition of the term and decided that authentic assessment is "an assessment requiring students to demonstrate the same (kind of) competencies, or combinations of knowledge, skills and attitudes, that they need to apply in the criterion situation in professional life" (p. 5). Additionally, Gulikers et al. (2004) also determined that authentic assessment exists within a five-dimensional framework which includes task, physical context, social context, result/form, and criteria. One of the advantages of utilizing authentic assessment methods is that they provide a high level of construct validity (i.e., they measure the things that they purport to measure) as long as they are appropriately matched to a task (Gielen, Dochy, & Dierick, 2003).

According to Birenbaum and Dochy (1996), in order for assessment to be authentic and to give a true picture of the students' capabilities, it is important for the assigned task to be engaging and to be linked to a real-life learning experience (i.e., it must have authentic applications to life). If those goals are accomplished, then students experiencing regular authentic assessments appear to become more motivated to learn after recognizing that accomplishment on such tasks can help them in their life outside of the classroom (Gulikers et al., 2004). This task is complicated, however, by researchers (Honebein, Duffy, & Fishman, 1993) who point out that the term authentic becomes relative to the situation that one is in at the moment. Additionally, authentic assessment can also be viewed as a process, rather than as a single goal or task, where students demonstrate mastery or improvement while engaged in various activities (Mueller, 2005).

Authentic Writing Tasks

It is important that teachers recognize the necessity of having all student writing assignments have a specific purpose (from the perspective of the writers) and that the students have a specific audience in mind when they are writing (Cooper & Odell, 1999). If students are not given the opportunity to write for real purposes with real audiences, it is unlikely that they will be able to reach the level of an expert writer (Gielen et al., 2003). The audience for whom the students write has long been an issue of discord in education. In a collection of essays written in 1965, Judine points out that the authors of the essays contained in her edited volume all selected different audiences for their students because they had a belief about who the audience should be and that they remained firm in their selection across all of the assignments that they gave, regardless of the nature of the assignment.

According to Sommers (1982), assigning an audience member for the writing would be a mistake. Instead, Sommers suggests that teachers pay attention only to preserving the goal of writing: the making of meaning. When students write with an audience member in mind, they automatically change aspects of their writing to meet the perceived expectations of the audience. When such writing is evaluated, the teacher must give feedback to tell the students how well they have achieved the goal set for them by the teacher rather than allowing the students to set their own goals based on what they believe to be necessary for their chosen audience (Sommers, 1982). That occurrence of the teacher taking control of the goals of the writing means that the students have only participated in a writing exercise rather than doing "real writing" (Probst, 1989, p. 75). This type of exercise is, of course, reflective of real writing for schools, but the key is for teachers to offer writing tasks that address the needs of both school and life. In a study where the researchers examined 2,000 pieces of writing in order to determine the audience for the writing as well as the purpose for it, they identified many factors influencing the writing of students (Martin, D'Arcy, Newton, & Parker, 1994). When considering audience, for example, the researchers found that students are influenced not only by who they are writing for but also by what they think of that audience. The findings of the study were that students viewed nearly half of their writing assignments as being intended for an "examiner" to read whether or not the actual intended audience was the teacher (Martin et al., 1994, p. 40). If teachers work to provide writing assignments that have meaning to the students and which challenge them cognitively, rather than allowing them to only write to assigned topics and/or audiences, they may find that the students will rise to the challenge by responding with more writing and more meaningful content than they would produce for a standardized writing assignment, even though writing with more freedom may be perceived as a more difficult task (Matsumura, Patthey-Chavez, Valdes, & Garnier, 2002).

This finding is in line with the findings of previous researchers (Berry, 2006; Pajares, 2003) who determined that increased confidence in academic skills leads to increased performance. The action of the teacher giving an assignment with choice embedded within the task shows that she has confidence in the students, which, in turn, boosts their own confidence going into the writing assignment. One suggestion for a way in which to work authentic writing into the curriculum comes from Martin (2003), who incorporated the writing of "occasional papers" into his writing requirements for students (p. 52). His only requirements for this type of paper were that the students write at least one every six weeks and that they write about something that sparked their interest in whatever format/genre they would like. The students then read the paper aloud in class, their peers give them verbal feedback about the content of the paper by sharing their personal connections to the topic, and then the class moves on to another task. The students do not revise the papers unless they want to do so, but they have the power to write what they want, when they want to do so, and in whatever format they desire (Martin, 2003). While this activity is not a structured one with copious teacher feedback attached, it is an exercise in authentic writing and is one in which the students are in control of their writing, and they are interested in it (Martin, 2003). Another way to encourage authentic writing, either within or outside of the classroom, is to introduce students to a real author who can share her experiences in writing with the class.

Teachers can invite authors to come to their school (Moynihan, 2009) or seek a local conference where authors will be present and talking with students. The experience of hearing how authors incorporate writing into their lives, where they get ideas, how they decide what and how to write, and how the process of revision successfully works for them can spark lasting interest in writing for students (Moynihan, 2009). The more opportunities that students have to think of a goal presented by their teacher and to make their own decisions about what kind of writing they would like to do in order to meet that goal, the more likely they are to take ownership of the writing task and to enjoy writing (Hudson, 1988).

Feedback to Student Writers

According to Cooper and Odell (1999), all steps that occur before students hand in their writing assignments pale in significance to the decision of what feedback to give to the students that will be the most helpful to them. Huot (2002) believes that the most important part of writing assessment is the act of actually reading and responding to writing. There is, however, a dearth of literature in the field of teacher response to writing (Freedman, 1985; Miller, 1994; Nixon & McClay, 2007; Phelps, 2000). With a lack of empirical research to fall back on, evaluators of writing must wrestle with the decision of how to best respond to student writers. They must also keep in mind that their responses to written work will have some type of effect on the writers. The process of giving and receiving feedback about writing can be difficult and tense for both the teacher and the student (Anson, 1989, p. 2). Because of that tension, teachers need to always be conscious of the specific student to whom they are responding in order to respond to them using terminology and a manner to which the student receiving the response will be receptive rather than confused or angered (Huot, 2002).

It is also important to individualize the comments to the paper rather than using standardized comments on all of the papers, as knowing that the teacher is responding specifically to their paper will help the students with the acceptance of the comments (Matsumura et al., 2002). Gee (1972) found that students who received negative comments or no comments at all on their writing began to write less and began to have less enjoyment while writing. Ideally, students will take the feedback from their teachers and will use the comments, critiques, or praise to strengthen their writing through additional drafts, but teachers often find it challenging to get students to make those changes once they perceive their writing as being completed (Beach & Friedrich, 2006). Even those students who do make changes to their writing based on the response from their teacher tend to make surface revisions without receiving detailed and/or quality comments from their instructor (Beach & Friedrich, 2006). One way for teachers to approach the issue of helping students to see the reasoning behind the request for revision is to work on making the comments that are given in response to student writing more encompassing and explanatory so that the student understands not only what must be fixed but also why it could be improved upon. For example, instead of telling a student that the writing is awkward, tell her why it is awkward. It is also helpful to explain why a certain aspect of the student's writing is effective so that she may understand how to transfer that effectiveness to other areas of her writing (Bardine, Bardine, & Deegan, 2000). It might also be helpful to emphasize that the final draft is still a draft, which indicates that it is still open to improvement (Haneda & Wells, 2000). Additionally, teachers can work to broaden their feedback from comments that encourage only standardization of writing (with respect to grammar or form) and move to comments that are geared toward the improvement of the overall writing (Matsumura et al., 2002).

One option to help teachers provide specific feedback to students is for them to write an end note for each student at the conclusion of their writing to give praise and/or criticisms. One issue with end notes is that there is a large possibility that the teacher's intentions, i.e., the messages that she wanted to give to the students, are not understood by the students in the way that she meant them to be understood (Smith, 1997). One way to help lessen this problem is for teachers to video or audiotape their comments as they read and respond to the writing, as those mediums allow for more thorough and complex feedback from the teacher with little effort as compared to what it would take to write the same comments for the students to read (Anson, 1997). Within that type of feedback, teachers who prefer to stay away from comments that could be taken only as criticism can shift to wearing a reader hat (Elbow, 1981) and then can respond to the students using words to describe how they feel or what they are thinking by sharing with the writer that they were "engaged, entranced, bothered, puzzled," etc. by what they are reading (Beach & Friedrich, 2006, p. 226). Ferris (1993) concluded that it is also important for teachers to individualize the feedback so that their comments are suitable for the intended recipient and so that the student writer is able to improve his or her writing based on those specific comments. It is also helpful for teachers to be aware of the students' goals for their writings so that their feedback does not undermine those goals (Brannon & Knoblauch, 1982). Unfortunately, some teachers struggle with differentiating their feedback from student to student (Graham, Harris, Fink-Chorzempa, & MacArthur, 2003).

Over time, it is quite likely that every educator develops her own system of commenting on writing, and if we were to examine various student writings from her classes, we would see a pattern in her responses (Smith, 1997). Educators often, however, keep their responses to themselves, so they have few opportunities to see the different patterns of responses that may exist in their community, which means that they are not faced with differences that could cause them to expand or make changes to their own repertoire of assessment comments. In order to study the effect of feedback (teacher and peer, in oral or written form) on the writing of students, Freedman (1987) surveyed teachers and students and completed ethnographies of two of the teachers. She determined that in order to have a positive effect on their students' writing, teachers allowed the students to remain in charge of their own writing while reminding them of the goals that they as teachers had established for them, and they were constantly and consistently available to the students whenever support was needed. It is suggested by Chandler (1997) that more research is needed to see how the written or spoken remarks made by instructors with regards to students' writing actually impact the student writers both "affectively and cognitively" (p. 274) so that we can better make adjustments to our assessment procedures. Regardless of the form, it is clear that teachers need to work on responding effectively to their students' writing in order for students to be able to improve their skills as writers (Beach & Friedrich, 2006).

Options for Assessing Writing

Many options exist when it comes to assessing writing. In order for educators to select the approach that they like best, they must first consider their own beliefs about assessing writing.

For example, do they believe that the most important part of assessment is their own opinion of the writing, or are there more facets that need to be considered when embarking on the evaluation of student writing (Mathison-Fife & O'Neill, 1997)? Is it sufficient to only provide written comments to the writer, or should other forms of feedback be explored and engaged? Odell (1999) suggests that one of the necessary skills that a teacher must hone to be an effective evaluator of writing is the ability to match the writing assignment to the learning goal. That is, in order to choose an effective evaluation, it is important to know what the goal of the assignment is. Because different genres of writing have different purposes, and often different audiences, it is necessary to evaluate those writings utilizing whichever method of assessment best matches the goals of the author and/or the teacher (Tompkins, 2008). For example, while a teacher might want to determine whether or not the student makes a good argument in a persuasive writing piece, such a criterion would not be applicable to a narrative story (Tompkins, 2008). Given the many different types of writings that are possible, it follows that there is an equally large number of evaluation options to assess those writings.

Holistic Scoring

Holistic scoring has a rich past of support from those who value reliability over validity in writing evaluation (Cooper, 1977; Diederich, 1964; Godshalk et al., 1966; White, 1985). In holistic scoring, the whole piece is examined, usually using a four- to six-point rubric, and a score for the writing is given based on the overall impression from reading the work (Cooper, 1977). Holistic scoring can be useful when a rank order of the work of a group of students is desired (Cooper, 1977). One important note is that in holistic scoring, the rater does not make any corrections or revisions to the writing being evaluated, and the evaluator is supposed to complete the reading in two minutes or less (Cooper, 1977).

Obviously, if a reader can only spend two minutes on a writing sample, the amount of response that exists from the reader is limited. "The high rates of reader agreement that testers sometimes brag of do not reflect the way the readers value texts but only how they rate them under special conditions with constraining rules" (Elbow, 1996, p. 121). While holistic scoring may not appeal to today's teachers or those who would prefer to have specific feedback about each paper being evaluated, at the time of its implementation, holistic scoring provided educators with a way to move away from multiple-choice tests towards allowing students to complete actual samples of writing while still being able to prove that such a method was reliable (with carefully trained raters) and valid (Cooper, 1977).

Rubrics

With the use of rubrics, it is important to be sure that the rubric evaluates what the teacher wants it to evaluate and that it is not actually emphasizing the rules of writing rather than the students' efforts to play with language and to share a message with an audience (Wiggins, 1994). In order to ensure that rubrics are relevant, valid, and fair and that they support construct validity, the educator in charge of implementing or developing that set of guidelines measured by the rubric must pay close attention to whether or not the categories on the rubric match the areas brought to the attention of the students by their teacher during their work on that assignment, as it would not be fair to expect students to perform at a high level in areas that were not previously taught to them (Broad, 2003, p. 11). One way to foster a better match between the teacher's goal for the assignment and the evaluation of the assignment is for the teacher to develop her own rubric (Wilson, 2007).

Teachers must also be sure to work with their students in order to ensure that the student writers understand the criteria on the rubric well enough that they are able to apply it to their writing for a self-assessment before turning it in for the teacher to evaluate (Beach & Friedrich, 2006). Similarly, it is important to remember that, simply by design, rubrics pull the evaluator's attention to certain aspects of the writing and thereby reduce the likelihood that the writers will think through the writing as they normally would when writing without having to address specific points that they know will be assessed (Delandshere & Petrosky, 1998). In some cases, teachers utilize a standardized writing rubric that comes from their administrators, district, or the writers of their local standardized writing test in order to promote a standardization of evaluation. Even some teachers who report feeling comfortable making changes to such a rubric may not do so because they feel that they should follow the same evaluative path as the teachers around them (Nixon & McClay, 2007). It is left to teachers to decide if rubrics work within their writing curriculum or if they feel that the rubric constrains their response to the writing in such a way that they could better evaluate their students' writing with another method of response (Wilson, 2007).

Teacher Conferences

Conferences can take place at the individual or group level, can occur at all of the different stages of writing, and are an effective method to use in both the teaching and evaluating of writing (Murray, 2004). In order to make effective use of conference time, it could be beneficial for students to be allowed to have the first word in the assessment process by giving them the opportunity to include a self-assessment or self-scoring of their writing with the assignment for the teacher to read before beginning her own assessment (Mathison-Fife & O'Neill, 1997; Murray, 2004; Sommers, 1989).


assessment (Mathison-Fife & O'Neill, 1997; Murray, 2004; Sommers, 1989). Several decades ago, Beaven (1977) found that teaching her students how to self-evaluate their writing helped them to become more independent and more cognizant of the strengths and weaknesses within their own papers. Another helpful method of evaluation is to teach students how to effectively point out strengths and weaknesses in their own writing and in the writing of their peers during conferences with their teacher and with one another (Cooper & Odell, 1999; Huot, 2002; Morrow, 1997; Spears, 1999).

During conferences between the teacher and student, teachers can share what they hope the students will do with the feedback that they are given and can hear the students' responses to those thoughts (Frank, 2001). Such a conference is also a great time for teachers to provide one-on-one modeling as a way to help the students see how they can go about the process of assessing their own writing (Beach, 1989). Because of the wide array of skills present in all classrooms, attempting to show students how to improve their work through large-group modeling of selected writing skills will likely be ineffective, so the individual conference provides a great platform for that teaching intention (Beach, 1985). Teachers can structure the conference to meet the needs of each individual writer by using language best suited to the current student and by focusing on that student's writing. An individual conference provides an opportunity for the teacher to guide the student through the process of identifying his or her own challenges in the writing and to find areas of strength while listening to the student to get a sense of his or her current level of skill in the process of writing (Probst, 1989).


Peer Conferences

When time is short, or when a teacher would like to implement another type of evaluation method in the classroom, peer conferences can also help students identify areas where they need work as well as what is already working for them. The key to successful peer conferences is for teachers to take the time to train their students so that they know what they are looking for in their peers' writing (Berg, 1999). However, if teachers use modeling of a peer conference as a way to train the students, they must be aware that it is possible that the students will take that modeling literally. They may utilize the exact language and guidelines viewed in the modeled conference to the extent that they limit their own language and natural conversations about writing with their peers (Swaim, 1998). One way to mitigate the effects of the teacher's modeled peer conference is to have students share their work with one another in small groups and to have them model how they went about assessing their own writing. When the whole group shares their individual processes of self-assessment, it starts to become obvious that there is no one correct way to approach writing or self-assessment (Beach, 1989). Teachers can also encourage students to share with one another the emotional impact that their writing has on one another and the content-related questions that they thought of as they read the story, rather than attending to grammar or form. Opening up that type of dialogue could widen the scope of a peer conference and could lead to an increase in the possibilities for revision (Swaim, 1998). As the students become more skilled at reading and evaluating their peers' work, they begin to appreciate differences in approach, content, organization, flavor, and wording and begin to realize that those differences are to be expected among writers (Beaven, 1977, p. 149). That


realization allows students the freedom to then explore their writing in their own way and style in a comfortable environment, and it simultaneously motivates them to write more so that they have more to share with their friends (Beaven, 1977). There is another view of peer conferencing, however, which holds that instead of empowering student writers, peer conferencing about writing actually comes to represent just another checkpoint that students must pass through on their way to a finished product (Martin, 2004). In order to lessen the trepidation felt by students going into peer conferences, teachers can implement the previously mentioned methods in an effort to increase the usefulness and friendliness associated with the conferences.

Portfolios

Portfolios are a useful tool to use as part of an assessment plan for writing. They provide "a story of where children have been, and what they are capable of doing now, to determine where they should go in the future" (Morrow, 1997, p. 36). They also give the teacher "trustworthy evidence of a writer's ability" (Elbow, 1996, p. 120). They accomplish this goal by holding records or copies of a wide variety of assessments or assignments, such as writing drafts, observation checklists, audio or videotapes, etc., that were all completed over a period of time (Morrow, 1997). They can even provide a way in which teachers can involve students in the selection and evaluation of their own writing (Dyson & Freedman, 1991).

While Huot (2002) believes that portfolios are effective and that their use is in line with effective practices of writing evaluation, he also believes that in too many instances the use of portfolios is being standardized, with the decisions regarding how they are used being taken away from the teachers and given to administrators, state


officials, or other people who do not have a personal stake in the classrooms in which they are actually used (Broad, 2003; Callahan, 1999; Huot & Williamson, 1997; Murphy, 1997). One example of such a loss of control by classroom teachers is found in the Kentucky portfolio assessment, which that state uses in addition to a standardized writing test. Writing in the portfolio must include the following types of writing:

literary (poems, stories, children's books, plays, etc.), personal (narratives, memoirs, etc.), transactional (arguments, proposals, historical pieces, research-focused papers, etc.), and a piece reflecting on the writer's views of his or her development as a writer or the specific papers in the portfolio or some other dimension of the writing. (Hillocks, 2008, pp. 325-326)

Even though the classroom teachers are unable to change the requirements for the portfolio, the students in Kentucky are required to be exposed to a multitude of genres, to such a level that they can produce their own works in those genres, rather than being limited to only the one or two genres covered by the standardized writing assessment (Hillocks, 2008). While the state officials in Kentucky subscribe to the advantages of portfolios, other officials, administrators, and teachers may resist the implementation of portfolio assessment due to their belief that the evaluation of portfolios is "soft, inexact, not rigorous, too mushy, or too slippery" (Larson, 1996, p. 278). These views are especially held by those officials who are concerned with the ability to compare students between schools or across states (Larson, 1996).

Fourteen years after the implementation of the portfolio assessment in Kentucky, Callahan and Spalding (2006) examined the status of writing and writing assessment in that state.


They determined that, as a result of officials and administrators wanting the teachers to understand how best to facilitate the students' writing for the portfolios, the teachers in Kentucky experienced quality professional development courses that were based on the principles of the National Writing Project. Additionally, at many schools the majority, if not all, of the teachers were required to be a part of the springtime assessment of the student portfolios. The mix of the continuing professional development experiences and the portfolio evaluations opened the door for the teachers to spend a great deal of time talking about good writing and evaluation (Callahan & Spalding, 2006). Even better, the teachers were not the only ones talking about writing. The state collected examples of portfolios and made them available to the public for viewing, so the students were also able to engage in discussions about the variety of writings that they read in others' portfolios (Callahan & Spalding, 2006). Clearly, some positive effects have come about since the implementation of the portfolio assessment in Kentucky, but there are also teachers who do not like the time-consuming process or the amount of time that is given to writing over other subject areas in an effort to ensure that all students have completed works in all of the required genres (Callahan & Spalding, 2006).

Standardized Writing Assessments

While a common argument against standardized writing tests is that they do not test what teachers should be teaching in writing, White (1996) suggests that the protests over testing are, in fact, the rumblings of teachers who would prefer to have no tests at all and have no bearing on whether or not the tests in question are actually problematic. There is no way, however, to ignore the fact that the tests are prevalent and that teachers must deal with them. In recent years, the push to hold students and teachers


accountable to standards and to administer standardized writing assessments has led some researchers to believe that this period of time will later be referred to as the "standards period" (Hoffman, Paris, Salas, Patterson, & Assaf, 2002). Many students experience a standardized test of their writing in which they are required to complete a single writing sample for evaluation, but some researchers question whether that is an effective way to measure students' writing ability (Camp, 1983). The results of such tests often do not inform teachers of what the students know and can do, and without that knowledge resulting from an assessment, it is difficult to verify its validity (Burgin & Hughes, 2009).

One aspect of vital importance that should be considered when selecting a standardized assessment is this: is the test valid for these particular students? In other words, does the material assessed by the test align with the curriculum that is currently in place in the classroom (Morrow, 1997)? That is an impossible feat according to Sharton (1996), because many classroom teachers support the use of process writing by their students, and with that being a common classroom practice, when coupled with its underlying philosophy, it is not possible for a one-shot standardized writing test to achieve construct validity.

There are advocates (Schaffer, 1995) for what many call formulaic writing (i.e., the type of writing supported by the one-shot assessments of writing), and the reasoning behind that support is that having a guide allows students to be successful in completing an essay with many details in an organized format. The Schaffer method is just one method available to teachers who are looking for a form to use with their students in preparation for the standardized writing test. The advantage of the Schaffer method


is that it incorporates more than the standard five-sentence paragraph, in that Schaffer's paragraphs contain eight sentences, which include commentary statements as well as supporting details. However, just because this format is more detailed than a five-sentence paragraph does not mean that it is teaching the students how to be writers (Wiley, 2000). Because it is still a formula that must be followed exactly, students are unable to work out their own approaches to the writing assignment at hand. They cannot play with language and decide how they want to address each detail on their own. Instead, the students learn that ideas do not play a part in deciding on the structure of their writing at all, as one structure will work for all ideas (Pirie, 1997). Regardless of the approach to writing that a teacher takes with her class, it is important that she monitor her students to be sure that they do not begin to think that there is only one way or one form allowed for their writing products. Students must know that there are more kinds of writing than just a five-paragraph essay (Wesley, 2000).

Cooper and Odell (1999) believe that standardized writing tests can have a negative influence on classroom instruction in writing by becoming the driving force behind the writing curriculum in the classroom, but they also point out that there are some standardized assessments that utilize draft writing, the formation of portfolios, and other practices that they perceive to be positive activities in the writing classroom. They also share that even if all writing instruction becomes driven by standardized tests, then at least all students will now be writing, whereas in the past many students never wrote full writing samples of any kind. Morrow (1997) believes that, regardless of one's views related to standardized testing, it is important to recognize that such tests are here to stay, so the task becomes one of attempting to get the tests changed so that they better


match the content of the classrooms while avoiding the urge to make the tests overly important to the students.

Strengths and Limitations of Available Literature

The body of literature representing the work in the field of writing evaluation displays a lack of a theoretical basis for our assessment practices that has persisted for many years (White, Lutz, & Kamuskiri, 1996, p. 105). While there are improvements that could be made in the research in the field of assessing writing, it is also important to note that a great deal of useful information is available as well. For example, Huot (2002) presents a full examination of the history of the researchers who wish for greater validity in the evaluation of writing and of the conflict that exists between those in measurement and those in education who emphasize different aspects of evaluation. Many other researchers (Cooper & Odell, 1977; Diederich et al., 1961; Williamson, 1993) have also added to the field of reliability and validity in the evaluation of writing, but no consensus has been reached. Yancey (1999) provides a thorough look at the issue of the validity of writing evaluation methods from another view.

The literature available on the various evaluation methods utilized by educators also shows agreement among many researchers (Gulikers et al., 2004; Huot, 2002; Morrow, 1997) that it is important for all students to be evaluated in a myriad of ways in order to give a complete picture of their strengths and weaknesses. For teachers wishing to research the multiple options that may be available to them for evaluating the writing of their students, there is a wide body of research (Beach, 1989; Huot, 2002; Martin, 2003; Murray, 2004; Wiggins, 1994; Wilson, 2007) available to assist them. However, more research is needed regarding the impact that standardized


assessments have on the evaluation of writing (Anson et al., 2008; Callahan & Spalding, 2006; Spandel, 2006; Wesley, 2000), and that research could affect the way that teachers view the different evaluation options available to them.

Essentially, this study was conducted in order to extend the research already completed in the area of the evaluation of writing. More specifically, this study also extends the research that is currently available on teacher beliefs and adds to the limited selection of research available for those interested in teacher beliefs about the evaluation of writing. Hopefully, this study will lead to future studies whose findings can be generalized to other populations of teachers in order to truly come to an understanding of how teachers approach and respond to student writing and how they select their evaluation methods for student writing samples.


Chapter 3

Method

Research Design

The overarching purpose of this study was to describe educators' beliefs about the evaluation of student writing. The inquiry was guided by the following research questions: (a) what are the differences in the ways in which educators approach evaluating student writing? (b) how do educators evaluate the effectiveness of their evaluation methods for judging the quality of students' writing samples? and (c) what factors impact the evaluation decisions of educators?

As I began to consider the best ways in which I could examine how educators approach the evaluation task, what factors impact their evaluation decisions, and whether or not those approaches are aligned with their beliefs about the best ways to assess student writing, I realized that I would need to utilize mixed methods throughout my study. A mixed methods approach (Maxwell & Loomis, 2003) was utilized in that qualitative data were collected and examined through content analysis (Krippendorff, 2004), and quantitative data were collected and examined through statistical analysis (Patton, 2002). The qualitative portion of the study was naturalistic and descriptive in nature in that I carefully observed educators in action in their natural setting without purposefully changing their environment or subjecting them to any experiments (Patton, 2002).


I utilized a questionnaire (Appendix A) to gather data about the educators' beliefs about evaluation and writing. However, in order to uncover the thoughts behind the evaluation decisions made by educators when examining student writing, it was necessary to also observe the teachers in the act of evaluation and to talk with them about their beliefs (Hesse-Biber & Leavy, 2005). While reading the responses on a questionnaire can provide details about their general beliefs, giving the participants the opportunity to answer questions at any length, without having to worry about writing the responses on paper or about being confined to a small selection of choices for responses, allowed me to glimpse their view of the world (Patton, 2002).

Similarly, while I observed the educators in their natural setting and completed the interviews in a setting of their choice, it is possible that some of the behavior or some of the beliefs shared differed from the educators' actual beliefs. The way to learn about their inner thoughts, their constructions of reality, which they may not have verbally shared during an interview or in a group setting, was to give them the opportunity to share those in privacy on the questionnaire (Patton, 2002). I attempted to capture the beliefs and experiences of all of the participating educators through the different phases of data collection (questionnaire, interviews, and observations) and during analysis so that all of their realities could then be examined together in order to create a type of understanding of their experiences (Patton, 2002, p. 98). The combination of the questionnaire, interviews, student writings, and observations provided multiple data sources and enabled me to check for consistency in the data from each of the sources during the analysis of the data.


Context of the Study

The purpose of this study was to describe educators' beliefs about the evaluation of students' writing. I used a convenience sample from a local writers' conference for students. The conference had student participants from first through sixth grades, and children's authors and illustrators offered presentations on writing to the children who attended. The conference is held in the conference center of a large, public university in the southeastern United States. It is a university-sponsored outreach program to surrounding school districts. Over 250 schools located near the university are invited to attend and to bring their students. A fee is charged for each participant. The schools that participate in the conference generally use one of six selection methods to select participants: teachers select the most improved writers, teachers select the best writer(s) in their classes, teachers select children who love to write, children self-select to attend, teachers have classroom contests, and whole schools have contests (personal communication, February 17, 2010). Every educator participating in the event was invited to be a participant in this study, and all who agreed were asked to complete a questionnaire.

The questionnaire portion of the study was sent to all of the educators who accompanied students to the conference or who taught writing to the students from their school who attended. Using the conference contacts as my liaisons to the individual schools, 114 surveys were distributed to fourteen different schools and one homeschooling family across three different school districts. One additional school was from another district, for which I was unable to obtain permission to conduct research, so that school was not included in this study. Normally, there are many more participating schools, but this was the first year that the conference returned after a three-


year absence. Permission was obtained through the U.S.F. Institutional Review Board, the headmasters/principals of the private schools, the homeschooling parent, and the school districts for the participating public schools.

Among those participants, I invited two schools (one public and one private) to participate further in observations and interviews so that I could learn more about the beliefs held and the practices used by educators during the evaluation of student writing. These were the only two schools which indicated that they utilized a selection process involving the evaluation of student writing to choose their student participants. One of those schools, a private school, agreed to move forward with the study and completed interviews, allowed me to observe a classroom lesson, and shared student writing samples with me. The contact from the other school, a public school, did not respond to subsequent requests for permission to conduct the observation and interview portions of the study.

Phase One: Questionnaire

Writing and Evaluation Questionnaire.

This instrument (see Appendix A) was developed for this study. A pilot study was conducted prior to the start of the actual study in order to determine if there were any questions that needed revision before being distributed to the educators who participated in the conference. At the advice of Dillman et al. (2009), the pilot study used a group of people from varying specialties rather than just educators in order to take advantage of the different opinions and feedback that professionals of other specialties, with new lenses and points of view, could offer. That group did, for example, ask for a distinction to be made between terminologies (such as grading and assessing) that would not have been


likely to draw attention from an educator who was used to hearing those terms. Once feedback was received from that group, an additional pilot study was done with a small group of educators to be sure that the instrument was understandable to those in the field for which it was intended. I analyzed the resulting comments and questionnaire responses and made the necessary revisions to the instrument before beginning the study (see Appendix B).

Development of the questionnaire.

The questionnaire included multiple-choice, open-ended, and Likert scale questions. The content of the survey questions was based on the review of the literature, in which I searched education-centered electronic databases for research based on the following key words: beliefs, writing, evaluation of writing, grading writing, standardized writing, assessment of writing, options for evaluation of writing, etc. I followed those searches with a hand search of many books and educational journals (see References section) to find related research in the field of the evaluation of writing and teacher beliefs regarding writing. I continued searching the databases, books, and journals until I repeatedly came across the same studies and authors and until I felt that the review was as encompassing as possible with respect to the areas that I was interested in studying and that some degree of saturation of the categories had been achieved.

Following my review, I created a simple tally to determine which areas of writing assessment were most frequently discussed in the literature. Because many of the themes (the use of portfolios, for example) were repeated time and time again in the literature at a much higher rate than others (such as the use of a red pen during evaluation), I was able to see a trend of the more popular and well-researched evaluation techniques and


incorporated all of those into the questionnaire. I also determined 1) what terminology would be accessible to the participants, 2) what the researchers agree upon as being the best practices of writing assessment, and 3) what the researchers view as controversial or less effective practices that are used in evaluating student writing. I relied on the writings of Cooper and Odell (1977; 1999), Culham (2003), Hibbard and Wagner (2003), Huot (2002), Murphy and Yancey (2008), Murray (2004), and Tompkins (2008) when formulating questions for the questionnaire. These researchers and texts presented nearly all, if not all, of the themes in their work, with clear explanations of what each evaluation method involves along with a body of research to support their methods.

While my synthesis of the research informed the decisions made during the construction of the questions, I focused on the language used by Culham (2003), Hibbard and Wagner (2003), Murphy and Yancey (2008), Murray (2004), and Tompkins (2008) in order to utilize what I considered to be their teacher-friendly terms for the questionnaire. I then confirmed their positions on the evaluation of writing using the empirical research of Huot (2002) as well as Cooper and Odell (1977; 1999).

I was guided by the research during the formulation of each question to be sure that as much of the assessment process as possible was covered by the questions and that I used easily understood language in the writing of the questions. For example, the idea that portfolios are valid tools to use in the assessment of writing was echoed in the work of Culham (2003), Hibbard and Wagner (2003), and Tompkins (2008), but that notion was challenged by Murphy and Yancey (2008). I included portfolios on the questionnaire in an effort to determine whether or not they are being used by local teachers in the assessment of their students' writing and, if so, to find out why they are choosing to use


that method of evaluation. Even though there is disagreement among some researchers regarding the validity of the use of portfolios in the evaluation of writing, they were mentioned repeatedly in the literature as being a commonly used evaluation tool. Because of the frequency of the references to the use of portfolios, I decided to include them in the questionnaire. Similarly, one subject that is repeated in the literature is the impact that standardized testing has on the teaching and evaluation of writing (Anson, Perelman, Poe, & Sommers, 2008; Callahan & Spalding, 2006; Hillocks, 2003; Scherff & Piazza, 2005). In order to see if the testing had an influence on the writing assessment practices of local teachers, that topic was included on the questionnaire as well.

The mix of question types on the instrument helped to ensure that every question was addressed using whichever format most effectively measured the participants' responses (Dillman et al., 2009). The design of specific questions was tailored to the advice of Dillman et al. For example, the scalar questions have between four and seven categorical response options because a range of four to seven options is recommended as being the most effective number of choices for that type of question (Dillman et al., 2009, p. 137). Similarly, I revised the first draft of the questionnaire in order to eliminate some of the open-ended questions, as Dillman et al. (2009) suggest minimizing the number of open-ended questions because of the higher non-response rates that can occur when a large number of open-ended questions are included on a survey. However, I did not omit all of the open-ended questions because, as Dillman et al. also suggest, survey constructors should be careful to use the question type that will provide the information that is needed. For several of the questions, the only way to truly get a sense of the


respondents' feelings and thoughts about the topics addressed was to allow them to write their responses in their own words.

The questionnaire includes a demographic section that requests information about the educators' position and the number of years that they have taught writing to students. If the participants did not teach writing to students, they were able to indicate whether or not they had ever taught writing and were asked to share their current position. Following the advice of Dillman et al. (2009) again, I placed an open-ended invitation for the respondents to share any information that they would like regarding the evaluation of writing at the end of the survey, with space available on that page for them to use in writing their response.

Pilot study.

A pilot study was conducted with a group of ten local professionals from a variety of fields of work such as health administration, insurance, environmental preservation, medicine, and computer sciences. They completed the questionnaire and then were asked the interview questions. The participants of the pilot study were, at the suggestion of Dillman et al. (2009), from fields other than education so that they could point out different issues, ambiguities, or unclear questions that might not be noticed by educators. Unlike teachers, they were not used to hearing the terminology included in the interview. By using a panel of other professionals, I hoped to prevent teachers from filling in the blanks with conjecture rather than realizing that they should ask for clarification.

The pilot helped to indicate whether or not there were questions on the questionnaire or in the interview that needed to be revised because of issues of clarity, whether there were answer responses that did not match what the participants wanted to say, and whether or


not there were some response categories that were not used by the respondents and others that they would have liked to use but were not present (Dillman et al., 2009). While the respondents were not able to answer all of the questions in the interview, they were able to provide feedback to help determine whether or not all of the questions made sense. In addition, conducting this pilot study also helped give an idea of possible non-response items or other problems that could have appeared if the questionnaire and interview were utilized without first testing the items with a small pilot sample (Dillman et al., 2009).

One area in which the non-educator pilot group helped to improve the questionnaire was the word choice. Three of the respondents asked for clarification regarding the terms evaluation and grading and what those words encompassed. When the word assess was substituted in numbers two and ten, the respondents indicated that they better understood the questions. In question thirteen, the non-educator respondents were extremely helpful in requesting clarification of the descriptions provided for each example of a writing evaluation method. Five people asked that I further explain what a checklist was by adding examples to the description. Two other respondents needed clarification on what students accomplish during a peer conference. Those changes were made, and the resulting descriptions were much more thorough than the original ones.

Once the initial pilot study was completed and the suggested changes were made, the revised questionnaire and interview protocol were piloted with a group of eight educators who ranged in experience from a first-year teacher to a retired teacher with forty years of teaching experience. The educators were able to bring another point of view to the examination of the questionnaire. For example, when looking at item


number ten, two of the respondents pointed out that there may be another source of learning assessment methods that was not listed in the question's options and suggested that an "other" option be added. None of the respondents in the first pilot mentioned anything about that issue, as they were less aware of the different learning environments available to educators. When they considered item number thirteen, three respondents asked that I move the FCAT scoring rubric to immediately before the Primary Traits Scoring Rubric and justified the move by saying that they believed some teachers may be unsure of the difference between the two and that having the descriptions right next to one another would help them make an immediate comparison rather than a guess. Finally, one of the educators pointed out that there was no option for her to choose in item number thirty-seven, as she was a sixth-grade teacher, and there was no answer choice for sixth grade. That option was promptly added to the survey. Hearing from both a non-educator group as well as from a group of educators helped to fine-tune the questionnaire and to make it understandable to a larger group of respondents.

During the pilot study with educators, I attempted to establish the validity of the questionnaire. The educator respondents were asked if they believed that the questionnaire was asking them about their beliefs regarding the evaluation of writing. All eight respondents responded in the affirmative, lending support to the content and face validity (Gall, Gall, & Borg, 2003) of the questionnaire. Further supporting the validity of the instrument were the comments from two of the educators who said that the questionnaire was specific to my study rather than being extremely general in nature. It was, however, not possible to measure the construct validity of the questionnaire because there was not a similar, already validated instrument available to provide to my pilot group. Also,


because this questionnaire is not predictive in nature, it was not possible to establish predictive validity for this instrument.

Questionnaire distribution.

Data collection took different forms during this study. I personally distributed questionnaires to twelve of the contact people involved with the registration of students for the writers' conference who agreed to participate in the study on the day of the conference (76 questionnaires distributed at the conference). Because one school district had not yet given permission to conduct research in its schools, the questionnaires for the three schools located in that district were mailed to the conference contact people once the permission was received. That mailing (38 questionnaires) occurred two weeks after the conference. A total of 41 completed questionnaires were returned via mail or were completed online.

While the questionnaire was given out by hand or through the mail in paper form, it was also available to all participants online via Survey Monkey, a data collection site. I chose to hand out and mail the surveys because I knew it was likely that most of the participants would complete the surveys at the school where they work. Some teachers may have had a difficult time accessing the internet at their school, their school may have had protections in place on their computers that would prohibit them from viewing an unauthorized website, like a survey site, and some teachers may have preferred a portable paper option (Dillman et al., 2009). For others, the convenience of the online survey option provided an appealing format for completing the questionnaire (Dillman et al., 2009).


Following the advice of Dillman et al. (2009), a thank-you email was sent to participants a week after the distribution of the questionnaire, thanking them for their participation and encouraging them to return the questionnaire if they had not yet had an opportunity to do so. The participants not returning the questionnaires by the end of the fourth week after the distribution/mailing were sent an email asking if they needed more copies of the questionnaire and giving the link to the questionnaire on Survey Monkey. Because all contact had to filter through the conference contact people, individual participants were not able to be reached. Sending an email to the contact person, which could be easily forwarded to the eligible participants at their school, appeared to be the best option for a reminder. As recommended by Dillman et al. (2009), one last reminder, in the form of a handwritten note and email, was sent to the remaining non-responders two weeks after the initial email reminder.

Participant demographics.

In completing this research, I utilized a mixed methods approach to collecting data. For the first phase of data collection, I obtained 43 responses to my questionnaire (see Appendix B), with 41 of those responses being complete. The two incomplete questionnaires contained only three completed responses on one and two completed responses on the other. The majority of the respondents (73.2%) were teachers, with another 12.2% being reading specialists or literacy coaches, 2.4% administrators, and the remaining 12.2% from other categories such as media specialist, writing resource teacher, and homeschooling mom (see Figure 1). Of those participants, 92.7% were female. The participants represented public (65.9%) and private schools (31.7%) as well as one homeschool (2.4%) in three different counties. County A was represented by


52.5% of the participants, while 45% of the participants worked in County B, and 2.5% of the participants were employed in County C.

Figure 1: Responses to question 33: What is your current position?

The participants represented a wide range of ages and years of experience. More of the participants identified themselves as being over 50 years old (32.5%) than fell into any other age category. Those in the age range of 33-38 years were the next largest group (27.5%), with another 17.5% being 39-44 years old. The remaining participants fell into three other categories: 21-26 years old (7.5%), 27-32 years old (12.5%), and 45-50 years old (2.5%). The participants also represented a range of teaching experience, with 27.5% of them having 11-15 years of experience, 25% having 6-10 years, 20% having only 1-5 years, 17.5% having over 20 years, and 10% having 16-20 years of teaching experience (see Figure 2).


Figure 2: Responses to question 35: How many years have you taught writing to students? Please include all years spent teaching writing to students in this response, even if you are not currently teaching.

The participants also indicated their highest level of education on the questionnaire. An equal number of participants held bachelor's degrees (20) and master's degrees (20). One teacher indicated that she had a bachelor's degree and that she also held a National Board Certification. All participants who responded to this question had experience with teaching, and 97.6% of the respondents had personal experience with teaching writing to students at some point during their career in education. Of that group, 95.1% were teaching writing to students when they completed the questionnaire (see Figure 3). They represented kindergarten (2.7%), first grade (5.4%), second grade (8.1%), third grade (13.5%), fourth grade (27%), fifth grade (40.5%), and sixth grade (2.7%).


Figure 3: Responses to question 37: What grade level do you currently teach?

Questionnaire analysis procedures.

Data analysis began as each stage of data collection ended. The questionnaires were distributed to twelve of the conference contact people on the day of the writing conference and to the remaining three contact people two weeks after the conference, when permission to do so was obtained from their school district. The period of time for questionnaires to be returned did not end until approximately six weeks after the conference date, so data analysis began approximately eight weeks after the conclusion of the conference. In order to facilitate data analysis of the questionnaire responses, the responses received on the paper copies of the survey were transferred over to Survey Monkey. Once I input the responses for the 24 paper surveys, I had a professional peer, also an educator, check my submissions against the paper copies in order to be sure that the entries were all accurate and true to the original form.


The questionnaire consisted of forty-four questions and represented a mixture of Likert scale, multiple-choice, and open-ended questions. The data were inductively analyzed by developing a coding system to help identify any patterns or themes that arose while reviewing the responses to the open-ended questions. This was done by first reading through all of the responses to obtain an idea of the content. Then, in a second rereading, the formal generation of codes began in a systematic way as similar words and topics were highlighted and patterns and themes began to emerge. Constant comparison analysis (Strauss & Corbin, 1990) was used as I read through the open-ended responses to look for themes across the responses from different teachers. After highlighting similar words and phrases, I conducted a frequency count to see how often the patterns of words and phrases actually appeared and to find which responses were repeated on a frequent basis. Finally, all common themes were retyped in a word processing file within Microsoft Word. The patterns or themes identified were not labeled by the participants of the study. Instead, they represented sensitizing concepts identified by the researcher (Patton, 2002). In summary, a close content analysis of all of the data allowed me to identify, code, categorize, classify, and label the primary patterns in the data (Patton, 2002). This analysis occurred with the data from individual schools and teachers as well as across the participating schools and teachers.
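To make the frequency count described above concrete, the short script below tallies hand-assigned theme labels across open-ended responses. It is a minimal sketch only, assuming the coded responses were exported to a spreadsheet; the file name, column name, and example theme labels are hypothetical placeholders rather than the actual coding scheme used in this study.

# Minimal sketch: tally how often each hand-assigned theme label appears
# across the open-ended questionnaire responses. The file and column names
# are hypothetical; one row per response, theme labels separated by semicolons.
import csv
from collections import Counter

theme_counts = Counter()

with open("open_ended_responses.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # A response may carry several labels, e.g. "rubrics; peer conference".
        for theme in row["themes"].split(";"):
            theme = theme.strip().lower()
            if theme:
                theme_counts[theme] += 1

# List the themes from most to least frequent to see which patterns recur.
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")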


When the coding and categorization were completed, the resulting categories were judged and evaluated for completeness. First, the categories needed to demonstrate internal homogeneity, by showing that the items in each category held together in a meaningful way, and external heterogeneity, by displaying differences between the categories that were bold and clear (Patton, 2002). Additionally, Patton (2002) asserts that the categories must be checked for completeness and must meet the following criteria: the set should have internal and external plausibility (i.e., the categories should appear consistent and should seem to comprise a whole picture), the set should be reasonably inclusive, it should be reproducible by another competent judge who can verify that the categories make sense and that the data have been appropriately arranged in the category system, and it should be credible to the people who provided the information. If there was any instance in which it appeared that there was more than one way to classify the information, I decided which classification system would be more "important or illuminative" (Patton, 2002, p. 466).

The same educator who assisted me in checking that the data input into Survey Monkey were accurate also read through the open-ended responses to three randomly chosen questions from the questionnaire to see if she identified the same patterns and themes. Because she is an educator, she was also able to verify that the themes, in her opinion, would be credible to other educators. She concurred with the patterns that I identified.

For the scalar and multiple-choice questions, descriptive statistics (measures of central tendency and measures of variability) and percentages were used to describe the array of responses from the participants. This study was descriptive in nature, so the quantitative statistics utilized were intended to give readers of the report a snapshot of the views of the educator participants as they were at the time of the survey's administration. Reporting the measures of central tendency (the mean, median, and mode) as well as the measures of variability (the standard deviation, variance, and range) gives a clear picture of the points of view of the educators who responded to the questionnaire as well as an


idea of their backgrounds and current educational status. Additionally, a derived score was calculated based on the participants' responses as compared to those of their peers relative to their age, years in the profession, position held, etc. (Gall et al., 2003). Completing a chi-square test to examine the responses of the participants with respect to the grade they taught, the county they taught in, etc. provided a p-value that enabled me to determine whether or not any statistically significant relationships existed between those areas.
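As an illustration of the quantitative analysis described above, the sketch below computes measures of central tendency and variability for one scalar item and runs a chi-square test of independence between two categorical items. It is a minimal sketch only, assuming the questionnaire responses were exported to a spreadsheet; the file name, column names, and the particular pairing of variables are hypothetical placeholders, not the actual instrument items.

# Minimal sketch: descriptive statistics and a chi-square test on a
# hypothetical questionnaire export; file and column names are placeholders.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("questionnaire_responses.csv")

# Measures of central tendency and variability for a scalar item.
years = df["years_teaching"].dropna()
print(years.mean(), years.median(), years.mode().tolist())
print(years.std(), years.var(), years.max() - years.min())

# Chi-square test of independence, e.g. grade taught versus the
# evaluation method a respondent reported using most often.
table = pd.crosstab(df["grade_taught"], df["primary_method"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")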


Phase Two: Observations

The schools that participate in the writers' conference generally use one of six selection methods to select participants: teachers select the most improved writers, teachers select the best writer(s) in their classes, teachers select children who love to write, children self-select to attend, teachers have classroom contests, and whole schools have contests (personal communication, February 17, 2010). Schools which had committees that selected their student participants for the conference through a process utilizing samples of student writing were invited to participate in observations and interviews in order to give me a better understanding of the choices made during the evaluation of the student writing submissions. One private school consented to participate in the study, and the contact person from a public school in a different county indicated that the school might participate. I had hoped that a larger number of schools would engage in a selection process, but because of the three-year absence of the conference, schools that participated in the past, and may have used a selection process, did not elect to attend the conference this time.

Observation data.

Observation data are an important inclusion in this study because "people do not always do what they say they do" (Johnson & Turner, 2003, p. 312). I wanted to determine whether or not the teachers' beliefs (as shared in the interviews and on the questionnaire) matched the practices used in the classroom during the teaching and evaluation of writing. I also wanted to gain a richer understanding of the enactment of evaluation practices. As a result of a small beginning sample pool, the final participant number was small. The public school contact did not respond to my requests for times to visit her school to observe and interview a few teachers during the teaching of writing. Because of that non-response, that school was dropped from the observation portion of the study. Educators from that school did, however, respond to the questionnaire, so they were included as part of the quantitative data.

While the private school agreed to participate, I initially hesitated to utilize it in my study because I have family members affiliated with that school. Ultimately, I opted to observe two of the teachers at that school during the teaching of writing. I did not have a personal relationship with the teachers who were observed and interviewed for this study prior to the interviews and observations. I felt that this approach could provide me with a more meaningful unit of analysis and that I would gain a deeper understanding of teacher beliefs and practices in the evaluation of writing if I focused on two teachers during their evaluation practices and combined that data with their responses from the interviews, the questionnaire, and their students' writings (Patton, 2002, p. 447).


I observed one of the fifth grade classes during a 40-minute writing lesson in which the children produced writing. One of the teachers of that lesson was the former contact from that school for the conference and was previously interviewed. The co-teacher for that lesson was not previously interviewed and was not a part of the selection process for the writing conference.

Observation data analysis.

During analysis of the observational data, it was important to take all related data sources into consideration. Notes, voice recordings/transcriptions, student writings, and the researcher's reflective journal were all utilized during the analysis stage. Before beginning the analysis, the different participants' actions were examined in an effort to distinguish between those who displayed frontstage behavior (behaviors that occur because the participant thinks that is what the researcher wants to see) and those who demonstrated backstage behavior (behaviors that reflect the participant's natural motions and conversations), because frontstage behavior could skew the analysis in a divergent direction if it is not recognized (Goffman, 1959). After a first read-through of the transcription and my notes taken during the observation, I underlined points that caught my attention and took notes on a notepad during a second reading. After that, I looked for similarities between the underlined portions and notes, which I then highlighted. Then, after going back through the highlighted portions, I noted points of agreement, points of disagreement, and patterns. Those notations were compared to the teacher comments on the student writings and on the rubrics used by the teachers to evaluate the students' writing in an effort to identify whether or not the teacher comments matched the teachers' beliefs and expectations as shared in the interviews, observation, and questionnaires.


Phase Three: Interviews

When interviewing a participant (see Appendix C), I attempted to build a rapport that was not possible during the completion of a questionnaire. Having rapport established can help the interviewee feel more comfortable and more apt to share his or her perspective about the questions being asked, and it allowed the interviewer the opportunity to probe the participant for more details (Johnson & Turner, 2003). Patton (2002) shares that if participant observation means "walk a mile in my shoes," in-depth interviewing means "walk a mile in my head," and thereby makes the purpose of the interview clear (pp. 416-417). Having the opportunity to talk with the participants in person allowed me to gain a deep understanding of their stated beliefs.

In order to be a proficient interviewer, Patton (2002) recommends that interviewers must be interested in hearing what the participants have to say. Additionally, it is important to be objective and open to anything that the respondents would like to share without attempting to influence their answers in any way. I was extremely interested in speaking with the educators regarding their beliefs about the evaluation of student writing and tried to show them my interest in anything that they would like to share so that they would feel comfortable speaking with me and so that they would know that I had no expectations of what their answers would be. The interview is an area of data collection where the participants are allowed to bring the interviewer into their world rather than being forced to make their world fit on a form created by the researcher (Patton, 2002). Fortunately, I have had numerous experiences with interviewing educators throughout my years as a master's and doctoral student. I worked hard to


portray interest and objectivity and feel that my past experiences prepared me well for these interviews.

The Educator Interview.

The Educator Interview is considered a standardized open-ended interview in that the questions were worded the same from one interviewee to the next, and the questions were asked of the participants in the same sequence (Johnson & Turner, 2003; Patton, 2002). This interview format allowed for comparability across interviews by ensuring that all participants had a consistent experience with the same questions. The standardized interview has the added benefit of making more efficient use of the participants' time by moving along quickly and of making data analysis easier for the researcher (Patton, 2002). There were prompts written into the interview that allowed the participants to elaborate on a response or to bring in information from a slightly different topic than the main question. Such prompts were helpful in gathering additional information while still staying on task during the interview. While this format prohibits the interviewer from asking about unforeseen topics that arise, at the end of the interview I did ask the participants if there was any other information that they would like to share with me.

The interview consisted of seven open-ended questions, with probes (see Appendix C) that accompanied each question to be used if necessary, and three demographic questions. The questions used for this interview were formulated for use in this study, with the question types and order being determined with the guidance of the question formation techniques suggested by Dillman et al. (2009). Those techniques include: making sure that the question is applicable to the respondent or providing an


alternative for the respondent if there is a non-applicable question, asking the question in the form of a complete sentence, asking only one question at a time, using simple and familiar words in the question, and using specific and concrete words in the questions. I also followed the guidelines provided by Dillman et al. (2009) when deciding upon the order of the interview questions. For example, the questions about the possible impact of the FCAT writing assessment, a standardized writing assessment administered to students in public schools as required by the state in grades four, eight, and ten, on the selection process for conference participants are at the end of the interview because I did not want the educators to be thinking about the FCAT throughout the whole interview and risk introducing a question order effect whereby the first response then impacted all of the following responses to questions that were, quite possibly, unrelated to the FCAT (Dillman et al., 2009, p. 312). Other considerations included the grouping of questions that are related (i.e., asking two questions about writing in general, then two about the conference, and then two about the FCAT), asking the demographic questions last, and thinking carefully about which question to ask first (Dillman et al., 2009).

The number of interview questions was selected based on the recommendations of Patton (2002), who warns that a large number of questions can lead to extremely lengthy interviews that are tiring for the participants and time-consuming for the researcher. Because there were a limited number of questions, it was important to be sure that the respondents answered each question fully. The probes (see Appendix C) were utilized if, after appropriate wait time, the respondent was unsure of an answer or needed help in formulating a deeper, richer response with more details and elaboration than they provided in their initial response (Patton, 2002). Similarly, while the respondent was


speaking, I followed the advice of Patton (2002) and was careful to provide appropriate recognition and feedback to the participant by utilizing such behaviors as tilting the head, raising the eyebrows, slightly nodding, or even remaining silent at appropriate times during the interview. At the end of the interview, the educators were given the opportunity to make any additional remarks that they wanted to share.

Interview data.

The headmaster at the private school and the teacher who was in charge of the conference participant selection process at that school in previous years were asked to participate in one-on-one interviews for the study. Those interviews were scheduled at the convenience of the educators. A time before school was chosen by the teacher, and the headmaster elected to meet after school. The teacher's interview lasted approximately 13 minutes, and the headmaster spoke with me for approximately 10 minutes. The interviews occurred at their school so that the surroundings were familiar, and the context of the interview was natural for them (Patton, 2002). Those interviews were voice recorded, and transcripts were completed.

The co-teacher for the observed lesson was not previously interviewed and was not a part of the selection process for the writing conference. I therefore adapted the interview for her by removing the questions specific to the writing conference selection process and by adding a couple of questions related to the lesson that the two teachers taught versus her own experiences as a writing student. She was unable to schedule a time to meet in person and requested a written copy of the questions for expediency. I provided those questions to her, which she completed and returned via email. Similarly, I asked her co-teacher, the originally interviewed teacher, to complete a few reflection questions after the lesson.


82 She also requested that those be sent a nd completed via email because of their busy schedule. I complied by sending her those questions, and she returned them to me via email. Interview data analysis. Analysis of the interview data be gan with the coding of repeated responses or phrases found in the notes, tr anscriptions, and/or voi ce recordings of the interviews. The coding process i nvolved the segmentations of data into smaller, related chunks of data that became meaningful again once content analysis exposed cross-case or cross-interview similarities, differences, a nd/or ambiguities (Patton, 2002). This coding was completed with the data from two face-to -face interviews that were conducted at the private school and with the written inform ation shared in response to the interview questions by the co-teacher of the lesson observed during case study data collection. I was looking for areas of agreement or similari ty, areas of differences and new areas that naturally occurred through the reading of the thr ee participants inte rviews (Johnson & Turner, 2003). I looked primarily at the responses of the two teachers when looking for agreement in their responses as a whole and within each question. This was done by completing several readings of the tran scriptions and by looking at my notes. I highlighted any points of agreement or disa greement and made notes on my notepad. Next, I looked to see if their responses matche d the expectations put forth in the interview of their headmaster and highlighted any ke y words that indicated an agreement or disagreement between her respons es and those of her teachers.


83 Phase Four: Writing Samples Writing sample data. During the course of this study, three se parate sets of student writing samples were obtained. The first set came from the winning writings of the students who were selected to attend the writing conference s ponsored by a local university. While I was not able to speak or correspond di rectly with the judges for th e school, the headmaster of the private school provided some demographic in formation about the j udges and shared some of the anecdotal remarks that they made when returning the student writings to the school. Additionally, I received copi es of the three rubrics used to evaluate the writings of the conference contest participants a nd anonymous copies of the winning writings, those whose authors attended the conference, for one class of second, all of fifth, and all of sixth grades. There were three writings from second grade students, six from fifth grade students, and six fro m sixth grade students. The remaining writing samples were the products of writing lessons taught to a fifth grade class by a team of two teachers. I observed them during one forty-minute lesson during which the 20 students completed dr aft paragraphs on a topic of their choice. I received anonymous copies of those wr itings along with the comments made on the writings by their teachers. The teachers late r taught a follow-up lesson, and I received copies of the 20 final draft paragraphs written by the students on an assigned topic along with the completed rubrics with grades a nd comments from the teachers. The comments on these writing samples were then analyzed.


Writing sample data analysis.

Three sets of student writing samples were collected during the course of this study for a total of 55 collected student writings. The first writings came from the winning participants of the conference with three samples from second grade, six from fifth grade, and six from sixth grade. Because I did not have the non-winning writings as a basis for comparison, I looked at these writings in search of interesting sections, and I examined the provided rubrics to gain a better understanding of what aspects of writing the educators at that school viewed as being important when evaluating writing.

The second set of writing consisted of the 3.8 paragraph drafts written by the students during the observed lesson. I began analysis of these writings by labeling the samples with names so that I would be able to differentiate between them. I simply picked up the stack and began giving them names alphabetically, beginning with a name beginning with the letter a for the writing on top, a name beginning with the letter b for the next writing on the stack, and so on until I reached the letter t at the end of the stack. Then, I began sorting the written comments of the teachers into categories by highlighting similar comments in similar colors and then checking to be sure that all of the commonly colored comments fit together. Next, I completed a simple frequency tally in order to check on the prevalence of the different categories of comments.

The third set of student writing samples resulted from a follow-up lesson to the one that I observed. In this lesson, the students were asked to write a 3.8 paragraph focused on their morning routines. The students were told that they would receive a grade for this writing. The analysis of these writings followed the same steps as described above for the second set.
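The frequency tally described above is a simple counting step once the comments have been grouped by color. A minimal sketch of that bookkeeping is shown below; the category labels and comments are hypothetical examples, not the categories or data from this study.

from collections import Counter

# Hypothetical teacher comments, each already assigned to a category during hand-coding.
# The labels here are illustrative only; they are not the categories used in the study.
coded_comments = [
    ("Great detail in this sentence!", "praise"),
    ("Check your punctuation.", "conventions"),
    ("Add a closing sentence.", "organization"),
    ("Nice word choice.", "praise"),
]

# Tally how often each category of comment appears across the writing samples.
tally = Counter(category for _, category in coded_comments)

for category, count in tally.most_common():
    print(f"{category}: {count}")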


85 Role of the Researcher My goal in conducting this study was to remain as objective as possible in the analysis of data and to acknowledge my bias because I was the one person collecting all of the necessary data. The primary area of c oncern for me was the inclusion of a school as my case study with which I have a persona l connection. Because of that connection, I was careful to always double check my notes an d the triangulation of my data to be sure that my findings could be supported by a nother researcher based on the written and recorded evidence that I possessed. In that sa me vein, I repeatedly asked a peer who is also an educator to see if she agreed with my analysis. There was never an instance of disagreement between my summariza tion of the analysis and hers. Additionally, Patton (2002) reports four ways in which th e presence of an outside researcher can impact the data being collect ed: 1) the presence of the researcher can cause a reaction amongst the par ticipants 2) change s in the actions, thoughts, or physical health of the researcher 3) the preexisting bias es of the researcher and 4) the inability of the researcher to display competence in data collection and analysis (p. 567). I did not notice any obvious signs that my presence affect ed the actions of the participants, but it is possible, especially dur ing the observation, that I may have made the teachers nervous. In order to mitigate that possibility, I attemp ted to talk with the teachers before the observation about other subjects to relieve some of the tension that would have been more likely to exist if I simply entered the room with the sole intention of watching them teach and then leaving again. A series of observations would have provided a richer portrait, but for the purpose s of this study, and in the available timeframe, one observation was completed. However, while th e teachers and I did not have a personal


86 relationship before meeting for this study, they were used to seeing me around their school. Simply having that previous expos ure may have helped to increase my trustworthiness in their eyes (Patton, 2002). I experienced no changes that I was aware of during the course of data coll ection, I attempted to alleviate concerns about my personal biases by checking with a peer educator througho ut data analysis, and I worked to display competency in data collection and analysis by referring back to research (Dillman et al., 2003; Patton, 2002) for guidance when I was le ss than 100% certain of my next step. One area in which I may have affected the data was in the samples of student writing. It is possible that th e halo effect may have ta ken place because the teachers knew in advance of evaluating the student papers that I would be examining them (Patton, 2002, p. 567). Knowing that an outside party was going to look at their grades for a study on evaluating writing could certainly have influenced the amount and types of comments that they left on the student writ ings. However, it is quite likely that I overestimated the impact that I had on the educators involved in my study, and my awareness of my possible effects on the data and on the analysis allowed me to provide many checks of my competence throughout th e study (Patton, 2002). In terms of my analysis of their comments, I hesitate to ma ke blanket statements about the effectiveness or ineffectiveness of their evaluations because I was so limited in th e amount of time that I spent with them. It is possible that the comments made on these student papers were either altered because of their knowledge that I would look at them or because of the nature of the lesson (these writings resulte d from introductory lessons that would be continued over the next several weeks). More discussion regarding the evaluation of the writing samples can be found in chapter five.


87 Triangulation Finally, it is important to remember th at in lieu of statistical significance, qualitative findings are judged by their s ubstantive significance (Patton, 2002, p. 467). There is no one way to determine how substa ntive the data are. Instead, it should be considered whether or not the triangulation of data sources supports the findings. It is also helpful to consider whet her the findings are consistent with other knowledge in the field or if they further our knowledge about that field, which, in this case, is how educators respond to writing. Triangulation o ccurred in this study w ith the collection of observation data, questionnaire responses, thr ee sets of student writings with teacher feedback, and interviews with the teachers as well as with their headmaster. Conclusion Analysis of the data gathered in this study occurred at many levels and covered both quantitative and qualitativ e data. A look at the research questions listed on Table 1 shows the relationship between the research questions and the data from the study. Information received on the questionnaire s (Phase one data) helped broaden my understanding of questions one, two, and thre e while the observation (Phase two data) illuminated questions two and three. The inte rviews (Phase three data) brought greater understanding to the answers to the second and third ques tions. Finally, the student writing samples gave me a deeper understandin g of question one (Pha se four data). All four phases of data intermingled and worked to gether to help form a broad picture of the methods of evaluation for student writing used by the educator partic ipants in this study.


Table 1
The relationship between research questions and their data source(s)

Research Question 1: Are there differences in the ways in which educators approach evaluating student writing?
    Source of Data: Questionnaire; interviews; student writing samples with accompanying teacher comments and rubrics
    Data Summary: 41 completed questionnaires; two completed verbal interviews; one interview completed as a questionnaire; 40 student writings (20 with comments and 20 with both comments and rubrics from teachers)

Research Question 2: How effective do educators believe their evaluation methods are for judging the quality of students' writing samples?
    Source of Data: Interviews; observations; questionnaire
    Data Summary: Two completed verbal interviews; one interview completed as a questionnaire; one classroom observation; 41 completed questionnaires

Research Question 3: What factors impact the evaluation decisions of educators?
    Source of Data: Questionnaires; interviews
    Data Summary: 41 completed questionnaires; two completed verbal interviews; one interview completed as a questionnaire

In order to fully encompass all possible areas of interest throughout this study, the research was conducted under a mixed methods paradigm, which allowed all aspects of the educators' beliefs and practices to be studied. Using a mixture of both quantitative and qualitative methods allowed for stronger inferences to be made from the results of the study and also allowed for a greater variety of views from the participants to be showcased (Tashakkori & Teddlie, 2003). All data collection procedures, both qualitative


89 and quantitative in nature, took a realist appr oach in an attempt to gain an accurate portrayal of what is actually occurring during writing evalua tions and to determine if a plausible explanation for those acti ons could be found (Patton, 2002, p. 132). The design of the study was na turalistic in nature as no manipul ations of the educators or of their experiences occurred. Instead, the study was as unobtrusive as possible and allowed the educator-participants to remain in their familiar surroundings and to conduct their activities as they would normally (Patton, 2002). In an effort to gain multiple perspectives of the views of the educators involved in this study, multiple methods of data collection occurred. Gathering several types of data allowed the results to be tria ngulated. When data entered the analysis stage, triangulation helped to show the extent of the consistenc y in the responses across the different data sources. It also helped to make up for some of the weaknesses that re sulted in one area of data collection by covering that area again in another method (Patton, 2002). Additionally, the combination of multiple sources of data gave a comprehensive perspective of the educators beliefs and prac tices as they relate to the evaluation of writing (Patton, 2002, p. 306). A questionnaire, interviews, and observati on were all utilized throughout the data collection stage of this study. The Writing and Evaluation Questionnaire is a hybrid questionnaire and includes both closed a nd open-ended questions (Dillman, Smyth, & Christian, 2009). The inclusion of open-ende d questions allowed for the educators to share views that would be impossible for them to include on a closed-ended instrument while the closed-ended questions allowed a qui ck and easy way for the participants to select an answer that best represented their beliefs. It was necessary to include both types


90 of questions because Dillman et al. (2009) found that in addition to the open-ended questions providing rich, meaningful informa tion that cannot be expressed in an only closed-ended response format, the open-ended questions provide an easy way for respondents to skip questions when they want to avoid having to fill-i n a blank or to write a lengthy answer. It was, therefore, be tter to include both open and closed-ended questions. It was imperative that the questionn aire be included in this study as it was desired that the largest sample possible be obtained. Because of time constraints and scheduling conflicts between schools, it was im possible to interview and observe all of the teachers involved with the writers confer ence. The questionnaire, however, was sent to every participating school and allowed for a larger sample size. The two types of qualitative data that were collected were observations and interviews. Additionally, stude nt samples of writing were analyzed. These methods of collection were utilized because of their ability to show richer and more diverse beliefs of the participants (Patton, 2002). The observation was useful because of the objectivity that it offered and was helpful in providing a de scriptive picture for da ta analysis. It also helped to reinforce the realist approach as the observation occurred in the schools where the educators work, which provided the participants with a level of comfort and normality that would not be found in another location (Johnson & Turner, 2003). While not quite as founded in the realistic pers pective as are observations, the interview provided an opportunity to probe the participant for more detailed information when it was necessary or helpful to do so (Johnson & Turner, 2003, p. 305). It also allowed entrance into the perspective of the pa rticipants and to see the situ ation through their eyes (Patton, 2002). This melding of the qualitative and quantitative methods of data collection


provided a more complete glimpse into the realm of educators' beliefs about evaluating writing.


Chapter 4

Results

The overarching purpose of this study was to describe educators' beliefs about the evaluation of student writing. The inquiry was guided by the following research questions: (a) what are the differences in the ways in which educators approach evaluating student writing? (b) how do educators evaluate the effectiveness of their evaluation methods for judging the quality of students' writing samples? and (c) what factors impact the evaluation decisions of educators?

Results by Phase of Data Collection

Phase one: Writing and evaluation questionnaire.

The questionnaire responses help to illuminate the answers to the research questions, so they will be addressed with regards to the research question to which they are related. The research questions were also addressed by the interviews, observation, and student work samples, and those will be presented after the analysis of the data from the questionnaire. Because of the wide range of topics covered by the open-ended questions (7 total), many patterns (40 total with much overlap) were identified amongst the responses for each individual question. Because the open-ended questions were distinct and addressed many different topics within writing evaluation, all of the patterns cannot be condensed into a smaller number. However, there are some commonalities that run throughout the responses to those seven questions, which can be placed into five


93 categories. Those five categories are instru ction/planning, student skills, growth and development, feedback, and limitations and will be discussed further in chapter five. Research question one. The first research question, What are th e differences in the ways in which educators approach evaluating student writ ing? was addressed by eighteen different questions on the questionnaire. There are many evaluation options available to educators, and their responses to these questions show that they all go about the evaluation of writing in different ways. The first questi on on the questionnaire, What do you see as the purpose(s) of writing assessment? provided insight into the ways in which educators approach the evaluation of st udent writing. This was an ope n-ended question, so through an analysis of the responses, I identified four different patterns in the responses of the educators: students performance in relati on to the lesson objec tives, assessment of students writing skills, monitoring the progress and growth in student writing, using assessment results to guide instructional deci sions, and using assessment results to guide feedback for the writers. Lesson objectives. The first pattern identified in the educator responses to this question was that they believed that one purpose of writing assessmen t was to determine whether or not their lesson objectives were met. For example, thr ee respondents shared that the results of the writing assessments showed them whether or not their students had followed the correct form in their writing as requested by the teacher. Another pattern in the responses was that the educators view writing assessment as a way to check on the level of the writing skills of their students. Some of the skills mentioned included revision, grammar,


94 mechanics, and the different writing traits . Many points in the responses (37 total) referred to the ability of writing assessm ent to show the teacher the progress, development, and growth of the students in their writing abilities. Educators shared that writing assessments allow them to look for ar eas of strength and weakness in addition to establishing if the students are on, above and below levels with respect to their grade/age-appropriate expecta tions. Additionally, educator s indicated that assessment results were often used to guide their instructional decisions For example, one respondent shared that she assessed her students writings while thinking about how the results could direct future lessons while another wrote that th e assessment results helped her decide what craft/trait to t each next. Some responses (11 total) included thoughts about the ways in which writing assess ment helped them to provide teachers, parents and most importantly students with feedback about their wr iting. While all of these responses were related to whether or not the students met th e objectives of the lessons, the majority of the responses (37 total) were most concerned with whether or not the students met those objectives while showing growth in their writing. Frequency of assessment of writing. The second question on the questionnai re, How often do you assess student writing? also gave information related to the differences in the ways in which educators approach the evaluation of st udent writing (see Figure 4). Over 50% of the respondents assesses student writing either once a mont h or less (25.6%) or once every couple of weeks (30.2%). An equal numbe r of participants assessed student writing once a week (14%) or once a day (14%) while 16.3% of the respondents gave other as their response. A look at their responses showed replies such as quarterly, twice per


month, and varies. Those teaching 1st, 2nd, or 3rd grades seemed to report assessing writing on a more frequent basis than those teaching 4th, 5th, or 6th grades.

A Chi-square test was performed to test whether there was any association between the frequency of assessing writing assignments and teachers' experience, grades taught, and age. Here, the null hypothesis of no association was tested against the alternative hypothesis that there was a significant association. I would reject the null hypothesis if the p value of the test was less than 0.05. Chi-square test analysis (p = 0.816) reveals that there was no significant association between frequency of assessing writing assignments and a teacher's experience, grades taught, and age.

Figure 4: Responses to question two: How often do you assess student writing?

In addition to looking at those factors in relation to their responses, I also compared the responses of teachers from County A with those from County B to see if there were any differences in the distribution of their responses (see Table 2).


The most frequently selected response (37.5% for County A teachers and 38.9% for County B teachers) was that the teachers assess student writing once every couple of weeks. There was a difference in the response, however, for those reporting assessing student writing once a month or less. While only 12.5% of the teachers in County A selected that response, 33.3% of the County B teachers reported only assessing writing once a month or less.

Table 2
Responses to question two

                   Other       Once a day   Once a week   Once every couple of weeks   Once a month or less   Total
Public County A    1 (12.5%)   2 (25.0%)    1 (12.5%)     3 (37.5%)                    1 (12.5%)              8 (100.0%)
Public County B    3 (16.7%)   2 (11.1%)    0 (0.0%)      7 (38.9%)                    6 (33.3%)              18 (100.0%)
Total              4 (15.4%)   4 (15.4%)    1 (3.8%)      10 (38.5%)                   7 (26.9%)              26 (100.0%)
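A county-by-response table such as Table 2 can be generated directly from respondent-level records. The sketch below, using pandas, is only an illustration of the mechanics; the handful of records shown are hypothetical, not the questionnaire data summarized above.

import pandas as pd

# Hypothetical respondent-level records (county, reported assessment frequency).
responses = pd.DataFrame({
    "county": ["A", "A", "B", "B", "B", "A", "B", "A"],
    "frequency": [
        "Once every couple of weeks", "Once a day", "Once a month or less",
        "Once every couple of weeks", "Other", "Once a week",
        "Once every couple of weeks", "Once a month or less",
    ],
})

# Counts of each response option by county, with row and column totals.
counts = pd.crosstab(responses["county"], responses["frequency"], margins=True)

# Row percentages, so each county's responses sum to 100%.
percents = pd.crosstab(responses["county"], responses["frequency"], normalize="index") * 100

print(counts)
print(percents.round(1))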


In response to the question, "What percentage of the time that you spend assessing all of your students' work would you say is spent assessing writing assignments?" the educators report that they spend varying amounts of their assessment time on assessing writing (see Figure 5). A large group of educators (41.9%) marked that they spend 25-49% of their assessment time on writing assignments. Close behind that group was the 50-74% range where 32.6% of the teachers spend their time on writing assessment. Another 23.3% of the teachers spend 0-24% of their time assessing writing assignments, and only 2.3% report spending 75-100% of their assessment time dealing with writing. A similar pattern of responses was found across experience of teachers, grades taught, and the age of the teachers. A Chi-square test was performed to test whether there was any association between the percentage of time being spent on assessing writing and their experience, grades taught, and age. Here, I tested the null hypothesis of no association against the alternative hypothesis that there is a significant association. The Chi-square test revealed that there was no significant association (p = 0.288) between percentage of total work in assessing the writing assignments and the teachers' experience, grades taught, and age.

Figure 5: Responses to question three: What percentage of the time that you spend assessing all of your students' work would you say is spent assessing writing assignments?

Most important aspect of writing during assessment.

Another area where the educators display differences in their responses regarding the ways they approach the assessment of writing (see Figure 6) is found when asking, "What is the most important aspect of writing that you are looking for when you assess a student's writing?" The majority of educators (56.1%) reported their belief that ideas/concepts would be the most important aspect that they look for when assessing writing.


The next most common answer, "Other," was used by 19.5% of the respondents, and this category included clarifications such as "whatever trait I just taught," "elaboration," and "all of the above." An additional 12.2% of the respondents selected organization as the most important aspect of writing for assessment purposes while correctness in grammar/punctuation, voice, and fluency only received 4.9%, 4.9%, and 2.4% of the responses respectively. A chi-square test was performed to test whether or not there was an association between what teachers look for as the most important aspect of writing during assessment and their experience, grades taught, and age. The results of that test were that there was no significant association (p = 0.406).

Figure 6: Responses to question eight: What is the most important aspect of writing that you are looking for when you assess a student's writing?

The next question, "Why do you feel that this aspect is the most important part of the writing to consider when you are assessing writing?" was an open-ended response question. An analysis of the responses to this question revealed six different patterns to those responses. Those patterns were: this aspect greatly impacts the readability of the writing (whether through illegible handwriting, spelling errors, or an unclear main idea),


99 this aspect is vital to the meaning of the paper, other aspects come naturally with time and can be done later, this aspect is especially difficult to master, this aspect is required to be taught by our standards or curriculum, and all aspects are equally important. The most prevalent pattern among the re sponses (in 20 responses) was that the most important aspect of writing is the one th at impacts its readability. Some of those responses included, it needs to flow, students n eed to clearly state their though ts, and it is necessary for a piece of writing to flow. The next most prevalent pattern (found in 18 responses) in the responses was that their chosen aspect of writing was the most important because it was vital to the meaning of the pape r. The educators shared that their chosen aspect was the foundation for writing, the essence of the message, and the thi ng that helps their writing to make sense. Other (13 total) responses (the other skills are usually developmental and will develop over time) shared the idea that their chosen one required teaching while some of the others would come to the writer with time and experience. Another pattern (6 responses) spoke to the perceived higher difficulty level of some aspects of writing as compared to othe rs. Those educators who believed that their chosen aspect was the most difficult wrote, ideas/concepts (and voice) are more abstract skills, most important but difficult for some students, and the other stuff is more concrete seemingly easier to develop. While three educator s wrote that all aspects of writing were equally important, six others sh ared that they are required to focus on the aspect of writing that they selected as be ing the most important, and one clarified by writing, this is what the state is looking at in order to get a score of 4. Overall, factors that teachers perceived to affect the readabi lity and the meaning of the students papers were reported to be the most important aspects of writing.


100 Drafts and evaluation. The educators were then asked, Do you ever have assignments in which your students write more than one draft for you? An overwhelming majority (82.9%) reported that they do have students, at least on o ccasion, write more than one draft of a writing assignment. The pattern of responses does not change when looking at the experience and age of teachers. However, the kindergarten teacher and the two first grade teachers were the only grades represented where all res pondents reported only having their students write one draft. In a follow-up question, How do the stude nts receive grades for those papers? the respondents answers were fairly evenly distributed among the answer choices (see Figure 7). The most common choice, The final copy and drafts are put together for one grade, received 36.4% of the responses while the next most common choice, The drafts are not graded, received 27.3% of the res ponses. The remaining responses were split between the other two options, Other and, Every draft has a separate grade, and those two choices received 15.2% and 21.2% of the responses respectively. Some of the other options reported by the participants included, Drafts can be evaluated using a checklist for completion and coaching. Final co pies are graded as well, and, Sometimes drafts are graded based off of the mini-lessons I have focused on. Sometimes, theyre not graded at all.


Figure 7: Responses to question twelve: How do the students receive grades for those papers?

A chi-square test was performed to test whether there was any association between whether teachers assess assignments with more than one draft and their experience, grades taught, and age. Here, I tested the null hypothesis of no association against the alternative hypothesis that there was a significant association. I rejected the null hypothesis if the p value of the test was less than 0.05. The chi-square test analysis reveals that there was no significant association between whether teachers assess assignments with more than one draft and their experience and age (p = 0.204). There was a significant association revealed between grades taught by teachers and assessing assignments with more than one draft (p = 0.023). Teachers who taught older students (in grades two through six) were more likely to have their students write more than one draft of their writing. Because it would logically be more difficult for kindergarten and first grade students to write a first draft, that statistical finding did not surprise me. I would expect older students to write more drafts than younger ones.


Table 3
Responses to question 11: Do you ever have assignments in which your students write more than one draft for you?

Grade          Yes          No           Total
Kindergarten   0 (0.0%)     1 (100.0%)   1 (100.0%)
1st            0 (0.0%)     2 (100.0%)   2 (100.0%)
2nd            3 (100.0%)   0 (0.0%)     3 (100.0%)
3rd            4 (80.0%)    1 (20.0%)    5 (100.0%)
4th            9 (90.0%)    1 (10.0%)    10 (100.0%)
5th            13 (86.7%)   2 (13.3%)    15 (100.0%)
6th            1 (100.0%)   0 (0.0%)     1 (100.0%)
Total          30 (81.1%)   7 (18.9%)    37 (100.0%)

Chi-Square = 14.618, p = 0.023
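A Pearson chi-square test of independence on the Yes/No counts in Table 3 can be reproduced with standard statistical software. The sketch below uses SciPy as an illustration; it is not the software used in the study, but run on the counts above it should give a statistic close to the reported value of 14.618 with six degrees of freedom.

from scipy.stats import chi2_contingency

# Yes/No counts by grade, taken from Table 3 (kindergarten through sixth grade).
observed = [
    [0, 1],   # Kindergarten
    [0, 2],   # 1st
    [3, 0],   # 2nd
    [4, 1],   # 3rd
    [9, 1],   # 4th
    [13, 2],  # 5th
    [1, 0],   # 6th
]

# Pearson chi-square test of independence between grade taught and multiple-draft use.
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")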


Frequency of use of evaluation methods.

The next section of the questionnaire asked the respondents to "mark how often you use each of the following methods of assessment while assessing the writing of your students" and had a scale of "rarely, if ever," "once in a while," "frequently," and "almost always." Refer to Table 4 for the distribution of responses. With the exception of the FCAT rubric and self-assessment, the most commonly chosen response was "frequently." Portfolios had 36.6% of the responses for both "frequently" and "once in a while," while the most common answer (35.9%) for the FCAT rubric was "rarely, if ever," and self-assessment had 41.5% of the respondents indicating that they used that assessment "once in a while."

Table 4
Distribution of responses to the question, "Please mark how often you use each of the following methods of assessment while assessing the writing of your students." Response options: 1 = Rarely, if ever; 2 = Once in a while; 3 = Frequently; 4 = Almost always.

Checklists [Method by which the teacher notes whether or not the student has accomplished what he or she has been asked to do but without judging the quality of the work (i.e., has five sentences, made a cover, wrote a narrative piece, etc.).]
    Rarely, if ever: 14.6% (6); Once in a while: 26.8% (11); Frequently: 46.3% (19); Almost always: 12.2% (5)

Teacher Conferences [The teacher and student meet to discuss the student's writing. This conversation may include a discussion about the strengths and weaknesses, suggestions for revisions, attention to conventions, etc.]
    Rarely, if ever: 0.0% (0); Once in a while: 22.0% (9); Frequently: 51.2% (21); Almost always: 26.8% (11)

Peer Conferences [Students meet with one or more of their peers to share and discuss their writing. These meetings may include suggestions for revisions, sharing what they like or dislike about each other's writing, etc.]
    Rarely, if ever: 2.4% (1); Once in a while: 39.0% (16); Frequently: 48.8% (20); Almost always: 9.8% (4)

Holistic Scoring [This method of evaluating writing requires the evaluator to look at all components of a writing sample in conjunction when giving a final grade rather than assessing individual characteristics separately.]
    Rarely, if ever: 17.5% (7); Once in a while: 25.0% (10); Frequently: 45.0% (18); Almost always: 12.5% (5)

Portfolios [Teachers have students compile samples of their writing over the course of a certain timeframe (a grading period, the whole year, etc.) in order to evaluate the writing.]
    Rarely, if ever: 14.6% (6); Once in a while: 36.6% (15); Frequently: 36.6% (15); Almost always: 12.2% (5)

Observations [During the times when students write, the teacher watches to see what each student is doing and may make a mental or anecdotal note about what he or she observes.]
    Rarely, if ever: 7.3% (3); Once in a while: 26.8% (11); Frequently: 43.9% (18); Almost always: 22.0% (9)

Rubrics [When assessing student writing, the teacher looks at specified characteristics as outlined on a rubric and decides how well the student succeeded in each area before adding those scores together for a final score.]
    Rarely, if ever: 4.9% (2); Once in a while: 7.3% (3); Frequently: 46.3% (19); Almost always: 41.5% (17)

FCAT Scoring Rubric [The teacher uses the FCAT rubric to evaluate student writing. The rubric requires teachers to look at focus, organization, support, and conventions.]
    Rarely, if ever: 35.9% (14); Once in a while: 15.4% (6); Frequently: 33.3% (13); Almost always: 15.4% (6)

Primary Traits Scoring [The teacher predetermines what characteristics of the writing are the most important as well as what will be assessed and how it will be assessed. This method is specific to each assignment.]
    Rarely, if ever: 15.4% (6); Once in a while: 20.5% (8); Frequently: 38.5% (15); Almost always: 25.6% (10)

Self Assessment [Students are given the opportunity to evaluate their own writing. They may use criteria established by the teacher or may create their own criteria.]
    Rarely, if ever: 24.4% (10); Once in a while: 41.5% (17); Frequently: 29.3% (12); Almost always: 4.9% (2)

Other (Please specify in the box below)
    Rarely, if ever: 0.0% (0); Once in a while: 0.0% (0); Frequently: 100.0% (1); Almost always: 0.0% (0)

A chi-square test was performed to test whether there was any association between each method of assessment and teachers' experience, grades taught, and age. The test was conducted separately for each method of assessment. The analysis revealed that there is no significant association between any method of assessment and the educators' experience, grades taught, or age.


There were three follow-up questions asking the educators to identify which three of the writing assessment methods they used most frequently (see Figure 8). Observation methods (29.3%) and teacher conferences (19.5%) were the two most frequently used assessment methods. Rubrics were the next most popular method with 14.6% of respondents identifying them as the most frequently used method of assessing writing assignments. Checklists and holistic scoring were each chosen as the most frequently used methods by 9.8% of the respondents. Finally, peer conferences were the least frequently used method of assessment for 7.3% of respondents. This pattern remains the same when accounting for different levels of experience, grades taught, and for the age of the teachers. A chi-square test was performed to test whether there was any association between the most frequently used method and the experience, grades taught, and ages of the respondents. The analysis reveals that there is no significant association between most frequently used method and teachers' experience, grades taught, and age.

Figure 8: Responses to question 14: Select the method of assessing student writing listed in the previous question that you use most frequently.


106 In order to find out more about how educat ors differ in their choices of how to go about evaluating student writing, they were as ked, Why do you choose to use these three methods of assessing student writing more often than othe r methods? This was an openended question so that the respondents could share their decision-making process. This question resulted in a wide range of res ponses. The 41 educators brought up 75 different points in their answers. Among the reasons give n for selecting those methods, I identified six different patterns of responses. The educat ors shared that their chosen methods offer feedback to the students, offer feedback to th e teachers, work togeth er with a variety of methods for effective evaluation, are mandated to be used or are used because they are quick and easy to use, give students owne rship of the task while helping them to understand the purpose of the assignment, and are comfortable and familiar to the teacher. The most repeated reasoning (listed in 22 of the responses) was that they chose their most frequently used methods for evaluating writing based on the feedback that those methods offered to teachers. One teacher shared that her chosen methods help me to know my students strengths and weaknesse s the best, while another said that her choice allows me to see specific areas of writing as they improve. Many (20 total) educators stated that they felt that it was important to use a variety of assessments in writing for their combined effectiveness a nd because they work well in conjunction with each other. Twelve others mentioned that at least one of their methods of writing assessment was used because of a mandate or because it was a quick and easy option. It is easier and faster, it is mandated, and as a 4th grade team, we use these. Nearly the same number (11 total) of educators mentioned that their most frequently used methods of writing evaluation were chosen because they provided some ownership for the students


107 and/or clarified the purpose of the assignment for them. In their words, it helps the children to be aware of the steps, it allows students to gain independence, and it gives them the buy-in. Fi nally, four teachers mentione d that they utilized those methods that they were most comfortable and most familiar with in the classroom. There were varied responses to this questi on, but the majority of responses (20 total) showed that teachers hope to gain helpful f eedback from their students writings while engaged in the evaluation process. Use of formal vs. informal assessments. Looking at another difference in the ways that educators could approach the evaluation of writing, another question on th e questionnaire asked them, Do you use both formal (written feedback, rubrics, grad es, etc.) and informal (observation, anecdotal notes, conferences, etc.) methods of asse ssing writing? A majority (95.1%) of the respondents indicated that th ey do use both formal and informal methods of assessing writing, and that pattern was the same across all grades, ages, and years of experience. The follow-up question was, What percentage (for a total of 100%) of your time spent evaluating writing is spent on informal assessments and on formal assessments? The respondents were asked to give a percenta ge between 0 and 100 for both formal and informal assessments to add up to 100% of their assessments (see Tables 5 and 6).


Table 5
Percentage of time spent on informal assessments

Percentage   Frequency   Percent   Valid Percent
10.00        1           2.3       3.3
20.00        1           2.3       3.3
25.00        2           4.7       6.7
30.00        5           11.6      16.7
35.00        1           2.3       3.3
40.00        5           11.6      16.7
50.00        6           14.0      20.0
60.00        5           11.6      16.7
70.00        1           2.3       3.3
75.00        1           2.3       3.3
80.00        2           4.7       6.7
Total        30          69.8      100.0
Missing      13          30.2
Total        43          100.0

The percentage of the total evaluation time spent on informal assignments ranged from 10 to 80 percent. The median was 40%, which means that 50% of the teachers spent more than 40% of their total evaluation time on informal assignments. The percentage of the total reported evaluation time spent on formal assignments ranged from 20 to 90 percent with a median of 50%. This means that 50% of the teachers spent more than 50% of their total evaluation time on formal assignments. The educators spend a substantial amount of time on the evaluation of writing with both formal and informal assignments. However, they spent slightly more time, on average, using formal assignments.


Table 6
Percentage of time spent on formal assessments

Percentage   Frequency   Percent   Valid Percent
20.00        2           4.7       6.7
25.00        1           2.3       3.3
30.00        1           2.3       3.3
40.00        5           11.6      16.7
50.00        6           14.0      20.0
60.00        5           11.6      16.7
65.00        1           2.3       3.3
70.00        5           11.6      16.7
75.00        2           4.7       6.7
80.00        1           2.3       3.3
90.00        1           2.3       3.3
Total        30          69.8      100.0
Missing      13          30.2
Total        43          100.0

The ANOVA test procedure was used to test whether there was a significant difference in the average percentage of time spent between the different categories of experience, the grades taught, and the different age groups of the educators. Using the ANOVA allowed me to test the multiple means across all of the different units of analysis (years of experience, grades taught, etc.). In conducting this test, the null hypothesis is that the mean is the same for all groups. The test produces an F-statistic, which is used to calculate the p-value. As in the Chi-square test, if p < .05, there was a statistically significant result, and the null hypothesis would be rejected. This analysis was done separately for formal and informal assignments.
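The one-way ANOVA described above compares mean time percentages across groups of educators. The sketch below shows the general form of such a test with SciPy; the experience groupings and percentages are hypothetical stand-ins, not the study data (which are summarized in Appendix D).

from scipy.stats import f_oneway

# Hypothetical percentages of evaluation time spent on formal assessments,
# grouped by years of teaching experience (illustrative values only).
experience_0_5 = [40, 50, 60, 70, 50]
experience_6_15 = [50, 60, 40, 70, 65]
experience_16_plus = [30, 50, 60, 75, 55]

# One-way ANOVA: the null hypothesis is that all group means are equal.
f_stat, p_value = f_oneway(experience_0_5, experience_6_15, experience_16_plus)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
# Reject the null hypothesis only if p < 0.05.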


Refer to Tables D103-D109 in Appendix D to see the summary statistics of the percentage of time spent on informal and formal assignments for the different categories of experience, grades taught, and age groups of educators as well as for the ANOVA results. Those results reveal that there is no significant difference between different categories of experience, grades taught, and age of teachers in terms of the percentage of time spent on informal and formal assignments. Because there were only two private schools in the study (with only two responses from one of those schools) and because those two schools were in different counties, the ANOVA was not run for a private versus public school comparison as it would be in a study with a more representative group for the private schools.

FCAT rubric use.

The next two questions asked the educators about their experience with using the FCAT Writing Assessment rubric. The first question asked, "Do you ever utilize the FCAT Writing Assessment rubric to score papers?" The responses to this question were fairly evenly distributed with 55% of the respondents indicating that they did utilize the FCAT rubric while 45% of the respondents indicated that they never utilized the rubric. In order to gain more information on this topic (see Figure 9), the next question asked those who responded affirmatively, "How often do you use the FCAT Writing Assessment?" Many (47.8%) of the respondents indicated that they used the FCAT rubric on a monthly basis while 21.7% used it weekly, 4.3% used it daily, and 26.1% used it on another basis. Those responses were individualized and included "once a trimester," "three times a year," "at least two times monthly," and "every six weeks, 4th grade only." A chi-square analysis showed that there was no significant association between the frequency of the use of the FCAT rubric and the teachers' years of experience, grades taught, or age.


Figure 9: Responses to question 28: How often do you use the FCAT Writing Assessment?

The next question asked the educators, "Do you use any standardized writing assessments (SAT, FCAT, etc.)?" The majority (57.5%) of the respondents replied that they did not use any standardized assessments for writing. A chi-square analysis revealed no significant association between the use of the standardized writing assessments and the educators' age or years of experience, but there was a significant (p = 0.044) association between the use of the standardized writing assessment and the grades taught by the educators (see Table 7). Teachers of older students are more likely to use a standardized writing assessment than those of younger students, which was a similar finding to the likelihood of teachers of older students to require their students to write more than one draft of their writing.


Table 7
Relationship between the grade being taught and the use of standardized assignments

Grade Teaching   Yes          No           Total
Kindergarten     0 (0.0%)     1 (100.0%)   1 (100.0%)
1st              0 (0.0%)     2 (100.0%)   2 (100.0%)
2nd              3 (100.0%)   0 (0.0%)     3 (100.0%)
3rd              0 (0.0%)     5 (100.0%)   5 (100.0%)
4th              6 (66.7%)    3 (33.3%)    9 (100.0%)
5th              6 (40.0%)    9 (60.0%)    15 (100.0%)
6th              0 (0.0%)     1 (100.0%)   1 (100.0%)
Total            15 (41.7%)   21 (58.3%)   36 (100.0%)

Chi-Square = 12.960, p = 0.044

Finally, the last question that provides information about the different ways in which educators approach the evaluation of student writing was, "How often do you provide your students with written feedback on their writing assignments?" A large percentage (85.3%) of the respondents give written feedback to their students either almost always (39%) or frequently (46.3%) with an additional 12.2% giving written feedback once in a while and 2.4% doing so rarely, if ever (see Figure 10).


Figure 10: Responses to question 32: How often do you provide your students with written feedback on their writing assignments?

A chi-square analysis found no significant association between the frequency of written feedback and the educators' age or years of experience, but there was a significant association (p = 0.0001) between the frequency of written feedback and the grade levels taught by the educators. The frequency of giving feedback is higher for 2nd-6th grades than in kindergarten or first grade (see Table 8), which makes sense because kindergarten and first grade students would likely have difficulty reading written feedback from their teachers on their writing.


Table 8
Grade teaching and frequency of giving written feedback

Grade          Almost always   Frequently   Once in a while   Rarely       Total
Kindergarten   0 (0.0%)        0 (0.0%)     0 (0.0%)          1 (100.0%)   1 (100.0%)
1st            1 (50.0%)       0 (0.0%)     1 (50.0%)         0 (0.0%)     2 (100.0%)
2nd            1 (33.3%)       2 (66.7%)    0 (0.0%)          0 (0.0%)     3 (100.0%)
3rd            0 (0.0%)        4 (80.0%)    1 (20.0%)         0 (0.0%)     5 (100.0%)
4th            6 (60.0%)       3 (30.0%)    1 (10.0%)         0 (0.0%)     10 (100.0%)
5th            5 (33.3%)       8 (53.3%)    2 (13.3%)         0 (0.0%)     15 (100.0%)
6th            1 (100.0%)      0 (0.0%)     0 (0.0%)          0 (0.0%)     1 (100.0%)
Total          14 (37.8%)      17 (45.9%)   5 (13.5%)         1 (2.7%)     37 (100.0%)

Chi-Square = 47.769, p = 0.0001

Summary of findings related to research question one.

In reviewing the data collected, I found many patterns among the responses of the participants. While there were few areas of great difference in their answers, significance was found when looking at the evaluation practices of the teachers of grades two through six as compared to those of the teachers of kindergarten and first grade. The teachers of the older students were more likely to have their students write more than one draft of their writing assignments, were more likely to utilize standardized writing assessments, and were more likely to give written feedback to their students regarding their writing assignments.


Nearly half (49%) of the participants reported spending 25-49% of their total assessment time on the assessment of writing, so regardless of their chosen approach, writing does appear to be viewed as important by many of the educator participants.

Research question two.

The second research question, "How do educators evaluate the effectiveness of their evaluation methods for judging the quality of students' writing samples?" was addressed by six different questions from the questionnaire.

Perceived effectiveness of most frequent methods of writing evaluation.

The first question was, "Please rate the method of assessing student writing that you use most frequently (as designated in the previous question) on a scale of 1-4 with 1 being minimally effective and 4 being extremely effective." This question was repeated for each of the three methods of assessing student writing that the educators selected in response to the questions discussed in the previous section. In identifying the level of effectiveness that they believed their most frequently used method of writing assessment held, 51.2% indicated that they believed that method to be effective, and 41.5% chose to designate that method as being extremely effective. Only 7.3% labeled their most frequently used method as being somewhat effective, and no one selected the minimally effective option. When looking at those same responses while isolating the public schools in two different counties, 12.5% of the teachers in County A reported a belief that their most frequently used method of assessing student writing was only somewhat effective while no teachers in County B selected that response (see Table 9).


Table 9
Effectiveness of most frequently used method of assessing writing by county

                   Somewhat effective   Effective    Extremely effective   Total
Public County A    1 (12.5%)            3 (37.5%)    4 (50.0%)             8 (100.0%)
Public County B    0 (0.0%)             8 (44.4%)    10 (55.6%)            18 (100.0%)
Total              1                    11           14                    26

For the second and third choice methods of evaluating student writing, the responses were quite similar although there were more responses in the lower categories for these two methods (see Figure 11). For the methods chosen as being used second most frequently, 39% of the educators identified their method as being extremely effective with 48.8% labeling their choice as being effective. Those rankings were followed by 12.2% of the respondents indicating that their second-choice methods were somewhat effective, but no respondents selected the minimally effective category. Again, a look at the comparison of public school teacher responses by county shows 12.5% of the teachers in County A reporting that their second most chosen method of evaluating student writing was only somewhat effective with an additional 62.5% choosing the effective response. In the responses of County B educators, 100% selected either effective (44.4%) or extremely effective (54.6%) as their response.


Figure 11: Responses to question 17: Please rate the method of assessing student writing that you use second most frequently (as designated in the previous question) on a scale of 1-4 with 1 being minimally effective and 4 being extremely effective.

The third choice method rankings were similar, but this time, 2.5% of the respondents ranked their third choice as being minimally effective, and 12.5% labeled their choice as being somewhat effective. There were still more educators choosing their answers from the upper categories as 50.0% of the respondents selected effective as the descriptor for their third choice, and 35% decided that their third choice was extremely effective. These percentages were similar to those found when comparing responses from one county to the other.

Overall effectiveness of all writing evaluation methods.

Another question, "Please think about ALL of the different methods of evaluation that you use when reviewing student writing. As a whole, how effective do you believe that the method(s) of evaluating writing that you utilize are?" asks educators to think of the big picture with regards to their work on evaluating student writing.


A large majority (68.3%) believe that their methods for evaluating writing are effective. Another 22.0% of the educators ranked their methods as being extremely effective while 9.8% see their methods as being somewhat effective, and no one selected minimally effective as a descriptor for their methods. Table 10 reflects the responses of the educators at the public schools in County A and County B. The majority of the responses fell into the effective category, but each county had respondents who believed that ALL of their methods combined together were only somewhat effective in the evaluation of student writing.

Table 10
Responses by county of teachers' beliefs about the effectiveness of their evaluation methods

                   Somewhat Effective   Effective    Extremely Effective   Total
Public County A    1 (12.5%)            4 (50.0%)    3 (37.5%)             8 (100.0%)
Public County B    1 (5.6%)             14 (77.8%)   3 (16.7%)             18 (100.0%)
Total              2 (7.7%)             18 (69.2%)   6 (23.1%)             26 (100.0%)

A follow-up question asks, "Why do you feel that way?" This was an open-ended response question with 36 responses. After analyzing all parts of all responses, I identified six patterns among the responses. Those patterns were: these methods are not effective, these methods promote growth in my student writers, these methods provide variety for my writers, these methods allow me to meet individual needs, these methods help my students, and these methods allow me to assess their writing daily. Many educators (12 total) indicated that they believed that their methods of writing assessment were effective because they witnessed growth in their students' writing.


They wrote, "They give the children the most opportunity for growth," "I can see growth in my students' writing," and "we see improvement in student writing over the school year." The rest of the responses were close in numbers. For example, six respondents indicated that they had negative feelings about methods that they viewed as being ineffective. They shared, "Only slight differences are noted," and "I think there is no perfect way to score writing." Six respondents also believe that their methods help them to provide variety with their assessment practices, and one of those educators wrote, "I am using a combination…to accurately determine my students' strengths and weaknesses." Five educators indicated that they like utilizing methods that allow them to assess children's writing on a daily basis and said, "I can assess daily if need be," and "we discuss his progress daily." The final two patterns were represented by seven educators. Four of those believed that their methods were effective because they allow them to help their students by allowing them "to know exactly what is expected of them and [to] give them guidance," and three respondents favored their methods because "each individual can be assessed at their particular level." While a variety of responses were given, the largest pattern shows that the teachers in this sample were most concerned with seeing growth in the writing of their students.

Perceived helpfulness of FCAT rubric.

The last question that helped to give more information about how educators evaluate the effectiveness of the methods used to assess the writing of their students was, "How helpful do you feel that the feedback from the FCAT rubric is to your students?"


The majority (68.2%) of the respondents to this question labeled the feedback from the FCAT rubric as being helpful to their students while "somewhat helpful" and "minimally helpful" both garnered 13.6% of the responses. Only 4.5% of the respondents indicated that they believed the feedback from the FCAT rubric to be extremely helpful to their students (see Figure 12). A chi-square analysis was run to see if there was a significant association between the teachers' perceptions of the helpfulness of the FCAT rubric feedback and their ages, years of experience, or grades taught. No significant association was found.

Figure 12: Responses to question 30: How helpful do you feel that the feedback from the FCAT rubric is to your students?

It is, however, interesting to look at the responses to this question when separated by public schools in County A and County B. An examination of Table 11 reveals that only six educators in County A reported utilizing the FCAT rubric, and out of those six, they all believe that the FCAT rubric is helpful to their students. Nearly twice as many educators in County B (11 total) reported using the FCAT rubric, and while five (45.5%) of them believe that the rubric is helpful to the students, five more find that it is only somewhat helpful (27.3%) or minimally helpful (18.2%) to their students.


Table 11
Responses by county of teachers evaluating the effectiveness of feedback from the FCAT rubric

                  Extremely helpful   Helpful      Somewhat helpful   Minimally helpful   Total
Public County A   0 (.0%)             6 (100.0%)   0 (.0%)            0 (.0%)             6 (100.0%)
Public County B   1 (9.1%)            5 (45.5%)    3 (27.3%)          2 (18.2%)           11 (100.0%)
Total             1 (5.9%)            11 (64.7%)   3 (17.6%)          2 (11.8%)           17 (100.0%)

The feelings of the educator participants regarding the effectiveness of their methods for the evaluation of student writing vary from methods perceived as minimally effective to methods perceived as extremely effective. An examination of the responses of the educators when labeling their three most frequently used methods of writing evaluation shows a trend: the number of teachers classifying their chosen methods as either extremely effective or effective declined from the most often used method to the third most used method. At the same time, the number of teachers identifying those most frequently used techniques as minimally or somewhat effective increased from 7.3% with the most frequently used method to 15% with the third most used method. That trend makes sense in that the most frequently used methods of writing assessment should logically be those that the teachers believe are the most effective methods. Supporting that trend were the responses of the teachers when asked how effective all of their writing evaluation methods were when considered together: 90.3% of the educators believed that their combined methods of evaluating writing are effective or extremely effective. It appeared from their responses that at least some of the educators based their judgments of effectiveness on the growth that they saw in the writing of their students.


Overall, the responses indicated that the participants in this study believe their methods of evaluating student writing to be effective.

Research question three.

The final research question addresses another angle of the evaluation process by asking, "What factors impact the evaluation decisions of educators?"

Time spent evaluating individual student papers.

The first question on the questionnaire which addressed this topic was, "Do you spend more time assessing some individual students' papers than others?" A large majority (92.9%) of the respondents do spend more time on some papers than on others. In order to further clarify the factors that cause the educators to spend more time on one paper over another, the next question asked, "What accounts for differences in the amount of time spent on various papers?" This was an open-ended response question. In analyzing the responses, I identified five different patterns that characterized them: needs/skill, time, feedback, type of assignment, and readability. A large number (37) of respondents referred to the needs and/or skills of their students as a deciding factor in whether or not they would need to spend more time on a particular paper. Some of those comments included, "if students have a natural talent for writing…it generally is easier to assess," "some students have challenges when writing," and "some students require more support to be successful." A large group (15) of the respondents expressed a desire to give feedback to their writers because "there are things that need to be corrected and commented on," and "sometimes I just need to suggest an idea for improvement or consideration."


Many (14) of the educators indicated that time played a part in the amount of effort that they could put forth in grading their students' writings. "Sometimes you spend a lot of time trying to determine what exactly the student is trying to say," "one with more challenges takes much longer," and "more time is spent looking for needs with the neediest writers" are only a few of the comments from those educators who were concerned about time. Eleven teachers mentioned that the readability of the students' writing can influence the amount of time spent on a paper. They lamented, "Some papers are harder to read," and mentioned that the ease of reading the handwriting could impact their ability to read the paper and to help the writer move forward. Finally, a smaller group (6) of educators shared that the type of assignment being done could affect the amount of time they spent reading a paper because "Expository essays require a different amount of time than creative and narrative pieces of writing," and the length of the paper, as well as the type of rubric being used, can all be factors when the teacher decides how to approach the evaluation of a particular writing. Teachers seem to be most focused on helping those students with challenges or those who are in need of assistance with specific skills in their writing.

A follow-up question, "How would you characterize those papers you spend more time responding to as contrasted with those that take less time?" required the respondents to think more about their students' writing. This was also an open-ended response question, and I identified four patterns among the responses from the participants. Those patterns showed that the educators were thinking about whether or not the writer was challenged, feedback, whether or not there were aspects of the paper that affected its readability, and whether there were specific skills that the writer needed to work on to improve. The most frequent response (19 references) included references to the writer and his challenges: "I spend more time with my struggling writers' papers to look deeper into their needs," and "some writers need more assistance to improve."


Many (18) educators referenced the students' skill levels with reference to time. One respondent shared, "Usually papers I spend more time on require the students to add more detail, elaboration, mechanics and choose vocabulary words more appropriate for pieces." Similarly, a group (13 total) of respondents was concerned about the amount of feedback needed by certain writers: "Some children need one on one conferencing more frequently than others," and some need "more coaching." Finally, a small group (9 total) of responses pointed out factors like handwriting, spelling errors, and "sloppy, poor grammar" that affect the readability of student papers. While some teachers pinpoint readability issues as the cause of increased time required for grading, the majority of the teachers indicate that they spend more time on those papers that need the most help.

Feelings about the assessment of writing.

Moving away from questions particular to the content of student writings, the next question asked, "How do you feel about assessing student writing?" The respondents were given a scale with four choices from which to choose their response. The majority (56.1%) of educators selected positive as the word to best describe their feelings regarding the assessment of student writing. Another choice was somewhat positive, and that option was selected by 31.7% of the respondents, followed by 4.9% who chose somewhat negative and 7.3% who felt negative about the assessment of student writing (see Figure 13). A chi-square analysis was run, but no significant association was found between the educators' feelings about the assessment of student writing and their ages, grades taught, and years of experience.


Figure 13: Responses to question seven: How do you feel about assessing student writing?

Table 12 shows the distribution of responses when separated by county for the public schools. In County A, 100% of the teachers reported positive or somewhat positive feelings about the evaluation of student writing. In County B, 83.3% of the teachers reported positive or somewhat positive feelings, with 5.6% reporting somewhat negative feelings and an additional 11.1% sharing that they had negative feelings about evaluating student writing.

Table 12
Feelings of teachers with regards to the assessment of writing

                  Positive     Somewhat Positive   Somewhat Negative   Negative    Total
Public County A   6 (75.0%)    2 (25.0%)           0 (.0%)             0 (.0%)     8 (100.0%)
Public County B   8 (44.4%)    7 (38.9%)           1 (5.6%)            2 (11.1%)   18 (100.0%)
Total             14 (53.8%)   9 (34.6%)           1 (3.8%)            2 (7.7%)    26 (100.0%)
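The chi-square tests of association mentioned in this chapter can be computed directly from a contingency table of counts. The sketch below is offered only as an illustration of that computation, run on the county-by-feeling counts from Table 12 using Python's scipy library; it is not the analysis reported in this study, whose tests related responses to age, grade taught, and years of experience rather than to county.

    # Illustrative sketch only: a chi-square test of association on the
    # Table 12 counts (county x reported feeling). The study's own tests used
    # age, grade taught, and years of experience, which are not shown here.
    from scipy.stats import chi2_contingency

    # Rows: Public County A, Public County B
    # Columns: Positive, Somewhat Positive, Somewhat Negative, Negative
    observed = [
        [6, 2, 0, 0],   # County A, n = 8
        [8, 7, 1, 2],   # County B, n = 18
    ]

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
    # With cell counts this small, several expected frequencies fall below 5,
    # so an exact test (e.g., Fisher's) would ordinarily be preferred.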


Source of evaluation methods.

Another question related to the factors that impact the evaluation decisions made by teachers was, "Where did you learn the different assessment methods that you use to assess student writing?" A large number (48.8%) of the respondents selected school or district based training as their answer. After that, 22% indicated that they learned their assessment methods from college or university courses, and another 12.2% were educated by their reading coaches or literacy specialists. A few (2.4%) shared that they learned their methods from peer teachers, while 14.6% chose the "other" response and shared that they learned their methods from "a combination of college, public school trainings, and peer teachers," "college courses as well as from mentors during previous years of teaching," and "after thirty years of teaching, you tend to accumulate many methods from all of these sources and more." Nearly half of the respondents learned their methods of assessment from their school or county trainings.

Figure 14: Responses to question 10: Where did you learn the different assessment methods that you use to assess student writing?


Mandated methods of evaluation.

I wanted to find out if any of the factors influencing the decisions of the respondents were beyond their control, so the next question asked, "Are you mandated by your school to use any of the three methods of assessment that you use most frequently?" A large number (58.5%) of the educators are required by their school to use at least one of the methods of assessment that they use most frequently in their classroom. All eight of the public school teachers from County A reported that they are mandated to use at least one of the three methods of assessment for student writing that they use most often (see Table 13). In County B, 77.8% of the teachers reported the same, but 22.2% of those teachers shared that they are not mandated to use any of their most often used methods for evaluating student writing.

Table 13
Rates of mandated assessments of writing

                  Yes           No           Total
Public County A   8 (100.0%)    0 (.0%)      8 (100.0%)
Public County B   14 (77.8%)    4 (22.2%)    18 (100.0%)
Total             22 (84.6%)    4 (15.4%)    26 (100.0%)

The next question asked for more information with, "Which method(s) of evaluating writing are you mandated to use?" This was an open-ended response question in order to provide the respondents with the opportunity to share any mandated assessments that they may have had. The responses fell into six categories. Rubrics were the most often mentioned mandated method, with 19 references to having to use a specific rubric. Ten of those specifically named the FCAT rubric, and a county-designed rubric, the Six Traits rubric, and a school-designed rubric were also mentioned.


The other mandated forms of evaluation were conferences (five references), portfolios (two references), checklists (two references), observations (three references), and holistic scoring with anchor papers (four references). The most popular response, then, was that the FCAT rubric was the most frequently mandated method of writing evaluation.

Reasons for choosing FCAT rubric.

In order to determine whether there were reasons beyond a mandate for teachers to select and use the FCAT rubric in the evaluation of their students' writing, I asked, "Why do you choose to use the FCAT rubric?" This was an open-ended response question, and I was able to identify four patterns in the responses provided. The most prevalent pattern (mentioned by eight respondents) was that of using the rubric because it was mandated. These responses included, "mandated by writing program in county," "mandated," and "we are forced to." The next pattern of responses, seen in the responses of six educators, was one of the teachers believing that the FCAT rubric was good to use because they believed that it gave them a quick view of the skills of their students. They shared, "it gives a snapshot of what the child would achieve on the state test," "it is a good means to look at writing strengths and weaknesses," and "with expository essays, it is a quick glance at what the expectations are for 4th graders." A few others (three educators) viewed the FCAT rubric use as allowing their students to practice for the test. They commented, "to prepare students," and "I use the FCAT rubric in order to prepare students and teachers." Finally, one respondent attributed her choice of using the FCAT rubric to the fact that the rubric is holistic.


The majority of these responses reflected reasons outside of the teachers' personal preferences for the use of the FCAT rubric in the evaluation of student writing.

While there were only two instances of statistical significance (related to draft writing and the practice of giving feedback) found in the questionnaire data, the shared responses of the participants gave a glimpse into their beliefs with regards to the evaluation of student writing. Examining the responses with respect to the demographics of the participants also helped me to learn more about their evaluation practices.

The data showed that there were many factors that impacted the evaluation decisions of the participant educators. A positive finding was that over half of the respondents had a positive feeling about the assessment of student writing. The participants also noted that the needs and skills of their students had a great deal of influence on their choices of evaluation methods as well as on the time that they spent assessing the student writings. A final point of interest was that nearly half (48.8%) of the participants obtained their methods of writing evaluation from school or district trainings, with an additional 22% finding their methods in college courses. While many factors were mentioned by the participant group, their feelings about evaluation, the needs and skills of their students, and the source of their evaluation methods were a few of the notable factors found in their responses on the questionnaire.

All of the responses to the questions on the questionnaire helped me to better understand the beliefs of the teachers in this sample with regards to the evaluation of student writing. In order to achieve a more intimate, working knowledge of how those beliefs transfer to practice, I conducted interviews and an observation, and I also collected student writing samples. The results of those endeavors are analyzed here.


Phases two-four: Interviews, observation, and student writing samples.

Maple Court Prep (MCP) is a private school in the southeastern United States. With a student population of approximately 500 students ranging in age from three to fifteen, the school employs over 100 teachers and staff members. The school is part of the International Baccalaureate Program for the middle years, and the administration and faculty pride themselves on staying current with educational research and with the research-based methods of instruction that are implemented in the classrooms at all grade levels. MCP boasts of its ability to teach the whole child, and to that end, students at the school are exposed to a large number of special classes on a regular basis, participate in Brain Gym activities during the school day, and have the opportunity to take their learning off-site many times during the school year (Headmaster, personal communication, April 7, 2010). Throughout the school year, MCP is visited by hundreds of teachers from around the world who come to learn about the different educational methods in place at the school.

MCP has been attending the writers' conference for several years and is always one of the larger student groups in attendance. In fact, because the headmaster wished that she could bring her whole student body to the conference, she started an annual authors' conference at MCP. In this particular year, in order to select student participants to attend the writers' conference at the university, the headmaster had a meeting with representatives from each grade level that would be sending students. They decided that they would send three students from each grade level as representatives for the school to the conference. The team agreed that they would continue their practice from the previous years and have outside judges do a blind reading and selection of the winning student writings.


When asked why her school chooses to use that method of selecting participants, the headmaster replied:

I think it's fair and equal, and no one can say, "What about me?" or anything else. It's blind reading. I thought it was an excellent process, so I would recommend it for everybody if they can…it's hard for everybody…we have a lot of contacts, and a lot of people who are happy to help us with things because of all we do for others. It's synergy. (Personal communication, April 7, 2010)

I asked if she could share more information about the judges, and she said that one woman was a former elementary school teacher who runs a program at a local university and trains teachers in public schools. Another judge, a male, was a former professor at an Ivy League university who then moved on to become a dean of a university. The judges, according to the headmaster, were impressed, and one said that she was "blown away" by the quality of writing presented by the students.

A look at the three rubrics compiled by the teachers at MCP gave me an idea of the expectations held for their student writers. The three separate rubrics were used for early primary (first and second grade), intermediate, and sixth grade, but all three shared common features in that the main categories of assessment were the same for the intermediate and middle school rubrics and similar to the early primary one. Focus, support/elaboration, organization, and presentation were the larger categories, which were then broken down into more specific guidelines for the judges. For example, under the organization section on the middle school rubric, one of the guidelines was, "Arrangement of words in sentences is varied." Under the same section on the intermediate rubric, the guideline read, "Dialogue is realistic and illuminates story elements." Finally, on the early primary rubric, the judges were asked to see if the writing "shows sentence fluency."


I was given the winning writings for one class of second graders (three writings), all of the fifth graders (six writings), and all of the sixth graders (four writings). In order to share a representation of those writings, I used all of the second grade writings and then randomly selected three each of the fifth and sixth grade writings so that there would be an equal number of writings representing each grade. I then selected the three most interesting excerpts from each of the writings to share on Table 14. The interest of those excerpts was determined by the use of exclamations, a presence of student voice, complex sentence structure, or engaging ideas. I did not complete a formal grading or scoring of the papers because of a lack of a basis for comparison.

Table 14
Sample sentences from conference participant writings

Second grade:
"I tried and tried to stop but I couldn't. So then…bam! Ah, I'm stuck. I got on the bench. I tried to get out and I did. Yah, but then I noticed that I was flat!"
"I cried, cried, and cried some more until everybody came in and asked what happened to me. I said, 'I do not know. Why?' After we went home, I told my mom that I was watching TV when the ceiling fell on me. 'Oh, honey! I'm so sorry.' said mom. 'Just look at you…you're FLAT!'"
"'Mom,' I said. 'I'm flat! I'm going to my room.' I slid under the door. I guess it was pretty impressive. The very next day I was at the surfboard pool. I know I'll be the surfboard. This is fun. I thought. I didn't know being a surfboard could be so much fun!"

Fifth grade:
"He picked Sarah up in a fireman's carry and whisked her out to the car with Rachell following close on his heels."
"Thump. Thump. Thump. The swollen marigold basketball hit the thick pavement. Melanie leaped up and threw the basketball through the tattered net with its jet-black base. Melanie was a tall girl with thick, mahogany colored hair. Today, she wore her hair in two braids resting on her back."
"A couple of years ago, we were probably the happiest family in all of London. Together we were like good coffee, rich, warm, and strong. Business was booming, birds were chirping with glee, and when we were feeling blue it seemed that magic rays of sunshine would shine down on us."

Sixth grade:
"The town all rushed at me as if they were going to run me over. Moving was difficult for me due to all of the energy I had exerted."
"That was it. She was gone. Gone forever, and there was nothing I could do. It felt as if my heart had been dropped to the ground and all hope lost. There was no way I could go on without her in my life. But I had to, and I had to keep going. She was my best friend, and we were like two peas in a pod."
"The intensity of the spotlight envelops my soul, all eyes bewitchingly follow my every move, electricity reverberates through my veins…I am the god of the theatre and I majestically shimmy with poise and scoot across the stage. Boy that was a funky dance! (Cue theme song Dora the Explorer) Ugh! If only Ava hadn't texted me at that moment; my fantasy was getting pretty bizarre."

During my interview with the headmaster, I asked her to share her beliefs about writing with me. Knowing that I was also going to interview one of her teachers, I was curious to find out if their beliefs about writing would be similar. The headmaster shared:

One of the beautiful things about writing is I think you can teach about life. A lot of things kids do, they finish, they're done, and they don't want to redo it. Well, when you get a job, if you have a builder, and they need to fix something, they better come back and fix it. You grow up in school thinking what? I did it already; I'm done. I don't have to touch it. But the writing process has you do a draft. First, you get your ideas. Then you do a draft. Then you edit it. Then other people edit it. Then you read it to people. You hear how it sounds. It's not a one-step process. It's a developmental process. It's one of the best ways to teach a good work ethic…that something can always get better. That's part of the process. (Personal communication, March 4, 2010)

When I asked one of her fifth grade teachers, Mrs. Tenley, what she enjoyed about teaching writing, she replied:

I think writing for some children is such a natural way of expression that the joy that I watch in their faces and their choice of words and the ability to put thoughts together is just incredible. It's inspiring for others…I think my most enjoyable thing about teaching writing is…if I enjoy teaching it…when we teach small sections at a time because that way, the kids can practice it, and they actually get it. (Personal communication, April 7, 2010)


While the two responses did not have a lot of commonalities, because the headmaster spoke about the process of writing while the teacher addressed the feelings involved with teaching writing, they both spoke positively about writing and seemed to believe that their students would be successful if they, as educators, were effective in teaching the students.

After speaking with this teacher, I wanted to go further than simply hearing about her beliefs and practices during the teaching and assessing of writing. I wanted to see her in action. She and her co-teacher, Mrs. Drake, agreed to let me observe them during a writing lesson with their fifth grade students. The lesson was a forty-minute lesson entitled, "Writer's secret…What is a paragraph?" I was able to see the last few minutes of the lesson that finished with one group of students upon my arrival. The students learned about a 3.8 paragraph and were then given six slips of paper with one sentence printed on each slip. The students needed to put those slips in order based on which one was the main idea, which ones were the details, and so on. As the teacher revealed the correct order of the sentences on the SmartBoard, the students made sure that their strips matched that order and then glued them onto a sheet of paper. After the lesson was over, Mrs. Tenley shared with me that I had just witnessed the conclusion of a lesson with the same topic as the one that I was there to observe but that the lessons were differentiated according to the needs of the students in each group.

While I waited for the groups of students to change rooms and to prepare for their lesson, I took a few minutes to look at my surroundings. The classroom had five round tables, and each table had four chairs. In the center of each table was a mini milk crate filled with pencils, index cards, and a table number. Those items sat atop a wire basket filled with notebook paper.


Mrs. Drake went around and placed sheets for the lesson in the basket before the kids entered the room. Under each table was a rectangular storage cube used to hold the kids' notebooks and other larger supplies. Across the front of the room, where the wall met the ceiling, were colored cutouts of keys labeled with the words "flexibility," "integrity," "ownership," "failure leads to success," "commitment," "balance," "this is it," and "speak with good purpose." Across the left side of the room were learner profiles: caring, reflective, risk-takers, inquirers, knowledgeable, open minded, thinkers, principled, communicators, and balanced. At the back of the room was a list of attitudes, which I was unable to read from my vantage point. Hanging from the ceiling were four fluorescent colored pennants with ribbons hanging from them. On the whiteboard at the front (peeking out from either side of the mounted SmartBoard) were the day's schedule and the class jobs. Nearly everywhere the students looked, they had reminders about how to act and treat others. These reminders were presented in a friendly and colorful way while, at the same time, giving the message in a more mature way than you might find in a primary classroom.

When the teachers were ready to start, they called for attention at the SmartBoard, where the writer's secret handout was being projected via the ELMO device. A small, yellow finger pointer was used to show the students where the teachers were on the handout. The two teachers team-taught the lesson. Mrs. Drake stayed closer to the ELMO to move the pointer while Mrs. Tenley circulated and checked on students as she moved and added to the lesson from wherever she was when the moment presented itself. Mrs. Drake shared the following with the students:


A good paragraph represents a complete and interesting picture. All the sentences work together to complete the picture for the reader. So even though in your minds you might be thinking, "Wow, a paragraph is pretty short," if it's done well, the picture can be complete. It all depends on the way that you structure it and the details and support you put in it. (Observation, September 21, 2010)

Throughout the lesson, the students appeared to be mostly attentive and taking notes. Mrs. Tenley continued to circulate and would check on and assist students when notes seemed to be lacking on students' papers. Meanwhile, she continued to add to the lesson from her current place in the room, with the teachers speaking together seamlessly. Together, the teachers guided the students through a sample 3.8 paragraph while identifying the three parts (topic, body, and closing) that the students must include in their own paragraphs. The teachers discussed the varying characteristics of paragraphs and reminded the students that a paragraph could be long or short, and that "it can have descriptive words, it can have complex sentences, it can have transition words, figurative language, and similes and metaphors."

Next, it was time for the students to write their own 3.8 paragraphs. In order to get them started on writing, the teachers asked the students to take out their heart maps, which are construction paper hearts glued onto a sheet of paper and then covered with all of the ideas and things that the students would like to write about at some point during the school year. The following directions were given: "What we're going to ask is that you actually take that topic and turn it into a 3.8 paragraph so we can see how you are as a writer. It's short enough for us to be able to read…kind of where we can take you as fifth graders in this particular writing group." The teachers put a sheet of notebook paper on the ELMO and together (speaking back-and-forth) demonstrated how they wanted to use that paper while, at the same time, reminding the students that they could use a portion of the paper as a planning area.


Mrs. Drake remained at the ELMO in order to demonstrate on that paper what they were asking the students to do. While the students worked, both teachers walked around to assist them as needed.

While the kids worked, soft, classical music played in the background. The students continued to work for approximately 15 minutes. During that time, all of the students were writing, with periodic breaks to refer to the top of their papers (where they'd drawn their organizational maps) and to reread what they had already written. The pencils were moving, and heads were down. When one girl finished, she raised her hand and asked Mrs. Tenley to read over her writing. After the first girl finished writing, she pulled out a novel to read. Another girl quickly followed with her own book as she finished writing. Throughout the whole writing time, the teachers continued circulating, talking with students, reading over different writings, and occasionally sharing writing with one another.

When the fifteen-minute timeframe was completed, Mrs. Tenley directed those who were writing to keep going with their writing, but she also asked if there was anyone who wanted to share their writing. She said, "We want to see what you can come up with for coaching or the compliments as needed." A couple of students read their work, and then I could see from the daily schedule on the board that it was time for them to move to another activity. After the last student shared her writing, Mrs. Drake said:

You had a lot of complex sentences. I could tell for how long they were and you needing to have a pause in breathing, which was good. Other people kind of get straight to the point in a shorter sentence style. Either way, we are excited to see what you wrote, and we are excited to give you some comments. So, could you turn those into my basket? (Observation, September 21, 2010)

Below is the paragraph written by the girl with the complex sentences.


All-State Chorus is amazing. I tried out last year, made it last year, and I just auditioned this year. First, when I tried out last year I was insanly nervous because it was my first time ever! Then I was relieved when I found out that I had made it. Next, when I did All-State, it was hard, really hard. I had to memorize six or seven songs, but, it definitely payed of because in the end we sounded great! Finally, I started all over again, and I am back to the nervousness after trying out and having to wait to find out. In the end, I am really happy that I have the chance to do All-State Chorus because it is an amazing experience. (Student writing, September 21, 2010)

After the lesson, I asked the teachers to tell me if there was anything that they would have done differently if they were to teach the lesson again. Mrs. Tenley shared, "I believe that we could have checked for understanding to be sure that all understood the 3.8 paragraph," and Mrs. Drake stated, "I would try to build in a comparison so that students could see the difference between narrative and expository for this particular 3.8 paragraph assignment."

The teachers allowed me to have anonymous copies of the students' 3.8 paragraphs, which they later checked for understanding of the 3.8 paragraph. According to Mrs. Tenley, the teachers sit together and review all the writings as a team. They discuss each piece together. In this case, no grade was assigned to the draft. The class did, however, complete a subsequent 3.8 paragraph for a grade, and the teachers again allowed me to have anonymous copies of those writings. On the table below, there is a breakdown of the different comments that were made on the individual writings, with the alphabetical names that I assigned them in order to identify matching student writings.


Table 15
Teacher comments on student 3.8 paragraphs
(Each entry lists the student's assigned name and gender, the teachers' comments on the observed-lesson paragraph, the comments and rubric comments on the follow-up lesson paragraph, and the follow-up lesson points/percent.)

Alan (male): "You followed the format. Well done!" Inserted the word "and." Misspelled "choose." "Smooth closure w/ morning routines." 13/93

Barbara (female): "You followed the format. Congratulations!" Placed a reminder to indent the paragraph. Circled three misspelled words. Replaced "to" with "too." Misspelled "too." Indicated that the writer's restating of the topic in the conclusion was basic. 14/100

Christa (female): Made a note for writer to indent. Added the letter "n" to a word where it was missing. Followed format. Made a note for writer to indent. "OK I got it." Reword "thought"? N/A. 14/100

Darla (female): "I can feel the excitement. Followed format!" Circled one misspelled word. Noted that "especially" was misspelled. Noted that "especially" was misspelled. 14/100

Ed (male): "Avoid being redundant…use thesaurus to find synonyms for cool." Underlined the six times that he used "cool" in his paragraph. Showed writer that he needed to indent. Gave some letters a triple underlining to show that they should be capitalized. Circled the word "capitals" to indicate that he was missing some capital letters. Commented that the three ideas in the body of his paragraph were basic. 11/79

Fiona (female): "I love your enthusiasm for softball. Nice work." Circled misspelled word. Drew an arrow to show that the next-to-last sentence should have been the last sentence and wrote, "Closing sentence on front." "We discussed that doing hair can be a support detail for getting dressed." "Your spelling was correct." "Your word choice is terrific!" Made a note that her topic sentence was cute. Made a note that her ending was clever. 14/100

Gillian (female): Followed format. Enjoyed reading. N/A. Noted that "Wednesday," "waffles," and "off" were misspelled. 14/100

Henry (male): N/A. Circled the word "usually" twice to indicate that it was misspelled. "I would love to know what you discuss on the bus." Noted that "usually" was misspelled. 14/100

Isabel (female): "Followed format! Enjoyed reading." Made a note that indention was needed at beginning of paragraph. "Your last sentence brought the paragraph to a close. The topic needed to be restated. Remember to indent." A mark was placed in the spelling box, as she had no misspelled words. 12/86

Jamie (male): "Great details! We need to work on sentence structure to avoid run-on sentences. See semicolon techniques." Teacher added three semicolons throughout the paper. She also added the word "when" where it was forgotten. Triple underlined a letter that needed to be capitalized. "Yeah!" (written in the topic sentence section of the rubric). "Very clever!" 13/93

Ken (male): "Followed format!" Noted that paragraph needed to be indented. "This is great!" Added the letter "s" at the end of the word "help" where needed. Enjoyed reading. 14/100

Lisa (female): "Enjoyed! A semi-colon helps combine 2 sentences." Added a semicolon where needed. Noted that two words were misspelled. "I love the question technique." Noted that "definitely" and "Wednesday" were misspelled. 14/100

Mike (male): "Followed format!" Noted that one word was misspelled. Noted that "alert" was misspelled. 13/93

Nolan (male): "You need to add 1 additional sentence and support plus a conclusion." Indicated that the paragraph needed to be indented. Noted that the writer used brushing teeth as a supporting detail twice. Added ending punctuation where needed. N/A. 8/57

Opal (female): "Cute narrative for future use. We'll continue to work on 3.8 format." "Opal, I enjoyed! We need to work on format." Semicolon was added. Triple underlined a letter to show that it should be capitalized. N/A. N/A

Polly (female): "Details are fun! We need to work on sentence structure to avoid run-on sentences. See semicolon techniques." Added three semicolons and then showed her how to break one other run-on into two sentences with a period and capital letter. "Be careful to avoid overuse of the words so and then." Indicated that the paragraph needed to be indented. Inserted seven commas and one semicolon. Deleted several words and provided alternatives. Showed that "up" and "stairs" should be combined as one word. "Basic sentence." Arrow notation. Noted that "breakfast" was misspelled and that she should check her paper for additional circled misspellings. 12/86

Quentin (male): "Count sentences to make sure it is a 3.8 paragraph." Indicated that the paragraph needed to be indented. Combined one sentence with the word "and." Indicated that the paragraph needed to be indented. Noted that the conclusion was basic and that he needed to close. 11/79

Rita (female): "Cute! See semicolon below. You followed the format. Congratulations!" One semicolon was added to correct a run-on sentence. Indicated with triple underlining that "Wednesday" should be capitalized. Noted that the ending was abrupt. 13/93

Sarah (female): "Good details! We need to work on sentence structure to combine some and avoid run-ons. See semicolon techniques." One semicolon was added to correct a run-on sentence. Indicated that the paragraph should be indented. Noted that "usually" was misspelled. Triple underlined a letter to show that it should be capitalized. Added the word "and" where needed. "Correct use of punctuation would have helped. 4 ideas: getting dressed, eating breakfast, playing with dog, packing backpack. 8 sentences. You have 10 sentences." 13/93

Tara (female): "Followed format." Indicated that paragraph should be indented. Corrected the word "their" to "they're." Added the word "want" where needed. Corrected spelling from "wrighting" to "writing." Indicated (by circling with "sp") that twelve words were misspelled. Indicated that the paragraph needed to be indented. Added a comma and semicolon where needed. "Correct use of punctuation would have helped." Noted some of the misspelled words. Wrote "Cute!" in the topic sentence area of the rubric. 12/86

A review of the comments on this table gives an overview of the type of feedback that the students received on their two writing attempts involving a 3.8 paragraph. The first one, resulting from the lesson that I observed, was simply a draft and was used to check for the students' understanding of the 3.8 paragraph format. The most prevalent comment on the drafts was, "Followed format!" On each draft, the teachers circled all of the end punctuation marks as a way to track the number of sentences written, and they circled any misspelled words. After reviewing the comments on the drafts, I noted that they fell into five patterns: corrections, format reminders, compliments, spelling/punctuation corrections or notes, and tips.


The compliments (18 total) were the most liberally used comments on the student papers and included "good details," "congratulations," and "details are fun." The next most repeated pattern (16 total) found among the comments was related to the format of the writings. The teachers wrote, "followed format," "should be indented," and "we need to work on sentence structure." The teachers also made corrections (5 total) by adding missing letters and/or words and correcting misused homonyms. The same frequency (each a total of 6) was noted for both spelling and punctuation corrections and tips to the writer. The tips included comments like "avoid redundancy" and "see semicolon techniques" to avoid run-ons.

A week after the writing of those drafts, the students wrote another 3.8 paragraph. This time, it was not a topic of their choice. Instead, the teachers asked them to write about their morning routines. These paragraphs were graded by the teachers with a rubric, and each student (with the exception of one) received a numerical score. Eight students received a score of fourteen, five had a score of thirteen, three scored twelve points, two received eleven points, one student had a score of eight points, and one student who did not follow the 3.8 format (she wrote a narrative) received only comments on her paper rather than being assigned a grade for her efforts. The comments and markings on these writings fell into the same patterns noted on the drafts: corrections, format reminders, compliments, spelling/punctuation corrections or notes, and tips. On these writings, the most repeated comments/corrections (24 total) were related to spelling and punctuation, with misspelled words being circled and/or corrected, punctuation being corrected, and letters that needed to be capitalized being triple underlined. The next most repeated comments (14 total) were compliments, which included "cute," "very clever," "yeah," and the drawing of happy faces.


Twelve comments were made relating to the format of the writing, with eight of those comments being reminders to indent the paragraph. Those comments also included, "you need eight sentences and have ten," and "you needed to close." The teachers also provided eight tips, such as avoiding overuse of the words "so" and "then." Finally, twelve corrections were made with regards to misspelled words, missing words, missing capitalization, or missing semicolons. The patterns of comments from the drafts and final writings are discussed further in chapter five.

I was curious to find out if the teachers' beliefs about writing that they had shared on the questionnaires were a match to their actual evaluation practices, so I referred back to the questionnaire responses of the two fifth grade co-teachers, Mrs. Drake and Mrs. Tenley. Mrs. Drake's response to the question regarding her thoughts on the purpose of writing assessment was:

The purposes of assessing writing are varied. Sometimes it's to evaluate the piece as a whole. Sometimes it's to evaluate one trait of the piece. Sometimes it's to see if revision recommendations were made. Sometimes it's to see if editing skills are progressing. Sometimes it's to see if grammar and mechanics rules are being followed. Sometimes it's to check to see if the form of the writing is being followed. Sometimes it's to check the flow of the writing. (Questionnaire response, February 17, 2010)

Mrs. Tenley's response to the same question was, "Writing assessments should be an ongoing process throughout the year. To me they measure a student's ability to demonstrate different writing skills at that time" (Questionnaire response, February 28, 2010).

Because the teachers work together on a daily basis, my expectation was that their responses on the questionnaire related to classroom practices would be similar. However, that was not always the case. For example, when asked what percentage of their time was spent assessing student writing, Mrs. Tenley said 74-50% while Mrs. Drake selected 49-25%.


Another area where their responses differed was on the question asking them what the most important aspect of writing is. Mrs. Tenley wrote, "All of the above…we try to start with one area and then move into a new area of assessment. At the end, we are doing it all because it has been taught, re-taught or reviewed," while Mrs. Drake selected organization.

Two other areas of responses showed differences between the responses of the two teachers. On the section of the questionnaire that listed eleven different types of writing evaluation and then asked the teachers to indicate how frequently they used those methods, Mrs. Drake indicated that she frequently uses checklists, teacher conferences, holistic scoring, and rubrics, that she once in a while uses peer conferences, portfolios, observations, and primary traits scoring, and that she rarely, if ever, utilizes the FCAT scoring rubric or self assessment. A review of Mrs. Tenley's responses for the same question reveals that she believes that she almost always uses checklists, observations, rubrics, and primary traits scoring, that she frequently uses teacher conferences, portfolios, the FCAT scoring rubric, and self assessment, and that she uses peer conferences and holistic scoring once in a while. The last area of disagreement between the responses of the teachers was in response to the question, "Please think about ALL the different methods of evaluation that you use when reviewing student writing. As a whole, how effective do you believe that the method(s) of evaluating writing that you utilize are?" Mrs. Tenley replied "extremely effective" and provided the following reasoning:


There are many components to assessing writing. I strongly believe it takes many approaches to have students progress and become confident writers. I believe this is why we struggle as teachers because in order to meet each child's individual needs it takes time. Finding the balance and making sure writing occurs daily in our classrooms is sometimes hard for teachers. That is why we write in all subject areas. (Questionnaire response, February 28, 2010)

In response to the question asking her to rate the effectiveness of all of her methods of evaluation for student writing, Mrs. Drake gave the response, "Effective," and she followed that response with, "We strive to encourage our students as writers to stretch and reach goals they have set for themselves and we have set for them. Using these methods has helped us meet their needs" (Questionnaire response, February 17, 2010).

In their interviews, the two fifth grade team teachers and the headmaster all agreed that writing was "part of everything we do" (Headmaster, personal communication, March 4, 2010) and that "it's okay to have fun with writing" (Mrs. Drake, personal communication, September 23, 2010). It was also agreed that it is necessary for teachers to model writing for their students, that teachers need continuing education in order to build their own skills, and that a variety of different types of writing, teaching, and evaluation are necessary in order to be effective teachers and evaluators of writing. The following thoughts give a good representation of the beliefs of the teachers and headmaster at MCP.

When I was growing up, the focus on writing was entirely different. Formal writing instruction began more in middle school than in elementary. The focus of writing at an early age was more on the look of literacy in a paragraph or being able to respond to a question correctly. My teaching is very different. Our goal is to ensure that students master the national and state standards of writing. We try to take each child and develop a love for writing and an understanding and appreciation of how an author can use words in a variety of ways to engage the reader. (Mrs. Tenley, personal communication, September 27, 2010)


I will laugh at myself through writing and sharing and I want my students to be able to do the same. I want them to learn to use writing as a tool for reflection and to realize we are all human and it's okay to have fun with writing. I want the students to look forward to writing time rather than see it as a daunting task. (Mrs. Drake, personal communication, September 23, 2010)

I think writing is like anything: if you have a talent for it, you're gonna love it; you're gonna want to do it. If writing is something that is a challenge for you, then it takes a lot more stretching for the teachers to do things: different types of writings, different topics. Lots of excitement, lots of reading to each other. Lots of variety, like we do with everything that we do, would be key to doing that. It's just part of what's integrated into what we do. You start small; you build big. (Headmaster, personal communication, March 4, 2010)

The experience of talking with and observing the participating teachers at MCP allowed me to see the live representation of the data shared on the questionnaire. The ability to see the teachers' beliefs regarding the evaluation of student writing come to life gave a new face to the data. Additionally, having the participants complete the questionnaire, conduct a lesson for observation, sit for an interview, and share their graded student writing samples allowed for triangulation to be achieved during analysis when the intersections between all four phases of data collection were recognized.

Categorical Analysis

In order to better synthesize the results previously discussed, the remainder of this chapter will examine the patterns and categories that were revealed during the analysis process. Those patterns and categories form threads that wind through all four phases of data collection and which work together to comprise a full picture of the beliefs about the evaluation of student writing as shared by the educator participants in this study. Triangulation of the data is presented in the analysis of each category as pieces of data are brought together in agreement from each of the phases of data collection in order to comprise the overall picture of educators' beliefs about the evaluation of writing. Through the course of analysis, five separate categories of responses were identified: instruction and planning, student skills, growth and development, feedback, and limitations.


The underlying patterns within each of those categories can be viewed on Table 16 and will be referenced throughout this chapter.

Table 16
Visual representation of the categories and patterns found across all responses
(Sources of data: Q = questionnaire, I = interview, O = observation, S = student writing samples)

Instruction/Planning — related participant comments:
"students' performance in relation to the lesson objectives" (Q); "direct future lessons" (Q); using assessment results to guide instructional decisions (Q); assessment results help determine what craft/trait to teach next (Q); "these methods allow me to meet individual needs" (Q); "I can assess daily if need be" (Q); these methods allow me to assess their writing daily (Q); "we discuss his progress daily" (Q); a variety of writing and evaluation methods needed (I); "each individual can be assessed at their particular level" (Q); teachers need continuing education to improve evaluation skills (I); "And then what we're gonna ask is that you actually take that topic and turn it into a 3.8 paragraph so we can see how you are as a writer. It's short enough for us to be able to read kind of where we can take you as fifth graders in this particular writing group." (O); beginning with a draft writing lets teachers know what to teach for the final (S)

Student skills — related participant comments:
assessment of students' writing skills (Q); revision (Q); "this aspect greatly impacts the readability of the writing" (Q); grammar (Q); "this aspect is vital to the meaning of the paper" (Q); mechanics (Q); "other aspects come naturally with time and can be done later" (Q); different writing traits (Q); "this aspect is especially difficult to master" (Q); "it needs to flow" (Q); "this aspect is required to be taught by our standards or curriculum" (Q); "students need to clearly state their thoughts" (Q); "all aspects are equally important" (Q); "the essence of the message" (Q); needs/skill (Q); "the thing that helps their writing to make sense" (Q); whether or not there were aspects of the paper that affected its readability (Q); "see semicolon techniques" to avoid run-ons (S); if there were specific skills that the writer needed to work on to improve (Q); "they measure a student's ability to demonstrate different writing skills at that time" (Q); gives a quick view of skills (Q); teachers think it is important for students to strive to revise their initial drafts instead of just wanting to be done the first time (I); good practice for standardized test (Q); "I think compliments other than coaching…and you had a lot of complex sentences, I could tell, for how long they were and you needing to have a pause in breathing which was good. Other people kind of get straight to the point in a shorter sentence style." (O); it is easier to stretch/evaluate talented writers (I); "Fix it on the next line. It works. It's a draft." (O); revision is a necessary skill to teach and assess (I); "We circle words that we're uncertain about and we put sp at the top. That sp lets the teacher know, 'Ooh, you were unsure. A dictionary when we edit and revise would be helpful for you.' When we also look at your writing, we circle words and put sp just the same to coach you and let you know, 'Oh, you needed to know that this word needs fixing when you make your final copy.'" (O); spelling and grammar can be addressed on student writings (S)

Growth and development — related participant comments:
monitoring the progress and growth in student writing (Q); strength and weakness (Q); these methods promote growth in my student writers (Q); establishing if the students are on, above, and below levels with respect to their grade/age-appropriate expectations (Q); these methods help my students (Q); "give the children the most opportunity for growth" (Q); whether or not the writer was challenged (Q); "I can see growth in my students' writing" (Q); methods encouraging reflection improve writing (I); "we see improvement in student writing over the school year" (Q); progressive writings within one format can show growth (S); "The most important part would be able to take something, read it. No matter what the context subject is…be able to reflect on it." (I); "It's really great when we evaluate you as writers to see the date on which your writing appears. We can go back and say, 'Wow, look how much progress so-and-so has made since their initial 3.8 paragraph!'" (O)

Feedback — related participant comments:
using assessment results to guide feedback for the writers (Q); help me to know my students' strengths and weaknesses the best (Q); these methods provide variety for my writers (Q); "Sometimes I just need to suggest an idea for improvement or consideration." (Q); peer editing is a useful evaluation method (I); "You edit it. Then other people edit it." (I); feedback on drafts and final writings is helpful (S); "Either way, we are excited to see what you wrote, and we are excited to give you some comments." (O); "We need to be better at the conference." (I)

Limitations — related participant comments:
these methods are not effective (Q); "It is easier and faster" (Q); time (Q); it is mandated (Q); type of assignment (Q); "Only slight differences are noted" (Q); mandated methods (Q); "I think there is no perfect way to score writing." (Q); "Is there one right rubric that we could entertain and use?" (I); "Anything you can send me…that would be wonderful because that is really…it [evaluation of writing] is something we need to work on as a school." (I)

Instruction and Planning

Within the first category, instruction and planning, I identified seven patterns of responses (see Table 16) that were related to the ways in which teachers utilize their methods of evaluating writing to help guide their instructional decisions. Data to support the formation of this category was found across all four phases of data collection. For example, the comments on Table 16 show that teachers shared the usefulness of their chosen methods of evaluation in helping them to make important instructional decisions. They indicated that methods of evaluating writing were useful when those methods helped them to see which writer's crafts future lessons needed to focus on, and that they liked being able to assess writing frequently for this purpose.


On the questionnaire, four questions supported the patterns identified in this category. The first one, "How often do you assess student writing?" showed that the largest group of teachers (30.2%) chose to assess their students' writing at least once a week. Based on the other data discussed here, it can be inferred that one of the reasons for a more frequent rate of assessment would be to help teachers gauge their students' needs and to allow them to plan their instruction accordingly. On another question, "Do you ever have assignments in which your students write more than one draft for you?" an overwhelming 82.9% of the teachers reported that they do have students write more than one draft on some assignments. It is quite likely that the choice to do so is related to the teachers' ability to plan their lessons based on the needs of the students as seen on the draft writings in order to increase student success on the final writings. As represented by the array of responses when asked to mark how frequently they used all of the different evaluation methods on the list, the educator participants in this study rely on a wide variety of evaluation methods in the assessment of their students' writing. That reliance provides those teachers with more information to use in the planning of their future writing instruction. Similarly, a large majority (95.1%) of the respondents indicated that they use both formal and informal methods of assessing writing, which shows their commitment to using a variety of methods of assessment. That variety leads to a wealth of information that the teachers can use to guide their planning of writing lessons.

Those same views were reflected in the interviews, during the observation, and on the questionnaires of the educators at MCP.


During the interviews with the teachers and headmaster at MCP, it was shared that they believed in the importance of continuing education for all educators because learning new methods of teaching and evaluation would help them to better plan future lessons on writing and to become more effective at teaching and assessing all student writers. Those same teachers set up two writing lessons to help their students learn the 3.8 paragraph format. They were able to use the draft paragraphs written in the first (observed) lesson to inform their instruction for the next lesson and to see where the students were with regard to their skill level before moving forward with plans to have the students write a final paragraph for a grade in a subsequent lesson. The practice of first allowing a draft writing and then revising the follow-up instruction to be a better match for students' needs is effective and will lead to a more effective evaluation process (Odell, 1999). Overall, the views of many of the educator participants in this study indicated a strong preference for utilizing the results of their evaluations of writing as a way to guide their future instruction of and planning for writing lessons with their students.

Student Skills

The second category, student skills, represents all of the responses shared by the educator participants in which they indicated that they value methods of evaluation that allow them to gauge the strengths and weaknesses of their students' skills. All four phases of data collection contributed to the development of fifteen patterns (see Table 16) within this category. The skills referenced by the educator participants included revision, grammar, mechanics, and spelling as well as sentence structure, the flow of writing, and other writing traits. This category is the largest of all five identified categories because there are so many different skills available for student writers to master.


On the questionnaire, when asked, "Do you spend more time assessing some individual students' papers than others?" a large majority of respondents (92.9%) replied that they did spend more time on some papers, and on the follow-up question asking for the reasons why they spend more time on some papers than others, all 37 of the respondents indicated that the students' skills, or lack thereof, were a determining factor in the amount of time they spent evaluating the writing.

An examination of the observation, interview, and student writing sample data from MCP also revealed several references to the importance of student skills. Revision was mentioned during both the interviews and the observation as being an important skill for writers, and many comments were written on the student writing samples with regard to spelling, sentence structure, capitalization, and other skills that the teachers wanted the students to notice. These practices only reinforced the data from the questionnaires and confirmed that educators utilize their varying methods of evaluation of writing as a way to check for and to improve student writing skills.

Growth and Development

Many educator participants shared a belief that effective methods of evaluating student writing were those that showed them whether or not the students' writing displayed signs of growth and development. Within this category, I identified six patterns of responses (see Table 16) that represented the beliefs of the educator participants with regard to the connections between writing evaluation and the ability to gauge the growth and development of their student writers. These patterns stemmed from the data collected across all four stages of data collection during this study.


On the questionnaire, the educators indicated that they looked for changes in strengths and weaknesses, signs of progress, and the students' current writing levels with regard to their grade-level standards of writing. On the question that asked the participants to indicate how frequently they used all of the given methods of evaluation, teacher conferences, peer conferences, and portfolios were the ones that I would designate as having the most potential to show the growth and progress of student writers. All three of those methods were used frequently by the respondents, with the portfolios having a tied response between "frequently" and "once in a while." In the follow-up questions asking the participants to select the three methods of evaluation that they used most frequently, teacher conferences and portfolios were chosen as the second and third most used methods, and they were both rated as being effective by the respondents. Portfolios (Elbow, 1996; Morrow, 1997) and conferences (Murray, 2004) are designated as being effective in the research, and the selection of these two methods as being used frequently reflects the commitment of the educators to utilize methods of evaluation which allow them to monitor the growth and progress of their student writers.

Those areas of importance were echoed during the observation and interviews and within the student writing samples. For example, on the initial writing drafts, six students received comments from their teachers drawing their attention to the presence of run-on sentences in their paragraphs. However, on the final writing assignment, only two of those students wrote run-on sentences in their paragraphs. While that is not hard evidence that the other four students will forever know when they are or are not writing a run-on sentence, it does show that the possibility for growth to be seen from one writing to the next does exist.


The teachers were sure to verbally address the concept of growth with their students during the observed lesson when they requested that the students date their papers so that they would have a reference point during future writing lessons for how much progress they had made from the time of their first writing in the 3.8 paragraph format to the current one. The measure of growth and development of student writers from one writing to another was repeated in the responses of many educator participants throughout all four phases of data collection and appears to be part of their belief structure with regard to the ways in which they can utilize the results of writing evaluation.

Feedback

It is important to note that the evaluation of writing does more than just inform the teachers. The feedback provided by varying methods of evaluating writing also informs students about their progress, and that feedback was valued by many of the educator participants in this study, who referred to the helpfulness of the evaluation to the writers, who learn where improvement is needed, as well as to the teachers, who learn how to best help their students progress. Within this category, I identified four patterns of responses (see Table 16) that demonstrated the shared beliefs of the educator participants throughout all four phases of data collection. The respondents reported the results stemming from their methods of evaluation as being useful to them when looking to provide feedback to their student writers. They also liked the variety of feedback offered to the writers by different methods of evaluation. On the question asking the participants, "How often do you provide your students with written feedback on their writing assignments?" 85.3% of them indicated that they provide feedback either "almost always" (39%) or "frequently" (46.3%) on their students' writings.


That frequency supports the patterns found in this category, which demonstrate that teachers value the importance of feedback on their students' writing.

The same sentiments were echoed by the educators in the observation and interviews and were evident in the student writing samples. The headmaster referenced the feedback gained from peer conferences as being helpful, and her teachers also noted the importance of being skilled in the area of conferences in order to provide effective feedback to their student writers. Finally, the teachers worked together to provide written feedback on the students' writing drafts in order to give the students help that would guide them to improve their writing on their next assignment. Overall, the data showed that the majority of educator participants in this study engage in the process of sharing feedback with their students regarding their writing on a regular basis because they believe that feedback is important to young writers, which is consistent with the beliefs of Cooper and Odell (1999) as well as Norman and Spencer (2005).

Limitations of Writing Evaluation

In addition to the previous categories, which showed the help provided to educators by different evaluation methods, the final category, limitations, was formed by comments from educators who believed that no methods of evaluating writing were perfect. Four patterns (see Table 16) were identified within this category. This category, unlike the previous ones, draws only on data from the questionnaire, as no limitations were noted by the educators during the observation or in the interviews. The participants believed that they were limited in their evaluation methods by a perceived lack of effectiveness, a time constraint, the type of assignment, and the fact that some choices are taken out of their hands by being mandated to use particular methods of evaluation.


One of those limitations, the mandated use of a particular method of evaluation, was reported on a closed-ended question from the questionnaire. Of the respondents, 58.5% indicated that they were mandated to use at least one of the methods of assessing writing that they use most frequently. The most commonly reported mandated assessment in use was the FCAT rubric. In a follow-up question asking the teachers, "How helpful do you feel that the feedback from the FCAT rubric is to your students?" 27.3% of the respondents indicated that the feedback was only somewhat helpful (13.6%) or minimally helpful (13.6%). Only 4.5% of the participants felt that the feedback from the FCAT rubric was extremely helpful to their students. Perhaps those teachers agree with Broad (2003) and feel that the criteria designated on the FCAT rubric are limiting the scope of the students' writings. The lack of confidence in the helpfulness of this rubric's feedback to their students highlights the importance of this category despite the fact that the pertinent data stem from only one phase of data collection.

Summary

A review of the categories formed from the responses of the educator participants in this study (instruction and planning, student skills, growth and development, feedback, and limitations) found that those categories covered many areas of evaluation. The fact that educators gave thought to the possible impacts of evaluation of writing before and after their teaching time is reassuring. Additionally, the trends in the responses of the participants showed that many of the educators involved in this study hold beliefs that appeared to be consistent with the literature in the area of writing evaluation. That topic will be discussed further in the next chapter.


Chapter 5

Discussion

This study focused on educators' evaluation decisions in order to provide insight into their perceived beliefs about evaluating writing and their reported actions when practicing those beliefs during the evaluation of their students' writing. The information gleaned throughout this research sheds light on the actual evaluation practices of educators and also adds to the literature base involving writing and assessment. In order to achieve that goal, the population used in this study was economically and environmentally diverse (e.g., both private and public school educators were included among the questionnaire participants) and included as many participants as possible. The study examined the educators' beliefs about the importance of writing, the various methods of evaluation used to assess student writing, and teachers' feelings of effectiveness, or ineffectiveness, as related to the evaluation of student writing through the Writing and Evaluation Questionnaire (Appendix B).

Two teachers who agreed to participate, both at the school that used a contest based on student writings to select participants for the conference, were observed during a writing lesson, and the resulting student writings were collected for analysis. Finally, a small sample of willing educator participants was asked to complete an interview in order to further share their beliefs about writing and evaluation for this study.


The overarching purpose of this study was to describe educators' beliefs about the evaluation of student writing. The inquiry was guided by the following research questions: (a) What are the differences in the ways in which educators approach evaluating student writing? (b) How do educators evaluate the effectiveness of their evaluation methods for judging the quality of students' writing samples? and (c) What factors impact the evaluation decisions of educators?

Current Evaluation Climate

In this study, the participants were asked to share the different practices used in their classrooms to assist in the evaluation of writing in order to help me understand more about which evaluation practices are currently being used. Huot (2002), Moss (1994, 1996), and Guba (1978) argue that teachers and students, the ones who are most affected by writing evaluation practices, should be the ones who make the decisions regarding which methods to use and how to use them effectively in their own classrooms. By giving attention to evaluation and by changing the methods being used to evaluate our students (especially by being sure to base those methods on current research), we can preserve the things that are viewed as valuable (Huot, 2002; Moss, 1996). It is important for researchers and teachers to work on finding methods of assessment that are backed by research, that satisfy outside administrators, and that are beneficial in the classroom (Huot, 2002). It is also important for teachers to continually evaluate the effectiveness of the evaluation methods that they choose (Nixon & McClay, 2007; Odell, 1999).

More research is needed in the field of teacher beliefs about teaching and evaluating writing, and one of the goals of this study was to offer a glimpse into the beliefs of a small sample of teachers.


That research must include the connection that exists between teachers' beliefs and the practices that they put into place in their classrooms (Berry, 2006; Lee, 1998; Nespor, 1986; Norman & Spencer, 2005; Pajares, 1992, 2003). Additionally, giving more attention to writing assessment during teachers' college years or during professional development workshops, in order to work on specific skills that could help teachers during the process of evaluating writing, could raise their beliefs about their own abilities to assess writing, so more research in this area is warranted (Dempsey et al., 2009). Murray (2004) agrees that it is possible for educators to use newly learned skills for their own writing and apply them to their teaching and evaluating with their students. More research into that phenomenon will allow us to see the best way to approach the possibility of introducing new beliefs about the evaluation of writing into the mindsets of teachers.

In 1975, Jerabek and Dieterich observed that while more research was needed in the area of writing assessment, even the research that had been completed was not being used in classrooms. I believe that to be an appropriate assessment to make today. There is more research available to educators to assist them in deciding how to best evaluate their students' writing, but perhaps more attention should shift to completing studies that monitor how frequently and how effectively the findings from studies focused on writing assessment transfer into the practice of writing teachers. Anson (1989) advises caution in the acceptance and the promotion of new or newly utilized assessments of writing, as he believes that there will always be research to be found about the difficulties of one method of assessment or another. Instead of becoming excited and fully embracing new methods without questions, it is important for teachers to become well-versed in the available literature in their quest to match writing assessments with their tasks.


While we may be used to hearing or reading media reports telling us that the United States is falling behind the rest of the world's students, other countries are actually struggling with some of the same issues related to the evaluation of writing that American teachers are facing (Berge, 2002; Deuchar, 2005; Lee, 1998). It is important that we do everything we can to help our students become writers, and not just writers but writers who learn as they write. They can learn about the world around them, their family and friends, and, most of all, they can learn about themselves if they are taught how to be writers (Reeves, 1997). Teachers have the ability, through their responses to students' writing, to cultivate those writers, and understanding how teachers think, feel, and go about their responses to writing will help us to see what needs to be done differently in the future and what we are already doing well.

Contributions to the Field

This study attempted to fill in gaps in knowledge about teachers' beliefs about the evaluation of writing. A review of the research questions showed the contributions that I hoped to make to the field of the evaluation of writing: (a) What are the differences in the ways in which educators approach evaluating student writing? and (b) What factors impact the evaluation decisions of educators? While the literature reviewed previously presents the many options (see Bardine et al., 2000; Smith, 1997; Wilson, 2007) that are available to teachers looking for methods of evaluation, observing, talking to, and reading the thoughts of current, practicing educators gave an up-to-date snapshot of the methods actually utilized in the evaluation of student writing in the real world, and of how educators go about selecting those methods.


(c) How do educators evaluate the effectiveness of their evaluation methods for judging the quality of students' writing samples? The answer to this question relates to teacher beliefs and provides an opportunity to see whether the participating educators believe that they are effective in their chosen methods of evaluation. Such information is much needed, as there is a lack of information about teachers' beliefs (see Dempsey et al., 2009; Lee, 1998; Pajares, 2003) as they pertain to writing and, even more specifically, to the evaluation of writing.

Teacher Beliefs

Teacher beliefs about writing formed the crux of this study. Whether sharing their beliefs explicitly by answering a question asking how they felt about assessing writing or implicitly when sharing their most frequently utilized methods of evaluating writing, the beliefs of the educator participants colored every response given. While there was always a danger that the beliefs shared on the questionnaire were not an accurate depiction of the participants' true beliefs or of their classroom practices (Lee, 1998), the hope was that the participants would accurately report their beliefs. Lending some credibility to that hope was the fact that the teachers who were observed during a writing lesson reported beliefs that were a match to their teaching practices.

Because the beliefs of teachers are resistant to change (Nespor, 1987) and because they control the instructional decisions of educators (Kagan, 1992; Nespor, 1987), it was important to give educators the opportunity to share their beliefs through the questionnaire and interview portions of this study. A review of their shared beliefs and the implications of those views follows, along with an examination of some of the tensions found between the responses and practices of the observed team of teachers and the research presented here.


Approaches to the Evaluation of Student Writing

A look at the data that address the first research question, "What are the differences in the ways in which educators approach evaluating student writing?" reveals that the educators reported a wide variety of methods in use for the evaluation of student writing. As a parent with children in school and as a future teacher educator, I am relieved to know that this sample of teachers recognizes the importance of including a variety of methods of evaluation in their classrooms. However, I was disappointed to see that none of the teachers made any reference to the importance of having authentic assessments as part of their evaluation methods (Cooper & Odell, 1999). While it is possible that some educators do utilize such methods and simply neglected to mention them in their responses, it is also possible that they are not aware of the importance or usefulness of authentic writing tasks and assessments. Additionally, with the multiple references to the use of standardized assessments for their students' writing, it seems plausible that at least some educators who use those assessments may feel that reconciling that use with an authentic writing task would be impossible. Because nearly half of the educators reported gaining their assessment practices from professional development courses or from college courses, those would be ideal opportunities to present more information regarding the importance of authentic assessments and authentic writing tasks to educators, with the hope that they would transfer that knowledge into their practices. More research will be needed to further explore the types of writing assignments that precede the chosen evaluation methods in order to determine whether or not authentic writing tasks and assessments are being utilized in the classroom.


Evaluation as an instructional guide.

As noted previously, the educator respondents in this study repeatedly referenced the importance of using the results of their evaluations as a guide for their future instruction. Several participants referred to the value of evaluation methods that helped them to direct future lessons, to see which craft or trait needed to be taught next, and to meet individual needs in their lesson planning. The key is that they need to engage in careful planning and reflection in order to ensure that those methods are the best fit for the needs of the students (Spandel, 2006). If the chosen methods of evaluation are not a good match for the students, then the resulting plans would also be less than effective. It is also important to consider the reasons behind the educators' desire to use the assessments to guide their instruction, and a closer look at what information is being used to inform instruction is necessary as well. For example, if the educators are focusing only on the grammar and mechanics of their students' writing because that is an obvious area where growth can be viewed through the decline of marked errors, then more education about the best use of evaluation as a guide for future instruction is needed. In order to determine whether or not their chosen methods of evaluation are effective for use with their students, educators could work to answer the questions put forth for that purpose by Odell and Cooper (1980, p. 35): (1) What assumptions are implicit or explicit in our evaluation procedures? (2) Are those assumptions consistent with current discourse theory? and (3) Will the result of using these procedures help us with the problem of improving students' writing?


Mandated evaluation methods.

Despite their best efforts, complications arise when teachers are mandated to use a specific method that may not align with their beliefs about evaluating writing. They may recognize that using an assessment method with a narrow focus (like the FCAT rubric) can prevent overall writing growth but feel powerless to change the situation (Anson, 2008). As reported on the questionnaire by one participant, the teachers feel that "this is what the state is looking at in order to get a score of 4." Other respondents shared that they utilized certain methods of evaluation because they were "mandated" and "forced to." Having a supportive administrator who is knowledgeable about writing can help teachers to implement more methods in their classrooms, albeit sometimes in addition to mandated ones (McGhee & Lew, 2007). While it was not unexpected to find that many teachers are being mandated to use the FCAT rubric in the evaluation of their students' writing, or that its use was more prevalent amongst the teachers of children in second through sixth grades, it was surprising to me that the overall findings show its use to be among the top three methods of evaluation in use by the teachers in this sample. This status concerns me and would lead me to do further research in order to determine the actual reasons (beyond a mandate) for the frequent use of this rubric. If the educators are simply accepting the use of the FCAT rubric when mandated, or if they are implementing it because they feel that it is what everyone else is using, then their understanding needs to be better informed. They need to be made aware that it is possible to implement other evaluation methods that will help their students' writing grow and develop, which would then mean that their students would succeed on the FCAT without having become slaves to the rubric.


If educators were more aware of research-based evaluations, then they might feel more positive and courageous about their selections of assessments. As evidenced by the responses during this study, there are many different ways in which the participants approach the evaluation of student writing. The next step would be to delve deeper into their reasons for selecting the methods that they use.

Effectiveness of Evaluation Methods

The second research question was, "How do educators evaluate the effectiveness of their evaluation methods for judging the quality of students' writing samples?" It appeared that most of the educator participants based the effectiveness of their methods of writing evaluation on the perceived growth viewed in the writing of their students. One respondent shared, "We see improvement in student writing over the school year." Other teachers echoed those sentiments when justifying their classification of their evaluation methods as being effective. However, the overall impression left by the responses related to this question was that many teachers seem to be less than clear about how to improve the effectiveness of their chosen methods. The lack of clarity in the area of improvement could stem from their lack of confidence as teachers of writing (Graves, 2002; Napoli, 2001). My assumption was that the majority of teachers would feel that their methods of evaluation were extremely effective. While I know that the mandating of some methods could slightly lower that expectation, it seems logical that educators would only select and implement evaluation practices that they feel are extremely effective. Perhaps another source for the lack of feelings of effectiveness could be the lack of attention being paid to the validity and reliability of the methods that are currently in place.


My assumption was that the educators involved in this study would represent a group of high-efficacy teachers who felt positively enough about teaching writing that they saw the value in taking their students to a conference about writing. However, the lack of confidence in the effectiveness of their evaluation methods shows a tension between their feelings of efficacy with regard to teaching and their use of varying evaluation methods. Gathering more information on the efficacy of educators in future studies would help to shed light on this area and to increase understanding of the ways in which it might be possible to encourage the use of research-based, effective methods of evaluation.

Validity and reliability of writing evaluation methods.

In the initial review of the literature, it was obvious that validity and reliability are two of the most important constructs to take into consideration when selecting an evaluation method (Camp, 1996; Cooper & Odell, 1977; Hughes & Nelson, 1991; Williamson, 1993). That belief was not, however, reflected in the responses of the educator participants of this study. The absence of any references to reliability or validity leads me to believe that perhaps the teachers are daunted by the idea of needing to gather a myriad of valid and reliable assessments for their students, so they avoid that situation altogether by relying on methods shared with them by others (Breland et al., 1987). In the study, nearly half of the teachers (48.8%) reported that they learned about their methods of evaluation through professional development workshops and continuing education courses provided by their districts.


If those educators are feeling overwhelmed by the responsibility of selecting their own effective measures of evaluation, or if they are less than confident in their abilities as writers (which could lead to less confidence in their abilities as evaluators of writing), then they may be more apt to accept any methods of evaluation that are perceived as being effective because they come from an administrative source (Gallavan et al., 2007). Additionally, if teachers suffer from the feeling that the results of their evaluations could reflect negatively upon their teaching (Huot, 2002), then it is possible that they would be more likely to feel safe in implementing those provided and/or mandated methods of evaluation rather than choosing their own methods for their students. Again, the importance of the educators having a higher sense of efficacy and the confidence to select their own evaluation methods becomes obvious.

There are, however, many ways that teachers can further boost their feelings of effectiveness in the evaluation of student writing. Because many of the teachers reported the use of rubrics as an evaluation tool, it would be ideal for schools to offer training in the effective use of rubrics. Having more knowledge of the best way to use the rubrics would increase teacher confidence and, therefore, increase their feelings of effectiveness (De La Paz, 2009). My hope would be that as their confidence increases, their desire to learn more about the effectiveness of their particular methods of evaluation might increase as well so that they will feel comfortable and confident in diverging from the path of evaluation established by their peers, administrators, and districts (Nixon & McClay, 2007).

Importance of feedback.

A positive finding was that all of the respondents reported engaging in methods of evaluation that require feedback to be given to their students. One participant said that the most effective methods of evaluating writing "provide teachers, parents, and most importantly students with feedback about their writing."


Responding to student writing is, possibly, the most important part of writing evaluation, so it is encouraging that all participants in this study are engaging in that practice (Huot, 2002). Especially encouraging is the fact that 85.3% of the participants report giving written feedback on their students' writing either "almost always" or "frequently." One way to encourage the teachers to continue to use and to increase their feedback on writing is to teach them to engage their students in conversations about that feedback so that they can see the effect that their time and comments have on the students (Bagley, 2008). The important aspect of feedback that educators must be aware of is that, in order to be helpful, feedback must be specific to the writing of each student (Bardine et al., 2000; Beach & Friedrich, 2006; Cooper & Odell, 1999; Matsumura et al., 2002).

While one educator stated, "There are things that need to be corrected and commented on," not all educators know how to effectively utilize feedback with their student writers. Unfortunately, on the writing samples collected during this study, much of the feedback consisted of general compliments or comments that were not designed to help the student writers to improve their writing. Similarly, on the open-ended questions of the questionnaire, many of the educators reported being concerned with factors influencing the readability of their students' papers. Those factors were listed as spelling, handwriting, grammar, and mechanics, which are all easily marked by the teacher for correction. The question, then, is whether the educators truly believe that those factors negatively impact the written message of their student writers or if they focus on those areas because they provide an avenue by which the educators can provide the students with tangible feedback that requires little higher-level thought on their part.


Because feedback is such an important aspect of writing evaluation, and because it is widely in use, more research in the area of educating teachers regarding the best ways to improve their response practices would be extremely beneficial to both teachers and students.

Selection of evaluation methods.

All respondents on the survey reported the use of more than one evaluation method. One participant reasoned that she made the choice in order to gain the combined effectiveness of a number of different methods of assessing writing being used together. Again, this is an encouraging practice to see among the participants, as using a variety of assessment measures is always preferable to using just one (Breland et al., 1987). The combination of methods can increase their effectiveness (Burgin & Hughes, 2009). All of the respondents reported that they based their selection of methods of evaluation on their previous experience with those methods and shared that they use those which they feel have proven to be effective (as evidenced in their students' writing growth). The teachers are, however, limited by what they know, how they learned, and how they were taught to evaluate writing. In order to continue to grow as evaluators, they need to continue to learn and, perhaps, to change some of their current beliefs about the evaluation of writing (Florio-Ruane & Lensmire, 1989; Gallavan & Bowles, 2007). Obviously, the participants in this study who indicated that they obtain their methods of evaluation from professional development courses or from college courses are willing to change some of their practices when given new ideas. While some others may be uncomfortable and frustrated when first asked to implement new methods of evaluation (Florio-Ruane & Lensmire, 1989), it makes me hopeful to see research (Dempsey et al., 2009) which shows that good continuing education courses can be the catalyst for a shift towards increased use of more effective methods of evaluation.


Additionally, as teachers, even the skeptical ones, give new methods a try and see that they are effective in their classrooms, they will be more likely to continue the use of those practices (Guskey, 1986). Any increase in the use of effective practices, and in the teachers' ability to identify them as effective, is good news for the students.

Factors Which Influence the Evaluation Decisions of Educators

Standardized testing and writing evaluation.

The final research question addressed in this study was, "What factors impact the evaluation decisions of educators?" There were many factors identified as impacting the decisions of the educator participants. The one that seems to be the most obvious is the mandated use of standardized testing rubrics. This is an unfortunate situation, as Scherff and Piazza (2005) found that "most writing instruction that occurs in the test-influenced classrooms was often at odds with research-based practice" (p. 271). It is concerning that so many respondents on the questionnaire reported being mandated to use the FCAT rubric as an assessment tool. It is even more concerning that the participants qualified its use by explaining that they want to "prepare students" and that the FCAT rubric gives "a snapshot of what the child would achieve on the state test." Such statements make it seem as though the educators are simply accepting that the standardized assessment is effective whether or not they have data to support that idea. While it would be understandable if they chose to use the rubric after seeing its benefits with their students, that is seemingly not the case based on the lack of "extremely effective" and "effective" responses given when asked about the effectiveness of the FCAT rubric.


Hillocks (2002) contended that students would be better served if the teachers received high-quality professional development with the money that is used for standardized testing. The pressure that teachers feel to alter their curriculum in the way they believe is expected of them, in order to increase the performance of their students on standardized assessments, is leading to the deprofessionalism of teachers, which, in turn, leads to a reduction of confidence for those teachers (Hollingworth, 2007, p. 341). Other researchers posit that standardized tests can lead to the implementation of high-quality professional development, thereby having a positive impact on the assessment procedures used by teachers in those programs, but that impact was not seen in this study (Callahan & Spalding, 2006). The key is to recognize that the scope of the results stemming from standardized tests that require writing is extremely limited. That limited feedback to teachers seems to be at odds with the responses of the educators who shared that they prefer the use of evaluations that help them see growth and development in the writing of their students over time. It seems possible that the educators who report the frequent use of the FCAT rubric are simply using it because it is mandated and not because they view it as an effective tool in the evaluation of writing.

Generally, it is hoped that students can take their learning from school and expand and apply it to their current and later lives beyond the classroom, but Anson (2008) suggests that the results from standardized tests are not "real results" at all (p. 114). Instead, he recommends that educators recognize the narrow applications of the writing required for standardized tests and that they work hard to ensure that their students experience writing across a range of genres and for a variety of purposes.


That recommendation is supported by others and requires that the method of assessment be chosen based on which evaluative practice is the best fit for the current assignment (Tompkins, 2008). With some teachers feeling that their jobs are in jeopardy based on their students' standardized test scores, and with them being willing to employ whichever teaching techniques they are told to use by outside professionals, even when those techniques fly in the face of what those teachers know to be effective, theory-based instructional practices, it is easy to see the negative force that the standardized assessment of writing can have in classrooms, on teachers, and on students (Hollingworth, 2007).

Additional factors influencing evaluation decisions.

There were other factors, aside from mandated standardized testing rubrics, that impacted the evaluative decisions of the respondents in this study. For example, several participants mentioned "ease" as one of the reasons for selecting the methods of evaluation that they did. It is important for teachers to resist the easy methods and to look for those best suited to their students' needs (Wiley, 2000). Using the easy methods is also another way to avoid having to worry about whether or not newly selected methods of evaluation are effective. However, as previously stated, it is important for educators to continue to learn and grow as evaluators, which cannot happen if they are unwilling to reexamine their current evaluation practices.

The teachers' feelings about writing assessment were one factor that contributed to their selection of different methods for evaluating writing. Some teachers reported feeling less than positive about writing assessment, and it seems quite likely that those were the teachers who chose quick and easy methods of evaluation or who simply relied on the mandated methods that were given to them to use.


Adding quality professional development courses to change their current feelings or to add to their knowledge base (Nixon & McClay, 2007), or addressing those feelings while they are still preservice teachers (Dempsey et al., 2009), can lead to a wider repertoire of methods to choose from. There will always be a variety of factors that can affect the evaluation decisions of teachers on any given day. It is our responsibility as teacher educators to make sure that the teachers have enough knowledge of how students best learn to write and how to effectively match their evaluation to that learning, regardless of any outside factors that may try to influence their decisions.

Beliefs into Practice

Being able to complete a case study at Maple Court Prep (MCP) allowed me to see some of the beliefs shared on the Writing and Evaluation Questionnaire in practice and helped me gain a deeper understanding of the beliefs of three educators with regard to the evaluation of student writing. The ability to hear about, see, read about, and track the educators' beliefs about the evaluation of writing from paper into practice enabled me to be confident in my discussion of my observations.

All three of the educators interviewed supported the use of continuing education as a way to help teachers stay current regarding the most effective ways to evaluate writing. Teachers who are open to continuing education are more likely to learn and implement new methods into their practices (Murray, 2001; Nolen, McCutchen, & Berninger, 1990). I was pleased to hear all three educators reference a desire to learn about new methods or even new ways to implement already familiar methods. They were willing to change their practices if they were shown something that they felt would be more effective with their students. That willingness is the first step towards increased effectiveness in their evaluation practices.


I was also pleased to note that the teachers' beliefs and practices were largely a match. The following list shows the beliefs that were shared on their questionnaires or in the interviews and that were then observed during teaching or through their evaluation of the students' writing:

- Spelling was not counted against the student writers on the drafts or for a grade.
- They modeled writing during their lesson.
- They encouraged the selection of a desired/interesting topic during the first writing (Martin, 2003).
- The draft was not graded.
- They used a rubric in the grading of the papers.

The ability to see their beliefs transfer into practice assured me that their responses on the questionnaire and during the interviews were honestly reported.

However, I did not agree with all of their practices. For example, a review of the comments given by the teachers to the students on both their drafts and their final writing leads me to believe that stronger and more specific/helpful comments would help the students to improve their writing. It is beneficial, even for experienced teachers, to review their comments in order to see what they are really saying and how helpful they are (Crone-Blevins, 2002; Smith, 1997; Sprinkle, 2004). There were too many compliments given without any advice, and even when advice was given, such as on semicolon use, no explanation was provided. There were many comments made on every paper, but their students would benefit more from fewer comments that were more constructive in nature.


Because the comments were so brief, it is likely that the students, who may not be aware of the teachers' values, do not fully understand the comments or the teachers' intentions. If the teachers hope to truly help their students progress in their writing skills, more detailed and more specific comments will be needed on future student writings.

One other notable area was that one student lost a point on the assignment because the paragraph had two extra sentences. The teachers commented that ten sentences were included rather than the required eight and deducted a point for the failure to follow the format. I thought that the students should be allowed to do more than was required. There is research that shows that keeping students' writing to a standard during the learning process actually helps their writing to improve more in the long run (Matsumura et al., 2002). There is, however, also research showing that such strict adherence to the rubric emphasizes the rules of writing rather than praising and helping students in their quest to expand their writing skills (Wiggins, 1994). While I would have worked with the student to see if she effectively used her two extra sentences, the educator participants chose to abide strictly by the rubric. Because I only observed one lesson, it is difficult to say whether this adherence to the format is only maintained until the students show proficiency in the given format or if the teachers always strictly adhere to the rules. A look at the winning writings from the conference participants leads me to believe that the students are allowed to inject their own creativity and form into the previously taught formats once they have become proficient in that genre of writing, but more observations would be necessary to confirm my thoughts.

Overall, this observation was a good example of what instruction looks like when the teachers' beliefs match their instruction.


It does, however, raise the question of how much the co-teachers negotiate with one another in their planning, as their separate responses on the questionnaire were not always an exact match. This would be an area of further exploration for future studies. It is also important to note that because these teachers are in a private school setting, they are not bound to the same standardized testing rules as are the teachers in the public schools. They have more freedom than most public school teachers and are fortunate to have a headmaster who values research-based practices.

Implications for Future Research

During this study, I was repeatedly struck by ideas for future research stemming from the encounters that I had with teachers or from reading their questionnaire responses. In order to gain a more complete understanding of how teachers' beliefs transfer to their practice of evaluating student writing, it would be beneficial to observe a teacher during planning, teaching, and evaluation times to see how she negotiates her beliefs with the needs of her students and how that transfers into instructional time and then results in a writing piece to be evaluated. Completing such a study over a longer period of time would also help stimulate new ideas for questions that could be added to the Writing and Evaluation Questionnaire before sharing it with a wider audience of educators in an effort to reach a sample size that would allow for generalizability. If such a study were completed at a school like MCP, where most of the teachers teach in teams, then the opportunity would also be present to look at the influence that a teaching partner has on the beliefs of an educator.

Additionally, it was interesting to me that there were many grammatical and punctuation errors in the responses of the teacher participants. While their responses were certainly not being graded, my expectation was that a teacher who was writing about the evaluation of student writing would take care to do so in a correct way.


Either the teachers were not concerned with how their writing came across, or they need additional assistance with some areas of their own writing. If that is the case, it is quite possible that some of their feelings regarding the evaluation of writing and some of their beliefs about their effectiveness as evaluators of writing are influenced by a personal feeling of inadequacy as a writer (Napoli, 2001) or as a teacher of writing (Graves, 2002). That is a dangerous place to be, as such feelings may lead to less attention being given to writing and its evaluation (Pardo, 2006). Future research in the area of teachers' personal writing beliefs would help to increase understanding of how to best help teachers be comfortable with both the teaching and the evaluation of writing.

Finally, school- or district-based training has a great influence on the methods used by teachers in the evaluation of student writing, and that influence was evidenced by the 48.8% of the respondents to the questionnaire who indicated such trainings and workshops as being their primary source for finding methods to use in the evaluation of student writing. Knight, Wiseman, and Cooner (2000) found that many professional development programs have not been evaluated for effectiveness. Because of this wide-reaching influence, research into the techniques taught and the types of programs available would be worthwhile.

Through a survey, interviews, and observations, I examined the varying beliefs held by educators with regard to the evaluation of student writing. All of the data from four different phases of data collection combined to present a snapshot of a small sample of educators and their beliefs about evaluating student writing. The evaluation of writing is a complex task, yet it is an extremely important and high-stakes one (Hillocks, 2003).


Teachers are being asked to make instructional and evaluative decisions that are responsive to the current assessment-driven climate (Conca, Schechter, & Castle, 2004). Conversely, it is also necessary for students to be able to write in the "real world" for authentic audiences and authentic purposes. With these competing goals in mind, it is important that educators understand the range of evaluation methods that are effective and that they be able to select the method of evaluation that best fits the writing task at hand. Having a large repertoire of evaluation methods at their disposal means that teachers will be able to evaluate all types of writing done by the students and will, therefore, be better equipped to show their students how to make improvements in all of the different genres of writing that they do while also learning where their instruction can be altered in order to reach all of their students at their point of need.


References

Anson, C.M. (1989). Response styles and ways of knowing. In C.M. Anson (Ed.), Writing and response: Theory, practice, and research (pp. 332-365). Urbana, IL: National Council of Teachers of English.

Anson, C.M. (1997). In our own voices: Using recorded commentary to respond to writing. In P. Elbow & M.D. Sorcinelli (Eds.), Learning to write: Strategies for assigning and responding to writing across the curriculum (pp. 105-115). San Francisco: Jossey-Bass.

Anson, C.M., Perelman, L., Poe, M., & Sommers, N. (2008). Symposium: Assessment. College Composition and Communication, 60(1), 113-164.

Ashton, P.T. (1990). Editorial. Journal of Teacher Education, 44(1), 2.

Asker-Amason, L., Wengelin, A., & Sahlen, B. (2008). Process and product in writing: A methodological contribution to the assessment of written narratives in 8-12-year-old Swedish children using ScriptLog. Logopedics Phoniatrics Vocology, 33(3), 1651-2022.

Atkinson, D., & Connor, U. (2008). Multilingual writing development. In C. Bazerman (Ed.), Handbook of research on writing: History, society, school, individual, text (pp. 515-532). New York: Lawrence Erlbaum Associates.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice Hall.

Bandura, A. (1997). Self-efficacy: The exercise of control. New York: Freeman.

Bardine, B.A., Bardine, M.S., & Deegan, E.F. (2000). Beyond the red pen: Clarifying our role in the response process. English Journal, 90(1), 94-101.

Beach, R. (1985). The effects of modeling on college freshman's self-assessing. Unpublished research report, University of Minnesota.

Beach, R. (1989). Showing students how to assess: Demonstrating techniques for response in writing conferences. In C. Anson (Ed.), Writing and response: Theory, practice, and research (pp. 127-148). Urbana, IL: National Council of Teachers of English.


Beach, R., & Friedrich, T. (2006). Response to writing. In C.A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 222-234). New York: The Guilford Press.

Beaven, M.H. (1977). Individualized goal setting, self-evaluation, and peer evaluation. In C.R. Cooper & L. Odell (Eds.), Evaluating writing: Describing, measuring, judging (pp. 135-156). Urbana, IL: National Council of Teachers of English.

Berg, E.C. (1999). The effects of trained peer response on ESL students' revision types and writing quality. Journal of Second Language Writing, 8(3), 215-241.

Berge, K.L. (2002). Hidden norms in assessment of students' exam essays in Norwegian upper secondary schools. Written Communication, 19(4), 458-492.

Berry, R. (2006). Beyond strategies: Teacher beliefs and writing instruction in two primary inclusion classrooms. Journal of Learning Disabilities, 39, 11-24.

Bigge, J., & Stump, C. (1999). Curriculum, assessment, and instruction. Belmont, CA: Wadsworth.

Birenbaum, M., & Dochy, F.J.R.C. (Eds.). (1996). Alternatives in assessment of achievements, learning processes and prior knowledge. Boston: Kluwer.

Black, L., Helton, E., & Sommers, J. (1994). Connecting current research on authentic and performance assessment through portfolios. Assessing Writing, 1(2), 247-266.

Brannon, L., & Knoblauch, C.H. (1982). On students' rights to their own texts: A model of teacher response. College Composition and Communication, 33, 157-166.

Breland, H.M., Camp, R., Jones, R.J., Morris, M.M., & Rock, D.A. (1987). Assessing writing skill. New York: College Entrance Examination Board.

Broad, B. (2003). What we really value: Beyond rubrics in teaching and assessing writing. Logan, UT: Utah State University Press.

Burgin, J., & Hughes, G.D. (2009). Credibly assessing reading and writing abilities for both elementary student and program assessment. Assessing Writing, 14(1), 25-37.

Calkins, L.M. (1994). The art of teaching writing. Portsmouth, NH: Heinemann.

Calkins, L.M., Hartman, A., & White, Z. (2005). One to one: The art of conferring with young writers. Portsmouth, NH: Heinemann.

Callahan, S. (1999). All done with best intentions: One Kentucky high school after six years of state portfolio tests. Assessing Writing, 6(1), 5-40.


Callahan, S., & Spalding, E. (2006). Can high-stakes writing assessment support high-quality professional development? The Educational Forum, 70(4), 337-350.

Camp, R. (1983, April). Direct assessment at ETS: What we know and what we need to know. Paper presented at the meeting of the National Council on Measurement in Education, Montreal.

Camp, R. (1985). The writing folder in post-secondary assessment. In P.J.A. Evans (Ed.), Directions and misdirections in English evaluation (pp. 91-99). Ottawa, Canada: The Canadian Council of Teachers of English.

Camp, R. (1993). Changing the model for the direct assessment of writing. In M.M. Williamson & B. Huot (Eds.), Validating holistic scoring for writing assessment: Theoretical and empirical foundations. Cresskill, NJ: Hampton.

Camp, R. (1996). Response: The politics of methodology. In E.M. White, W.D. Lutz, & S. Kamusikiri (Eds.), Assessment of writing: Politics, policies, practices (pp. 97-104).

Chait, R. (2010). Removing chronically ineffective teachers: Barriers and opportunities. Washington, D.C.: Center for American Progress.

Chandler, J. (1997). Positive control. College Composition and Communication, 48, 273-274.

Charney, D. (1984). The validity of using holistic scoring to evaluate writing: A critical overview. Research in the Teaching of English, 18(1), 65-81.

Cherry, R., & Meyer, P. (1993). Reliability issues in holistic assessment. In M.M. Williamson & B. Huot (Eds.), Validating holistic scoring for writing assessment: Theoretical and empirical foundations. Cresskill, NJ: Hampton.

Clay, M. (1966). Emergent reading behavior. Unpublished master's thesis, University of Auckland, New Zealand.

Clay, M. (1985). Concepts about print. Portsmouth, NH: Heinemann Educational Books.

Clay, M. (1991). Becoming literate: The construction of inner control. Portsmouth, NH: Heinemann.

Conca, L.M., Schechter, C.P., & Castle, S. (2004). Challenges teachers face as they work to connect assessment and instruction. Teachers and Teaching: Theory and Practice, 10(1), 59-75.

Cooksey, R.W., Freebody, P., & Wyatt-Smith, C. (2007). Assessment as judgment-in-context: Analysing how teachers evaluate students' writing. Educational Research and Evaluation, 13(5), 401-434.

PAGE 200

185 Cooper, C.R. (1977). Holistic evaluation of wr iting. In C.R. Cooper & L. Odell (Eds.) Evaluating writing: Describing, measuring, judging (pp. 3-32). Cooper, C.R., & Odell, L. (Eds). (1977). Evaluating writing: Describing, measuring, judging. Urbana, IL: National Council of Teachers of English. Cooper, C.R., & Odell, L. (Eds.). (1999). Evaluating writing: Th e role of teachers knowledge about text, learning, and culture Urbana, IL: National Council of Teachers of English. Crawford, P.A. (1995). Early literacy: Emerging perspectives. Journal of Research in Early Childhood Education, 10 71-86. Crone-Blevins, D. (2002). The art of response. English Journal, 91(6), 93-98. Culham, R. (2003). 6 + 1 Traits of Writing: The Complete Guide New York: Teaching Resources. De La Paz, S. (2009). Rubrics: Heuristics for developing writing strategies. Assessment for effective intervention, 34 (3), 134-146. Delandshere, G., & Petrosky, A.R. (1998). Assessment of complex performances: Limitations of key measurement assumptions. Educational Researcher 27(2), 14-24. Dempsey, M. S., Pytlikzillig, L. M., & Br uning, R. H. (2009). Helping pre-service teachers learn to assess writing: Practice and feedback in a web-based environment. Assessing Writing, 14 (1), 38-61. Deuchar, R. (2005). Fantasy or reality? The use of enterprise in education as an alternative to simulated and imaginary contexts for raising pupil attainment in functional writing. Educational Review, 57 (1), 91-104. Diederich, P.B. (1964). Problems and possibilities of research in th e teaching of written composition. In D.H. Russell, E.J. Farrell, & M.J. Early (Eds.), Research design and the teaching of English: Proceedings of the San Francisco conference (pp.52-73). Urbana, IL: National Council of Teachers of English. Diederich, P.B., French, J.W., & Carlton, S.T. (1961). Factors in Judgments of Writing Ability (ETS Research Bulletin 61-15). Princeton: Educational Testing Service. Dillman, D., Smyth, J.D., & Christian, L.M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). Somerset, NJ: Wiley. Dyson, A.H. (1981). Oral language: The r ooting system for learning to write. Language Arts 58, 776-784.

PAGE 201

186 Dyson, A.H. (1985). Three emergent writers and the school curriculum: Copying and other myths. The Elementary School Journal 85, 497-512. Dyson, A. H. (2006). On saying it right (write): "fix-its" in the founda tions of learning to write. Research in the Teaching of English, 41 (1), 8-42. Dyson, A. H., & Freedman, S. W., (1991). Writing. In J. Flood, J.M. Jensen, D. Lapp, & J.R. Squire (Eds.) Handbook of research on teaching the English language arts (pp. 754-774). New York: MacMillan Publishing Company. Eeds, M. (1988). Holistic assessment of coding ability. In Glazer, S., Scarfoss, L. & Gentile, L. (Ed.), Reexamining reading diagnosis: New trends and procedures (pp. 48-66). Newark, DE: Interna tional Reading Association. Elbow, P. (1981). Writing with power New York: Oxford University Press. Elbow, P. (1986). Portfolio assessment as an alternative in proficiency testing. Notes from the National Testing Network in Writing 6, 3, and 12. Elbow, P. (1996). Writing assessment:Do it better; do it less. In E.M. White, W.D. Lutz, & S. Kamusikiri (Eds.) Assessment of writing: Politics, policies, practices (pp. 120134). Elbow, P. (2006). The music of form: Rethinking organization in writing. College Composition and Communication, 57 (4), 620-666. Emig, J. (1971). The composing process of twelfth graders (Research Report No. 13). Urbana, IL: National Council of Teachers of English. Englert, C.S., Mariage, T.V., & Dunsmore, K. (2006). Tenets of sociocultural theory in writing instruction research. In C.A. MacArt hur, S. Graham, & J. Fitzgerald (Eds.) Handbook of writing research (pp. 208-221). New York: The Guilford Press. Feiman-Nemser, S. (1990). Teacher preparation: Structural and conceptual alternatives. In W. R. Houston, M. Huberman, & J. Sikula (Eds.), Handbook of research in teacher education (pp. 212-233). New York: Macmillan. Florio-Ruane, S., & Lensmire, T. J. (1989). Transforming future teachers' ideas about writing instruction Freedman, S.W. (1987). Response to student writing Urbana, IL: National Council of Teachers of English. Freedman, S.W. (1985). The role of response in acquisition of written language. Berkeley, C A: California UP.

PAGE 202

187 Gall, M.D., Gall, J.P., & Borg, W.R. (2003). Educational Research: An Introduction Boston: Pearson Education, Inc. Gallavan, N. P., Bowles, F. A., & Young, C. T. (2007). Learning to write and writing to learn: Insights from teacher candidates. Action in Teacher Education, 29 (2), 61-69. Gee, J.P. (Ed.) (1992). The social mind: Language ideology and social practice New York: Bergin & Garvey. Gee, T.C. (1972). Students re sponses to teacher comments Research in the Teaching of English 6, 212-221. Gielen, S., Dochy, F., & Dierick, S. (2003). Ev aluating the consequential validity of new modes of assessment: The influence of assessment on learning, including pre-, postand true assessment effects. In M. Se gers, F. Dochy, & E. Cascallar (Eds.), Optimizing new modes of assessment: In search of quality and standards (pp. 3754). Dordrecht: Kluwer Academic Publishers. Godshalk, F.I., Swineford, F., & Coffman, W.E. (1966). The measurement of writing ability New York: College Entrance Examination Board. Goffman, E. (1959). The Presentation of Self in Everyday Life Garden City, NY: Anchor. Goodman, Y. (1986). Children coming to know lite racy. In Teale, W. & Sulzby, E. (Ed.), Emergent literacy: writing and reading (pp. 1-14). Norwood, NJ: Ablex. Graham, S., Harris, K. R., Fink, B., & MacA rthur, C. A. (2001). Teacher efficacy in writing: A construct validation with primary grade teachers. Scientific Studies of Reading 5, 177-202. Graham, S., Harris, K., Fink-Chorzempa, B., & MacArthur, S. (2003). Primary grade teachers instructional adaptations fo r struggling writers: A national survey. Journal of Educational Psychology 95(2), 279-292. Graham, S., & Weiner, B. (1996). Theories and principles of motivation. In D. C. Berliner & R. C. Calfee (Eds.), Handbook of educational psychology (pp. 63-84). New York: Simon & Schuster Macmillan. Graves, D. (1983). Writing: Teachers and children at work. Exeter, NH: Heinemann. Graves, D. (2002). Testing is not teaching: What should count in education Portsmouth, NH: Heinemann.

PAGE 203

188 Guba, E. (1978). Toward a methodology of naturalistic inquiry in educational evaluation Los Angeles: University of Califor nia Graduate School of Education. Guion, R.M. (1980). On Trinitarian doctrines of validity. Professional Psychology 11, 385-398. Gulikers, J.T.M., Bastiaens, T.J., & Kirschner, P.A. (2004). Perceptions of authentic assessment: Five dimensions of au thenticity. Paper presented at the Norhumbria/EARLI SIG assessment conference, Bergen, Norway. Gulikers, J.T.M., Bastiaens, T.J., & Ki rschner, P.A. (2007). Defining authentic assessment: Five dimensions of authentic ity. In A. Havnes & L. McDowell (Eds.), Balancing assessment and learning in contemporary education (pp. 73-86). London: Routledge. Guskey, T.R. (1986). Staff development and the process of teacher change. Educational Researcher 15(5), 5-12. Haneda, M., & Wells, G. (2000). Writing in knowledge-building communities. Research in the Teaching of English 34(3), 430-457. Hesse-Biber, S.N., & Leavy, P. (2005). The Practice of Qualitative Research Thousand Oaks, CA: Sage Publications, Inc. Hibbard, K.M. & Wagner, E.A. (2003). Assessing & Teaching: Reading Comprehension & Writing Larchmont, NY: Eye on Education, Inc. Hiebert, J., & Raphael, T. (1998). Early literacy instruction Fort Worth, TX: Harcourt Brace College. Hillocks, G. Jr. (1984). What works in teaching composition: A meta-analysis of experimental treatment studies. American Journal of Education, 93, 133-170. Hillocks, G., Jr. (2003). Fighting back: Assessing the assessments. The English Journal, 92, 63-70. Hillocks, G. Jr. (2008). Writing in sec ondary schools. In C. Bazerman (Ed.), Handbook of research on writing: History, so ciety, school, individual, text (pp. 311-329). New York: Lawrence Erlbaum Associates. Hirsch, E.D. Jr. (1977). The philosophy of composition Chicago: University of Chicago Press. Hoffman, J.V., Paris, S.G., Salas, R., Pa tterson, E., & Assaf, L. (2003). High-stakes assessment in the language arts: The piper pl ays, the players dance, but who pays the

PAGE 204

189 price? In J.Flood, D. Lapp, J.R. Squire, and J.M. Jensen (Eds.), The handbooks of research on teaching the English language arts (2nd ed.) (pp. 619-630). Holdaway, D. (1979). The foundations of literacy New York: Ashton Scholastic. Hollingworth, L. (2007). Five ways to prepare for standardized tests without sacrificing best practice. The Reading Teacher, 61 (4), 339-342. Honebein, P.C., Duffy, T.M., & Fishman, B.J. (1993). Constructivism and the design of learning environments: Context and authenti c activities for learning. In T.M. Duffy, J. Lowyck, & D.H. Jonassen (Eds.), Designing environments for constructive learning (pp. 88-108). Berlin: Springer-Verslag. Hudson, S.A. (1988). Childrens perceptions of classroom writing: Ownership in a continuum of control. In B. Rafoth & D. Rubin (Eds.) The social construction of written language (pp. 37-69). Norwood, NJ: Ablex. Hughes, R.E., & Nelson, C.H. (1991). Placement scores and placement practices: An empirical analysis. Community College Review 19(1), 42-46. Hunter, A., & Brewer, J. (2003). Multimethod research in sociology. In A. Tashakkori & C. Teddlie (Eds.) Handbook of mixed methods in social and behavioral research (pp. 577-594). Thousand Oaks, CA : Sage Publications, Inc. Huot, B. (2002). (Re)articulating writing asse ssment for teaching and learning. Logan: Utah State University Press. Huot, B. (1990). Reliability, validity and holis tic scoring: What we know, and what we need to know. College Composition and Communication, 41 (2), 201-213. Huot, B. & Williamson, M. (1997). Toward a new theory of writing assessment. College Composition and Communication 47(4), 549-565. Jerabek, R., & Dieterich, D. (1975). Composition evaluation: The state of the art. College Composition and Communication 26, 183-186. Johnson, B., & Turner, L.A. (2003). Data co llection strategies in mixed methods research. In A. Tashakkori & C. Teddlie (Eds.) Handbook of mixed methods in social and behavioral research (pp. 297-319). Thousand Oaks, CA: Sage Publications, Inc. Judine, S.M. (1965). A guide for evaluating student composition Champaign, IL: National Council of Teachers of English.

PAGE 205

190 Knight, S.L., Wiseman, D.L., & Cooner, D. (2000). Using collaborative teacher research to determine the impact of professional development school activities on elementary students math and writing outcomes. Journal of Teacher Education, 51 (1), 26-38. Koenig, A.J. (1992). A framework for understa nding the literacy of individuals with visual impairments. Journal of Visual Impairments and Blindness, 84 277-284. Kress, G. (1997). Before writing: Rethinking the paths to literacy London: Routledge. Krippendorff, K.H. (2004). Content analysis: An intr oduction to its methodology (2nd ed.). Thousand Oaks, CA: Sa ge Publications, Inc. Larson, R.L. (1996). Portfolios in the assessmen t of writing: A political perspective. In E.M. White, W.D. Lutz, & S. Kamusikiri (Eds.) Assessment of writing: Politics, policies, practices (pp. 271-283). Lee, I. (1998). Writing in the Hong Kong secondary classroom: Teachers beliefs and practices. Hong Kong Journal of Adolescent Literacy 3, 61-76. LloydJones, R. (1977). Primary trait sc oring. In C. Cooper & L. Odell (Eds.) Evaluating Writing Urbana, IL: National Council of Teachers of English. Lunsford, A.A. (1986). The past-and future -of writing assessment. In K.L. Greenberg, H.S. Wiener, & R.A. Donovan (Eds.) Writing assessment: Issues and strategies (pp. 1-12). Mathison-Fife, J. & ONeill, P. (1997) Moving beyond the written comment: Narrowing the gap. Research in the Teaching of English 31, 91-119. Martin, A. W. (2004). Recovering response: Emphasizing writing as relational practice. Issues in Writing, 14 (2), 116-134. Martin, B., Jr., & Brogam, P. (1971). Instant readers. New York: Holt, Rinehart, & Winston. Martin, B. (2003). A writing assignment/A way of life. English Journal, 92 (6), 52-56. Martin, N., DArcy, P., Newton, B., & Park er, R. (1994). The development of writing abilities. In C. Bazerma n & D.R. Russell (Eds.), Landmark essays on writing across the curriculum Davis, CA: Hermagoras Press. Matsumura, L. C., Patthey-Chavez, G. G., & Valdes, R. (2002). Teacher feedback, writing assignment quality, a nd third-grade students' revi sion in lowerand higherachieving urban schools. The Elementary School Journal, 103 (1), 3-25.

PAGE 206

191 Mavrogenes, N.A. (1986). What every reading teacher should know about emergent literacy. The Reading Teacher 40(2), 174-178. Maxwell, J.A., & Loomis, D.M. (2003). Mixed method design: An alternative approach. In A. Tashakkori & C. Teddle (Eds.) Handbook of mixed methods in social & behavioral research (pp. 241-272). McGee, L., & Purcell-Gates, V. (1997). Convers ations: So whats going on in emergent literacy? Reading Research Quarterly, 32, 310-318. McGee, L. M., & Richgels, D. J. (2000). Literacy's beginnings: Supporting young readers and writers (3rd ed.). Needham Heights, MA: Allyn and Bacon. McGhee, M. W., & Lew, C. (2007). Lead ership and writing: How principals knowledge, beliefs, and interventions aff ect writing instruction in elementary and secondary schools. Educational Administration Quarterly, 43 (3), 358-380. Messick, S. (1989). Meaning and values in te st validation: The science and ethics of assessment. Educational Researcher 18, 5-11. Miller, R.E. (1994). Fault lin es in the contact zone. College English 56(4). 389-408. Morrow, L. M. (1997). Literacy development in the earl y years: Helping children read and write (3rd ed.). MA: Allyn & Bacon. Moss, P.A. (1994). Can there be validity without reliability? Educational Researcher 23 (2), 5-12. Moss, P.A. (1996). Enlarging the dialogue in educational measurement: Voices from interpretive research traditions. Educational Researcher 25(1), 20-28. Moss, P.A. (1998). Response: Te sting the test of the test. Assessing Writing, 5, 111-122. Moynihan, K. E. (2009). Local authors in th e classroom: Bringing readers and writers together. English Journal, 98(3), 34-38. Mueller, J. (2005). The authentic assessm ent toolbox: Enhancing student learning through online faculty development. Journal of Online Learning and Teaching, 1(1), Retrieved from Murphy, S. (1997). Literacy assessment and the politics of identities. Reading and Writing Quarterly, 13, 261-278. Murphy, S., & Yancey, K.B. (2008). Construct and consequence: Validity in writing assessment. In C. Bazerman (Ed.), Handbook of research on writing: History,

PAGE 207

192 society, school, individual, text (pp. 365-386). New York: Lawrence Erlbaum Associates. Murray, D.M. (1982). Teaching the othe r self: The writers first reader. College Composition and Communication 33(2), 140-147. Murray, D.M. (2004). A Writer Teaches Writing Boston: Thomson/Heinle. Murray, R.E.G. (2001). Integrating teaching and research through writing development for students and staff. Active Learning in Higher Education, 2(1), 31-45. Napoli, M. (2001). Preservice teachers reflections about learning to teach writing. Unpublished manuscript, Pennsylvania Stat e University, University Park. (ERIC Document Reproduction Service No. ED459472). Nespor, J. (1987). The role of beliefs in the practice of teaching. Journal of Curriculum Studies 19, 317-328. Newkirk, T., & Atwell, N. (Eds.). (1988). Understanding writing Portsmouth, NH: Heinemann. Nixon, R., & McClay, J. K. (2007). Collaborat ive writing assessment: Sowing seeds for transformational adult learning. Assessing Writing, 12 (2), 149-166. No Child Left Behind (NCLB) Act of 2001, 20 U.S.C.A. 6301 et seq. (West 2003). Nolen, P., McCutchen, D., & Berninger, V. (1990). Ensuring tomorrows literacy: A shared responsibility Journal of Teacher Education, 41 63. Norman, K. A., & Spencer, B. H. (2005). Our lives as writers: Examining pre-service teachers' experiences and beliefs about the nature of writing and writing instruction. Teacher Education Quarterly, 32 (1), 25-40. Odell, L. (1999). Assessing thinking: Glimps ing a mind at work. In C.R. Cooper & L. Odell (Eds.). Evaluating writing: The role of teachers knowledge about text, learning, and culture (pp.7-22). Urbana, IL: National Council of Teachers of English. Odell, L., & Cooper, C.R. (1980). Procedur es for evaluating writing: Assumptions and needed research. College English 42 35-43. Pajares, F. (1992). Teachers beliefs and educational re search: Cleaning up a messy construct. Review of Educational Research 62(3), 307-332. Pajares, F. (2003). Self-efficacy beliefs, motivation, and achievement in writing: A review of the literature. Reading & Writing Quarterly 19, 139-158.

PAGE 208

193 Pajares, F., & Johnson, M.J. (1996). Self-effi cacy beliefs in the writing of high school students: A path analysis. Psychology in the Schools 33, 163-175. Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand Oaks, Calif.: Sage Publications. Petraglia, J. (1998). Reality by Design Mahway, NJ: Lawrence Erlbaum Associates. Phelps, L.W. (2000). Cyranos nose: Vari ations on the theme of response. Assessing Writing 7(1), 91-110. Pinnell, G. (1996). Guided reading. Portsmouth, NH: Heinemann. Pintrich, P.R. (1990). Implications of psychological rese arch on student learning and college teaching for teacher educ ation. In W.R. Houston (Ed.), Handbook of research on teacher education (pp. 826-857). New York: Macmillan. Pintrich, P. R., & Schunk, D. H. (1995). Motivation in education: Theory, research, and applications Englewood Cliffs, NJ: Prentice Hall. Pirie, B. (1997). Reshaping high school English Urbana, IL: National Council of Teachers of English. Posner, G.J., Strike, K.A., Hewson, P.W., & Gertzog, W.A. (1982). Accommodation of a scientific conception: Toward a theory of conceptual change. Science Education 66, 211-227. Prior, P. (2006). A sociocultural theory in writing. In C.A. MacArthur, S. Graham, & J. Fitzgerald (Eds.) Handbook of writing research New York: The Guilford Press. Probst, R. E. (1989). Transactional theory and response to student writing. In C.M. Anson (Ed.) Writing and Response: Theory, practice, and research (pp. 68-79). Raudenbush, S., Rowan, B., & Cheong, Y. ( 1992). Contextual effects on the selfperceived efficacy of high school teachers. Sociology in Education 65, 150-167. Reeves, L. (1997). Minimizing writing appreh ension in the learne r-centered classroom. English Journal, 86, 38-45. Rhodes, L & Shanklin, N. (1993) Windows into Literacy Assessing Learners K-8 Portsmouth, NH: Heinemann. Roehler, L.R., Duffy, G.G., Herrmann, B. A., Conley, M., & Johnson, J. (1988). Knowledge structures as evidence of th e personal: Bridging the gap from thought to practice. Journal of Curriculum Studies, 20, 159-165.

PAGE 209

194 Rowe, D.W. (2009). Early written communication. In R. Beard, J. Riley, D. Myhill, & M. Nystrand (Eds.) SAGE Handbook of Writing Development (pp. 213-231). Schaffer, J.C. (1995). The Jane Schaffer Method: Teac hing the multiparagraph essay: A sequential nine-week unit (3rd ed.). San Diego, CA: Jane Schaffer Publications. Scharton, M. (1996). The politics of validity. In E.M. White, W. D. Lutz, & S. Kamusikiri (Eds.) Assessment of writing: Po litics, policies, practices (pp. 53-75). Scherff, L., & Piazza, C. (2005). The more thi ngs change, the more they stay the same: A survey of high school stude nts writing experiences. Research in the Teaching of English 39(3), 271-304. Sipe, L.R. (2000). The construction of literary understanding by first and second graders in oral response to picture storybook read-alouds. Reading Research Quarterly, 35, 252-275. Smith, S. (1997). The genre of the end comme nt: Conventions in teacher responses to student writing. College Composition and Communication, 48 249-268. Snow, C., Burns, M., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children Washington, DC: National Academy Press. Sommers, N. (1982). Res ponding to student writing. College Composition and Communication, 33(2), 148-156. Sommers, J. (1989). The writers memo: Collaboration, response, and development. In C. Anson (Ed.), Writing and response: Theo ry, practice, and research (pp. 174-186). Urbana, IL: National Council of Teachers of English. Spandel, V. (2006). In defense of rubrics. The English Journal, 96 (1), 19-22. Sprinkle, R.S. (2004). Written commentary: A systematic, theory-based approach to response. Teaching English in the two-year college,31 (3), 273-286. Starch, D. & Eliot, E.C. (1912). Reliability of the grading of high-school work in English. The School Review 20(7), 442-457. Swaim, J. (1998). In search of an honest response. Language Arts, 75, 118-125. Tashakkori, A., & Teddlie, C. (2003). The past and future of mixed methods research: From data triangulation to mixed model designs. In A. Tashakkori & C. Teddlie (Eds.) Handbook of mixed methods in social and behavioral research (pp. 671-701). Thousand Oaks, CA: Sage Publications, Inc. Teale, W., & Sulzby, E. (1986). Emergent literacy: Writing and reading Norwood, NJ: Ablex.

PAGE 210

195 Temple, C., Nathan, R., Temple, F., & Burris, N.A. (1993). The Beginnings of Writing Needham Heights, MA: Allyn & Bacon. Thomas, P.L. (2000). The struggle itself : Teaching writing as we know we should. The English Journal 90, 39-45. Tompkins, G.E. (2008). Teaching Writing: Balancing Process and Product Upper Saddle River, NJ: Pearson Education, Inc. Vygotsky, L.S. (1962). Thought and language Cambridge, MA: The MIT Press. Vygotsky, L.S. (1978). Mind in society: The devel opment of higher psychological processes. Cambridge, MA: Harvard University Press. Wesley, K. (2000). The ill effects of the five paragraph theme. English Journal 90, 5760. White, E.M. (1985). Teaching and assessing writing. San Francisco: Jossey-Bass. White, E.M. (1996). Writing assessment beyond the classroom. In L.Z. Bloom, D.D. Daiker, & E.M. White (Eds.), Composition in the twentyfirst century: Crisis and change (pp. 101-111). White, E.M., Lutz, W.L., & Kamusikiri, S. (Eds.). (1996). Assessment of writing: Politics, policies, and practices New York: The Modern Language Association of America. Whitehurst, G.J., & Lonigan, C.J. (1998). Ch ild development and emergent literacy. Child Development, 69 848-872. Wiggins, G. (1994). The constant danger of sacrificing validity to reliability: Making writing assessment serve writers. Assessing Writing, 1(1), 129-139. Wiley, M. (2000). The popularity of formulaic writing (and why we need to resist). English Journal 90(1), 61-67. Williamson, M. (1993). An introduction to holistic scoring. In M. Williamson & B. Huot (Eds) Validating holistic scoring for writing assessment: Theoretical and empirical foundations (pp. 206-232). Cresskill, NJ: Hampton. Wilson, M. (2007). Why I won't be using rubrics to respond to students' writing. English Journal, 96(4), 62-66. Wilson, S.M. (1990). The secret garden of teacher education. Phi Delta Kappan 72, 204209.

PAGE 211

196 Wolcott, H.F. (1990). On seeking and rejecting validity in qualitativ e research. In E.W. Eisner & A. Preshkin. (Eds.) Qualitative inquiry in education: The continuing debate. (pp. 121-152). New York, NY: Teachers College Press. Wyatt-Sm ith, C., & Castleton, G. (2005). Exam ining how teachers judge student writing: An australian case study. Journal of Curriculum Studies, 37 (2), 131-154. Yaden, D., Rowe, D., & MacGillivray, L. (1999). Emergent literacy: A polyphony of perspectives Ann Arbor: University of Mich igan-Ann Arbor, Center for the Improvement of Early Reading Achievement. Yancey, K.B. (1999). Looking back as we look forward: Historicizing writing assessment. College Composition and Communication 50(3), 483-503.

Appendices

Appendix A
Writing and Evaluation Questionnaire

Please complete the following questionnaire. Results will be used for research purposes and will be kept confidential. Please respond truthfully to each question, as there are no right or wrong answers on this survey.

1. How often do you evaluate student writing?
____ Several times a day
____ Once a day
____ Once a week
____ Once every couple weeks
____ Once a month or less
____ Other (Please specify): ___________________

2. What percentage of the time that you spend assessing ALL of your students' work would you say is spent evaluating writing assignments?
____ 100%-75%
____ 74%-50%
____ 49%-25%
____ 24%-0%

3. Do you spend more time assessing some individual students' papers than others?
____ Yes
____ No

4. What accounts for differences in the amount of time spent on various papers? How would you characterize those papers you spend more time responding to as contrasted with those that take less time?
________________________________________________________________________

5. How do you feel about assessing student writing?
____ Positive
____ Somewhat positive
____ Somewhat negative
____ Negative

6. What is the most important aspect of writing that you are looking for when you grade a student's writing?
____ Correctness in Grammar and Punctuation
____ Ideas/Concepts
____ Voice
____ Organization
____ Fluency
____ Word Choice
____ Length of Writing
____ Other (Please specify): ______________________

7. Why do you feel that aspect is the most important part of the writing to consider when you are grading?
________________________________________________________________________

8. What do you see as the purpose(s) of writing assessment?
________________________________________________________________________

9. Where did you learn the different assessment methods that you use to evaluate student writing?
____ College or university
____ Professional Development Courses
____ Colleagues
____ Other

10. Do you ever have assignments where your students write more than one draft for you? If not, please go to #12.
____ Yes
____ No

11. How do the students receive grades for those papers?
____ Every draft has a separate grade.
____ The drafts are not graded.
____ The final copy and the drafts are put together for one grade.
____ Other (Please specify): ________________________________

12. Please mark how often you use each of the following methods of assessment while grading the writing of your students.
(1 = Never, 2 = Rarely, 3 = Sometimes, 4 = Always)
Checklists: 1 2 3 4
Group Conferences: 1 2 3 4
Holistic Scoring: 1 2 3 4
Individual Student Conferences: 1 2 3 4
Observation: 1 2 3 4
Portfolios: 1 2 3 4
Primary Traits Scoring: 1 2 3 4
Rubrics: 1 2 3 4
Self-Assessment: 1 2 3 4
Other: _____________ 1 2 3 4

13. Which of those types of assessment do you use most frequently when evaluating student writing?
_______________________________________

14. Why do you choose to use that method of assessing student writing more often than any other method?
________________________________________________________________________

15. Do you use both formal (written feedback, rubrics, etc.) and informal (observation, anecdotal notes, conferences, etc.) methods of assessing writing?
____ Yes
____ No

16. If so, what percentage (for a total of 100%) of your time spent evaluating writing is spent on:
____ Informal Assessments
____ Formal Assessments

17. Do you ever utilize the FCAT Writing Assessment rubric to score papers?
____ Yes
____ No

18. If so, how often do you use that?
____ Daily
____ Weekly
____ Monthly
____ Other

19. Why do you choose to use that rubric?
________________________________________________________________________

20. How helpful do you feel that the feedback from that rubric is to your students?
____ Extremely helpful
____ Helpful
____ Somewhat helpful
____ Not helpful

21. How often do you provide your students with written feedback on their writing assignments?
____ Always
____ Most of the time
____ Sometimes
____ Rarely
____ Never

22. How effective do you believe that your method(s) of evaluating writing are?
____ Extremely effective
____ Effective
____ Somewhat effective
____ Ineffective

23. Why do you feel that way?
________________________________________________________________________

24. What is your current position?
____ Teacher
____ Media Specialist
____ Reading Specialist
____ Administrator
____ Other (please specify): _________________

25. Do you currently teach writing to students? If you do currently teach writing to students, please go to #26. If you are not currently teaching, please skip to #27.
____ Yes
____ No

26. What grade level do you currently teach?
____ K
____ 1
____ 2
____ 3
____ 4
____ 5

27. Have you ever taught writing to students? If so, please go to #28. If you have never taught writing to students, please skip to #29.
____ Yes
____ No

28. How many years have you taught writing to students? Please include all years spent teaching writing to students in this response even if you are not currently teaching.
____ 1-5 years
____ 6-10 years
____ 11-15 years
____ 16-20 years
____ over 20 years

29. What is the highest level of education that you have completed?
____ Bachelor's Degree
____ Master's Degree
____ Doctoral Degree
____ Other (please specify): _________________

30. Sex:
____ Female
____ Male

31. Age:
____ 21-26
____ 27-32
____ 33-38
____ 39-44
____ 45-50
____ Over 50

Please feel free to share any other thoughts or comments that you have regarding the evaluation of student writing here. You may use the back of this page if you need more space for your response:

Appendix B
Final Version of Survey Monkey: Writing and Evaluation Questionnaire

The results of this questionnaire will be used for research purposes and will be kept confidential. Please respond truthfully to each question, as there are no right or wrong answers on this survey.

Thank you so much for agreeing to complete this survey. I am interested in learning more about your beliefs regarding teaching and assessing writing. I am aware that there are many different ways to approach the evaluation of writing. I also know that there are many factors that may influence your selection of the methods of evaluation that you utilize with students. Anything that you would like to share with me is welcome because your thoughts, feelings, and beliefs will help me gain a better understanding of the current status of writing evaluation in schools.

1. What do you see as the purpose(s) of writing assessment?

2. How often do you assess student writing?
Several times a day
Once a day
Once a week
Once every couple of weeks
Once a month or less
Other (please specify)

3. What percentage of the time that you spend assessing all of your students' work would you say is spent assessing writing assignments?
100%-75%
74%-50%
49%-25%
24%-0%

4. Do you spend more time assessing some individual students' papers than others?
Yes
No

5. What accounts for differences in the amount of time spent on various papers?

6. How would you characterize those papers you spend more time responding to as contrasted with those that take less time?

7. How do you feel about assessing student writing?
Positive
Somewhat positive
Somewhat negative
Negative

8. What is the most important aspect of writing that you are looking for when you assess a student's writing?
Correctness in Grammar and Punctuation
Ideas/Concepts
Voice
Organization
Fluency
Word Choice
Length of Writing
Other (please specify)

9. Why do you feel that this aspect is the most important part of the writing to consider when you are assessing writing?

10. Where did you learn the different assessment methods that you use to assess student writing?
College or university courses
School or district based training
Reading coaches or literacy specialists
Peer teachers
Other (please specify)

11. Do you ever have assignments in which your students write more than one draft for you?
Yes
No

12. How do the students receive grades for those papers?
Every draft has a separate grade.
The drafts are not graded.
The final copy and drafts are put together for one grade.
Other (please specify)

13. Please mark how often you use each of the following methods of assessment while assessing the writing of your students. Each method is rated on the scale 1 = Rarely, if ever; 2 = Once in a while; 3 = Frequently; 4 = Almost always.
Checklists: Method by which the teacher notes whether or not the student has accomplished what he or she has been asked to do but without judging the quality of the work (i.e., has five sentences, made a cover, wrote a narrative piece, etc.).
Teacher Conferences: The teacher and student meet to discuss the student's writing. This conversation may include a discussion about the strengths and weaknesses, suggestions for revisions, attention to conventions, etc.
Peer Conferences: Students meet with one or more of their peers to share and discuss their writing. These meetings may include suggestions for revisions, sharing what they like or dislike about each other's writing, etc.
Holistic Scoring: This method of evaluating writing requires the evaluator to look at all components of a writing sample in conjunction when giving a final grade rather than assessing individual characteristics separately.
Portfolios: Teachers have students compile samples of their writing over the course of a certain timeframe (a grading period, the whole year, etc.) in order to evaluate the writing.
Observations: During the times when students write, the teacher watches to see what each student is doing and may make a mental or anecdotal note about what he or she observes.
Rubrics: When assessing student writing, the teacher looks at specified characteristics as outlined on a rubric and decides how well the student succeeded in each area before adding those scores together for a final score.
FCAT Scoring Rubric: The teacher uses the FCAT rubric to evaluate student writing. The rubric requires teachers to look at focus, organization, support, and conventions.
Primary Traits Scoring: The teacher predetermines what characteristics of the writing are the most important as well as what will be assessed and how it will be assessed. This method is specific to each assignment.
Self Assessment: Students are given the opportunity to evaluate their own writing. They may use criteria established by the teacher or may create their own criteria.
Other (please specify)

14. Select the method of assessing student writing listed in the previous question that you use most frequently.
Checklists
Teacher Conferences
Peer Conferences
Holistic Scoring
Observations
Portfolios
Rubrics
FCAT Scoring Rubric
Primary Traits Scoring
Self Assessment
Other (please specify)

15. Please rate the method of assessing student writing that you use most frequently (as designated in the previous question) on a scale of 1-4, with 1 being minimally effective and 4 being extremely effective.
1 Minimally effective
2 Somewhat effective
3 Effective
4 Extremely effective

16. Please select the method of assessing student writing that you use second most frequently from the list below.
Checklists
Teacher Conferences
Peer Conferences
Holistic Scoring
Observations
Portfolios
Rubrics
FCAT Scoring Rubric
Primary Traits Scoring
Self Assessment
Other (please specify)

17. Please rate the method of assessing student writing that you use second most frequently (as designated in the previous question) on a scale of 1-4, with 1 being minimally effective and 4 being extremely effective.
1 Minimally effective
2 Somewhat effective
3 Effective
4 Extremely effective

18. Please select the method of assessing student writing that you use third most frequently from the list below.
Checklists
Teacher Conferences
Peer Conferences
Holistic Scoring
Observations
Portfolios
Rubrics
FCAT Scoring Rubric
Primary Traits Scoring
Self Assessment
Other (please specify)

19. Please rate the method of assessing student writing that you use third most frequently (as designated in the previous question) on a scale of 1-4, with 1 being minimally effective and 4 being extremely effective.
1 Minimally effective
2 Somewhat effective
3 Effective
4 Extremely effective

20. Why do you choose to use these three methods of assessing student writing more often than other methods?

21. Are you mandated by your school to use any of the three methods of assessment that you use most frequently?
Yes
No

22. Which method(s) of evaluating writing are you mandated to use?

23. Please think about ALL of the different methods of evaluation that you use when reviewing student writing. As a whole, how effective do you believe that the method(s) of evaluating writing that you utilize are?
Minimally effective
Somewhat effective
Effective
Extremely effective

24. Why do you feel that way?

25. Do you use both formal (written feedback, rubrics, grades, etc.) and informal (observation, anecdotal notes, conferences, etc.) methods of assessing writing?
Yes
No

26. What percentage (for a total of 100%) of your time spent evaluating writing is spent on:
Informal Assessments (e.g., observations, anecdotal notes, conferences, etc.)
Formal Assessments (e.g., written feedback, rubrics, grades, etc.)

27. Do you ever utilize the FCAT Writing Assessment rubric to score papers?
Yes
No

28. How often do you use the FCAT Writing Assessment?
Daily
Weekly
Monthly
Other (please specify)

29. Why do you choose to use the FCAT rubric?

30. How helpful do you feel that the feedback from the FCAT rubric is to your students?
Extremely helpful
Helpful
Somewhat helpful
Minimally helpful

31. Do you use any standardized writing assessments (SAT, FCAT, etc.)?
Yes
No

32. How often do you provide your students with written feedback on their writing assignments?
Almost always
Frequently
Once in a while
Rarely, if ever

33. What is your current position?
Teacher
Media Specialist
Reading Specialist/Literacy Coach
Administrator
Other (please specify)

34. Have you ever taught writing to students?
Yes
No

35. How many years have you taught writing to students? Please include all years spent teaching writing to students in this response even if you are not currently teaching.
1-5 years
6-10 years
11-15 years
16-20 years
over 20 years

36. Do you currently teach writing to students?
Yes
No

37. What grade level do you currently teach?
Kindergarten
1st
2nd
3rd
4th
5th
6th

38. What is the highest level of education that you have completed?
Bachelor's Degree
Master's Degree
Doctoral Degree
Other (please specify)

39. With which of the following are you currently affiliated?
Public School
Private School
Charter School
Homeschool
Other (please specify)

40. In which county is your school?
Hillsborough
Pasco
Pinellas

41. What is the name of your school? Please note: The names of schools, teachers, and districts will NOT be reported in my research.

42. What is your gender?
Male
Female

43. In which range does your age fall?
21-26
27-32
33-38
39-44
45-50
Over 50

44. Please feel free to share any other thoughts or comments that you have regarding the evaluation of student writing here.

Appendix C
Interview Protocol for the Educator Interview
Developed using Dillman et al. (2009) and Patton (2002) as guides

Hello! Thank you so much for agreeing to talk with me today. I am interested in chatting with you regarding your beliefs about teaching and assessing writing. There are no right or wrong answers to my questions. Anything that you would like to share with me is welcome.

Let's start by talking about writing. What do you enjoy about teaching writing?
Probes: Why do you enjoy that? What do you feel when you are teaching writing? Do you look forward to that part of your day?

Okay, can you tell me what you feel is the most important thing that you can teach your students about writing?
Probes: Why is that so important? How do you go about teaching that? What happens if this is missed? Is there any other aspect of writing that you feel is equally important?

Now I'd love to talk with you about the conference and the meetings that you all had here to select the student participants who will represent your school at the conference.

How did you feel about the selection process when selecting students to attend the conference?
Probes: Why did you feel that way? Is there anything that could change the way you feel? Was there something that you would have done differently?

Can you tell me about how the group decided on a method to use when selecting the participants for the conference?
Probes: Did everyone have input in deciding on the methods used? Was there another way that you would have liked to have approached the task?

Do you feel like the best student writings were selected from the submissions for those authors to attend the conference?
Probes: What were the factors that led you, as a group, to select those writings? Were there any writings that you would have liked to have seen included in the winning group that were not? Why did you want to see them included? Why do you feel that they were not included?

Can you tell me how you think that the FCAT writing assessment, any other standardized assessments, or the writing curriculum that you have in place at your school had an impact on the student writings that were submitted for the conference?
Probes: Were the students given instructions on what or how to write? Were these writings typical of the types of writings you see from your students?

Can you tell me how the FCAT writing assessment, other standardized assessments, or the writing curriculum you have in place at your school may have impacted your evaluation decisions during this selection process?
Probes: How did you, personally, feel an influence from the FCAT on the decisions that you made? How do you think that the FCAT may have influenced the decisions of the other people on your team?

What else would you like to share with me that we have not yet had the opportunity to discuss? I'd love to hear your thoughts.

Just to close, may I ask how long you have been an educator? What degree do you have (i.e., a Bachelor's, Master's, or a Ph.D.)? And how many years have you taught writing to your students?

Thank you so much for taking the time out of your busy schedule to talk with me today. I greatly appreciate your willingness to share your thoughts with me, and your comments will help me greatly.

Appendix D
Additional Quantitative Data Result Tables

Question 2: How often do you assess student writing?

Table D1: Frequency of assessing student writing
(Frequency / Percent / Valid Percent)
Others: 7 / 16.3 / 16.3
Once a day: 6 / 14.0 / 14.0
Once a week: 6 / 14.0 / 14.0
Once every couple of weeks: 13 / 30.2 / 30.2
Once a month or less: 11 / 25.6 / 25.6
Total: 43 / 100.0 / 100.0

Table D2: County and frequency of assessing cross tabulation
(Each cell shows count and row percentage; columns are Others, Once a day, Once a week, Once every couple of weeks, Once a month or less, Total)
Public County A: 1 (12.5%), 2 (25.0%), 1 (12.5%), 3 (37.5%), 1 (12.5%); 8 (100.0%)
Public County B: 3 (16.7%), 2 (11.1%), 0 (0.0%), 7 (38.9%), 6 (33.3%); 18 (100.0%)
Total: 4 (15.4%), 4 (15.4%), 1 (3.8%), 10 (38.5%), 7 (26.9%); 26 (100.0%)

Table D3: Experience and frequency of assessing cross tabulation
(Each cell shows count and row percentage; columns are Others, Once a day, Once a week, Once every couple of weeks, Once a month or less, Total)
1-5 years: 1 (12.5%), 0 (0.0%), 1 (12.5%), 4 (50.0%), 2 (25.0%); 8 (100.0%)
6-10 years: 2 (20.0%), 1 (10.0%), 1 (10.0%), 2 (20.0%), 4 (40.0%); 10 (100.0%)
11-15 years: 1 (9.1%), 3 (27.3%), 2 (18.2%), 3 (27.3%), 2 (18.2%); 11 (100.0%)
16-20 years: 1 (25.0%), 0 (0.0%), 0 (0.0%), 2 (50.0%), 1 (25.0%); 4 (100.0%)
More than 20 years: 2 (28.6%), 2 (28.6%), 0 (0.0%), 2 (28.6%), 1 (14.3%); 7 (100.0%)
Total: 7 (17.5%), 6 (15.0%), 4 (10.0%), 13 (32.5%), 10 (25.0%); 40 (100.0%)
Chi-Square = 10.194, p = 0.816
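The chi-square values reported beneath these cross tabulations test whether the row and column variables are independent. As a minimal illustrative sketch (the dissertation does not state which statistical package produced the reported values, so this is an assumption and not the study's own procedure), the statistic for Table D3 can be recomputed from the observed counts with SciPy; small differences from the reported 10.194 could arise from rounding in the source table.

```python
# Minimal sketch: chi-square test of independence for Table D3
# (teaching experience x frequency of assessing writing).
# Assumes NumPy and SciPy are available; illustrative only.
import numpy as np
from scipy.stats import chi2_contingency

# Observed counts; rows are experience bands, columns are assessment
# frequency (Others, Once a day, Once a week, Once every couple of
# weeks, Once a month or less), copied from Table D3.
observed = np.array([
    [1, 0, 1, 4, 2],   # 1-5 years
    [2, 1, 1, 2, 4],   # 6-10 years
    [1, 3, 2, 3, 2],   # 11-15 years
    [1, 0, 0, 2, 1],   # 16-20 years
    [2, 2, 0, 2, 1],   # more than 20 years
])

# chi2_contingency returns the statistic, p-value, degrees of freedom,
# and the table of expected counts under independence.
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")
```

With a 5 x 5 table, the test has (5 - 1)(5 - 1) = 16 degrees of freedom, which matches the large p-values reported for these small samples.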

Table D4: Grade teaching and frequency of assessing cross tabulation
(Each cell shows count and row percentage; columns are Others, Once a day, Once a week, Once every couple of weeks, Once a month or less, Total)
Kindergarten: 0 (0.0%), 0 (0.0%), 1 (100.0%), 0 (0.0%), 0 (0.0%); 1 (100.0%)
1st: 0 (0.0%), 0 (0.0%), 0 (0.0%), 2 (100.0%), 0 (0.0%); 2 (100.0%)
2nd: 0 (0.0%), 1 (33.3%), 1 (33.3%), 1 (33.3%), 0 (0.0%); 3 (100.0%)
3rd: 2 (40.0%), 0 (0.0%), 1 (20.0%), 0 (0.0%), 2 (40.0%); 5 (100.0%)
4th: 0 (0.0%), 2 (20.0%), 0 (0.0%), 6 (60.0%), 2 (20.0%); 10 (100.0%)
5th: 3 (20.0%), 1 (6.7%), 1 (6.7%), 4 (26.7%), 6 (40.0%); 15 (100.0%)
6th: 0 (0.0%), 0 (0.0%), 1 (100.0%), 0 (0.0%), 0 (0.0%); 1 (100.0%)
Total: 5 (13.5%), 4 (10.8%), 5 (13.5%), 13 (35.1%), 10 (27.0%); 37 (100.0%)
Chi-Square = 33.243, p = 0.096

Table D5: Age and frequency of assessing cross tabulation
(Each cell shows count and row percentage; columns are Others, Once a day, Once a week, Once every couple of weeks, Once a month or less, Total)
21-26: 0 (0.0%), 0 (0.0%), 0 (0.0%), 2 (66.7%), 1 (33.3%); 3 (100.0%)
27-32: 2 (40.0%), 1 (20.0%), 0 (0.0%), 1 (20.0%), 1 (20.0%); 5 (100.0%)
33-38: 0 (0.0%), 2 (18.2%), 2 (18.2%), 4 (36.4%), 3 (27.3%); 11 (100.0%)
39-44: 0 (0.0%), 1 (14.3%), 3 (42.9%), 1 (14.3%), 2 (28.6%); 7 (100.0%)
45-50: 0 (0.0%), 0 (0.0%), 0 (0.0%), 1 (100.0%), 0 (0.0%); 1 (100.0%)
Over 50: 4 (30.8%), 2 (15.4%), 0 (0.0%), 4 (30.8%), 3 (23.1%); 13 (100.0%)
Total: 6 (15.0%), 6 (15.0%), 5 (12.5%), 13 (32.5%), 10 (25.0%); 40 (100.0%)
Chi-Square = 20.452, p = 0.430

Question 3: What percentage of the time that you spend assessing all of your students' work would you say is spent assessing writing assignments?

Table D6: Percentage of total work involved in assessing writing assignments
(Frequency / Percent / Valid Percent)
75%-100%: 1 / 2.3 / 2.3
50%-74%: 14 / 32.6 / 32.6
25%-49%: 18 / 41.9 / 41.9
0%-24%: 10 / 23.3 / 23.3
Total: 43 / 100.0 / 100.0

Table D7: Percentage of time assessing work by county
(Each cell shows count and row percentage; columns are the coded response categories 1.00 through 4.00, plus Total)
Public County A: 0 (0.0%), 4 (50.0%), 3 (37.5%), 1 (12.5%); 8 (100.0%)
Public County B: 1 (5.6%), 3 (16.7%), 8 (44.4%), 6 (33.3%); 18 (100.0%)
Total: 1 (3.8%), 7 (26.9%), 11 (42.3%), 7 (26.9%); 26 (100.0%)

Table D8: Experience and percentage of total work in assessing writing cross tabulation
(Each cell shows count and row percentage; columns are 75-100, 50-74, 25-49, 0-24, Total)
1-5 years: 0 (0.0%), 2 (25.0%), 4 (50.0%), 2 (25.0%); 8 (100.0%)
6-10 years: 0 (0.0%), 3 (30.0%), 5 (50.0%), 2 (20.0%); 10 (100.0%)
11-15 years: 1 (9.1%), 4 (36.4%), 4 (36.4%), 2 (18.2%); 11 (100.0%)
16-20 years: 0 (0.0%), 0 (0.0%), 4 (100.0%), 0 (0.0%); 4 (100.0%)
More than 20 years: 0 (0.0%), 4 (57.1%), 0 (0.0%), 3 (42.9%); 7 (100.0%)
Total: 1 (2.5%), 13 (32.5%), 17 (42.5%), 9 (22.5%); 40 (100.0%)
Chi-Square = 14.205, p = 0.288

Table D9: Grade teaching and percentage of total work in assessing writing cross tabulation
(Each cell shows count and row percentage; columns are 75-100, 50-74, 25-49, 0-24, Total)
Kindergarten: 0 (0.0%), 0 (0.0%), 1 (100.0%), 0 (0.0%); 1 (100.0%)
1st: 0 (0.0%), 1 (50.0%), 0 (0.0%), 1 (50.0%); 2 (100.0%)
2nd: 0 (0.0%), 2 (66.7%), 1 (33.3%), 0 (0.0%); 3 (100.0%)
3rd: 0 (0.0%), 0 (0.0%), 4 (80.0%), 1 (20.0%); 5 (100.0%)
4th: 0 (0.0%), 3 (30.0%), 4 (40.0%), 3 (30.0%); 10 (100.0%)
5th: 1 (6.7%), 4 (26.7%), 6 (40.0%), 4 (26.7%); 15 (100.0%)
6th: 0 (0.0%), 1 (100.0%), 0 (0.0%), 0 (0.0%); 1 (100.0%)
Total: 1 (2.7%), 11 (29.7%), 16 (43.2%), 9 (24.3%); 37 (100.0%)
Chi-Square = 12.308, p = 0.831


230 Appendix D: (Continued) Table D10: Age of Teachers and Percentage of total work in assessing writing cross tabulation Age 1.00 2.00 3.00 4.00 Total 0 2 0 1 3 21 26 .0% 66.7% .0% 33.3% 100.0% 0 1 3 1 5 27 32 .0% 20.0% 60.0% 20.0% 100.0% 0 3 5 3 11 33 -38 .0% 27.3% 45.5% 27.3% 100.0% 0 2 5 0 7 39 -44 .0% 28.6% 71.4% .0% 100.0% 1 0 0 0 1 45 50 100.0% .0% .0% .0% 100.0% 0 5 5 3 13 Over 50 .0% 38.5% 38.5% 23.1% 100.0% 1 13 18 8 40 Total 2.5% 32.5% 45.0% 20.0% 100.0% Chi-Square = 46.390, p <0.001 Table D11: Spend more time in assessing some students writing that others Frequency Percent Valid Percent Yes 39 90.7 92.9 No 3 7.0 7.1 Total 42 97.7 100.0 Missing 1 2.3 Total 43 100.0
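Note on reading the frequency tables in this appendix: the Percent column is computed over all 43 returned questionnaires, while the Valid Percent column is computed over only those respondents who answered the item. In Table D11, for example, 39 of the 43 respondents (90.7%) reported spending more time assessing some students' writing than others; with one response missing, the valid percentage is 39/42 = 92.9%.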


231 Appendix D: (Continued) Table D12: Experience and whether more time spent in assessing some assignments than others Exp Yes No Total 8 0 8 1 -5 years 100.0% .0% 100.0% 9 1 10 6 10 years 90.0% 10.0% 100.0% 9 2 11 11 -15 years 81.8% 18.2% 100.0% 4 0 4 16 20 years 100.0% .0% 100.0% 7 0 7 More than 20 years 100.0% .0% 100.0% 37 3 40 Total 92.5% 7.5% 100.0% Chi-Square = 3.440, p= 0.487 Table D13: Grade teaching and whether more time spent in assessing some assignments than others Grade Yes No Total 2 0 2 Kindergarten 100.0% .0% 100.0% 2 1 3 2nd 66.7% 33.3% 100.0% 5 0 5 3rd 100.0% .0% 100.0% 9 1 10 4th 90.0% 10.0% 100.0% 15 0 15 5th 100.0% .0% 100.0% 1 0 1 6th 100.0% .0% 100.0% 34 2 36 Total 94.4% 5.6% 100.0%


232 Appendix D: (Continued) Table D14: Age teaching and whether more time spent in assessing some assignments than others Age Yes No Total 3 0 3 21 26 100.0% .0% 100.0% 5 0 5 27 32 100.0% .0% 100.0% 9 1 10 33 -38 90.0% 10.0% 100.0% 6 1 7 39 -44 85.7% 14.3% 100.0% 1 0 1 45 50 100.0% .0% 100.0% 12 1 13 Over 50 92.3% 7.7% 100.0% 36 3 39 Total 92.3% 7.7% 100.0% Chi-Square = 1.254, p = 0.940 Question 7: How do you feel about assessing student writing? Table D15: Feelings about assessing student writing Feeling Frequency Percent Valid Percent Positive 23 53.5 56.1 Somewhat positive 13 30.2 31.7 Negative 2 4.7 4.9 Some what negative 3 7.0 7.3 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0


233 Appendix D: (Continued) Table D16: Feelings ab out assessing writing by county 1.00 2.00 3.00 4.00 Total 6 2 0 0 8 Public County A 75.0% 25.0% .0% .0% 100.0% 8 7 1 2 18 Public County B 44.4% 38.9% 5.6% 11.1% 100.0% 14 9 1 2 26 53.8% 34.6% 3.8% 7.7% 100.0% Table D17 : Experience and feeling about writing Exp Positive Somewhat positive Negative Somewhat negative Total 3 3 1 1 8 1 -5 years 37.5% 37.5% 12.5% 12.5% 100.0% 5 4 1 0 10 6 10 years 50.0% 40.0% 10.0% .0% 100.0% 5 4 0 2 11 11 -15 years 45.5% 36.4% .0% 18.2% 100.0% 4 0 0 0 4 16 20 years 100.0% .0% .0% .0% 100.0% 5 2 0 0 7 More than 20 years 71.4% 28.6% .0% .0% 100.0% 22 13 2 3 40 Total 55.0% 32.5% 5.0% 7.5% 100.0% Chi-Square = 10.123 p = 0.605


234 Appendix D: (Continued) Table D18: Grade teaching and feeling about writing Positive Somewhat positive Negative Somewhat negative Total 1 0 0 0 1 Kindergarten 100.0% .0% .0% .0% 100.0% 0 2 0 0 2 1st .0% 100.0% .0% .0% 100.0% 3 0 0 0 3 2nd 100.0% .0% .0% .0% 100.0% 3 1 1 0 5 3rd 60.0% 20.0% 20.0% .0% 100.0% 4 3 1 2 10 4th 40.0% 30.0% 10.0% 20.0% 100.0% 8 6 0 1 15 5th 53.3% 40.0% .0% 6.7% 100.0% 0 1 0 0 1 6th .0% 100.0% .0% .0% 100.0% 19 13 2 3 37 Total 51.4% 35.1% 5.4% 8.1% 100.0% Chi-Square = 15.525 p = 0.626


235 Appendix D: (Continued) Table D19: Age teaching and feeling about writing Age Positive Somewhat positive Negative Somewhat negative Total 2 1 0 0 3 21 26 66.7% 33.3% .0% .0% 100.0% 2 3 0 0 5 27 32 40.0% 60.0% .0% .0% 100.0% 5 4 1 1 11 33 -38 45.5% 36.4% 9.1% 9.1% 100.0% 4 2 1 0 7 39 -44 57.1% 28.6% 14.3% .0% 100.0% 0 0 0 1 1 45 50 .0% .0% .0% 100.0% 100.0% 10 2 0 1 13 Over 50 76.9% 15.4% .0% 7.7% 100.0% 23 12 2 3 40 Total 57.5% 30.0% 5.0% 7.5% 100.0% Chi-Square = 20.152 p = 0.166 Question 8: What is the most important as pect of writing that you are looking for when you assess a student writing? Table D20: Most important aspect that teachers look for in student writing Frequency Percent Valid Percent Others 8 18.6 19.5 Correctness in grammar and punctuation 2 4.7 4.9 Ideas / Concepts 23 53.5 56.1 Voice 2 4.7 4.9 Organization 5 11.6 12.2 Fluency 1 2.3 2.4 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0


236 Appendix D: (Continued) Table D21: Experience and most importa nt aspect that teachers look for in a writing assignment Exp Others Correctness in grammar Ideas /concepts Voice Organizatio n Fluency Total 2 1 5 0 0 0 8 1 -5 years 25.0% 12.5% 62.5% .0% .0% .0% 100.0 % 2 1 5 1 0 1 10 6 10 years 20.0% 10.0% 50.0% 10.0% .0% 10.0% 100.0 % 2 0 6 0 3 0 11 11 15 years 18.2% .0% 54.5% .0% 27.3% .0% 100.0 % 0 0 3 0 1 0 4 16 20 years .0% .0% 75.0% .0% 25.0% .0% 100.0 % 2 0 4 0 1 0 7 Mor e than 20 years 28.6% .0% 57.1% .0% 14.3% .0% 100.0 % 8 2 23 1 5 1 40 Total 20.0% 5.0% 57.5% 2.5% 12.5% 2.5% 100.0 % Chi-Square = 14.726 p = 0.792 Table D22: Most important aspect of writing by county .00 1.00 2.00 3.00 4.00 5.00 Total 2 0 5 0 1 0 8 Public County A 25.0% .0% 62.5% .0% 12.5% .0% 100.0% 2 2 12 1 0 1 18 Public County B 11.1% 11.1% 66.7% 5.6% .0% 5.6% 100.0% 4 2 17 1 1 1 26 15.4% 7.7% 65.4% 3.8% 3.8% 3.8% 100.0%


237 Appendix D: (Continued) Table D23: Grade teaching and most important aspect that teachers look for in a writing assignment Grade Others Correctness in grammar Ideas /concepts Voice Organization Fluency Total 0 0 0 1 0 0 1 Kindergarten .0% .0% .0% 100.0% .0% .0% 100.0% 1 0 0 0 1 0 2 1st 50.0% .0% .0% .0% 50.0% .0% 100.0% 1 0 1 0 1 0 3 2nd 33.3% .0% 33.3% .0% 33.3% .0% 100.0% 1 1 3 0 0 0 5 3rd 20.0% 20.0% 60.0% .0% .0% .0% 100.0% 2 0 6 0 2 0 10 4th 20.0% .0% 60.0% .0% 20.0% .0% 100.0% 2 1 9 1 1 1 15 5th 13.3% 6.7% 60.0% 6.7% 6.7% 6.7% 100.0% 0 0 1 0 0 0 1 6th .0% .0% 100.0% .0% .0% .0% 100.0% 7 2 20 2 5 1 37 Total 18.9% 5.4% 54.1% 5.4% 13.5% 2.7% 100.0% Chi-Square = 31.186 p = 0.406


238 Appendix D: (Continued) Table D24: Age and most important aspect that teachers look for in a writing assignment Age Others Correctness in grammar Ideas /concepts Voice Organization Fluency Total 1 1 1 0 0 0 3 21 26 33.3% 33.3% 33.3% .0% .0% .0% 100.0% 2 0 3 0 0 0 5 27 32 40.0% .0% 60.0% .0% .0% .0% 100.0% 1 0 4 2 3 1 11 33 -38 9.1% .0% 36.4% 18.2% 27.3% 9.1% 100.0% 1 1 4 0 1 0 7 39 -44 14.3% 14.3% 57.1% .0% 14.3% .0% 100.0% 0 0 1 0 0 0 1 45 50 .0% .0% 100.0% .0% .0% .0% 100.0% 2 0 10 0 1 0 13 Over 50 15.4% .0% 76.9% .0% 7.7% .0% 100.0% 7 2 23 2 5 1 40 Total 17.5% 5.0% 57.5% 5.0% 12.5% 2.5% 100.0% Chi-Square = 23.639 p = 0.540 Question 10: Where did you learn the different assessment methods that you use to assess student writing? Table D25: Sources of learning di fferent methods of assessment Sources Frequency Percent Valid Percent Others 6 14.0 14.6 College ./ University 9 20.9 22.0 School / District based training 20 46.5 48.8 Reading coaches and literary specialists 5 11.6 12.2 Peer teachers 1 2.3 2.4 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0


239 Appendix D: (Continued) Table D26: Places where methods learned by county .00 1.00 2.00 3.00 Total 0 1 7 0 8 Public County A .0% 12.5% 87.5% .0% 100.0% 1 5 8 4 18 Public County B 5.6% 27.8% 44.4% 22.2% 100.0% 1 6 15 4 26 3.8% 23.1% 57.7% 15.4% 100.0% Question 11: Do you ever have assignment s in which your students write more than one draft for you? Table D27: Assignments with more than one draft Response Frequency Percent Valid Percent Yes 34 79.1 82.9 No 7 16.3 17.1 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0 Table D28: More than one draft by county 1.00 2.00 Total 6 2 8 Public County A 75.0% 25.0% 100.0% 15 3 18 Public County B 83.3% 16.7% 100.0% 21 5 26 80.8% 19.2% 100.0%


240 Appendix D: (Continued) Table D29: Experience a nd assessing more assignment with more than one draft Exp Yes No Total 8 0 8 1 -5 years 100.0% .0% 100.0% 8 2 10 6 10 years 80.0% 20.0% 100.0% 9 2 11 11 -15 years 81.8% 18.2% 100.0% 4 0 4 16 20 years 100.0% .0% 100.0% 5 2 7 More than 20 years 71.4% 28.6% 100.0% 34 6 40 Total 85.0% 15.0% 100.0% Chi-Square = 3.412 p = 0.491 Table D30: Grade teaching and assessi ng more assignment with more than one draft Grade Yes No Total 0 1 1 Kindergarten .0% 100.0% 100.0% 0 2 2 1st .0% 100.0% 100.0% 3 0 3 2nd 100.0% .0% 100.0% 4 1 5 3rd 80.0% 20.0% 100.0% 9 1 10 4th 90.0% 10.0% 100.0% 13 2 15 5th 86.7% 13.3% 100.0% 1 0 1 6th 100.0% .0% 100.0% 30 7 37 Total 81.1% 18.9% 100.0%


241 Chi-Square = 14.618 p = 0.023 Table D31: Age and assessing assignments with more than one draft Age 1.00 2.00 Total 3 0 3 21-26 100.0% .0% 100.0% 5 0 5 27-32 100.0% .0% 100.0% 8 3 11 33-38 72.7% 27.3% 100.0% 6 1 7 39-44 85.7% 14.3% 100.0% 0 1 1 45-50 .0% 100.0% 100.0% 11 2 13 Over 50 84.6% 15.4% 100.0% 33 7 40 Total 82.5% 17.5% 100.0% Chi-Square = 7.229 p = 0.204


242 Appendix D: (Continued) Question 12: How do the students receive grades for those papers? Table D32: Method of receiving grades by students Frequency Percent Valid Percent Others 5 11.6 15.2 Every draft has a separate grade 7 16.3 21.2 Drafts are not graded 9 20.9 27.3 Final copy and draft are put together for one grade 12 27.9 36.4 Total 33 76.7 100.0 Missing 10 23.3 Total 43 100.0 Table D33: Method of receiving grades by county .00 1.00 2.00 3.00 Total 0 0 2 4 6 Public County A .0% .0% 33.3% 66.7% 100.0% 2 3 3 7 15 Public County B 13.3% 20.0% 20.0% 46.7% 100.0% 2 3 5 11 21 9.5% 14.3% 23.8% 52.4% 100.0% Question 13: Please mark how often you use each of the following methods of assessment while assessing th e writing of your students. Table D34: Frequency of using checklists Frequency Percent Valid Percent Rarely 6 14.0 14.6 Once in a while 11 25.6 26.8 Frequently 19 44.2 46.3 Almost always 5 11.6 12.2 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0


243 Appendix D: (Continued) Table D35 : Experience and frequency of using checklists cross tabulation Exp Rarely Once in a while Frequently Almost always Total 1 2 4 1 8 1 -5 years 12.5% 25.0% 50.0% 12.5% 100.0% 0 4 4 2 10 6 10 years .0% 40.0% 40.0% 20.0% 100.0% 2 4 5 0 11 11 -15 years 18.2% 36.4% 45.5% .0% 100.0% 1 1 2 0 4 16 20 years 25.0% 25.0% 50.0% .0% 100.0% 1 0 4 2 7 More than 20 years 14.3% .0% 57.1% 28.6% 100.0% 5 11 19 5 40 Total 12.5% 27.5% 47.5% 12.5% 100.0% Chi-Square = 8.939 p = 0.708 Table D36: Frequency of checklists by county 1.00 2.00 3.00 4.00 Total 0 3 5 0 8 Public County A .0% 37.5% 62.5% .0% 100.0% 4 6 5 3 18 Public County B 22.2% 33.3% 27.8% 16.7% 100.0% 4 9 10 3 26 15.4% 34.6% 38.5% 11.5% 100.0%


244 Appendix D: (Continued) Table D37: Grade teaching and frequenc y of using checklists cross tabulation Grade Teaching Rarely Once in a while Frequently Almost always Total 1 0 0 0 1 Kindergarten 100.0% .0% .0% .0% 100.0% 0 0 2 0 2 1st .0% .0% 100.0% .0% 100.0% 0 2 1 0 3 2nd .0% 66.7% 33.3% .0% 100.0% 1 2 2 0 5 3rd 20.0% 40.0% 40.0% .0% 100.0% 0 4 5 1 10 4th .0% 40.0% 50.0% 10.0% 100.0% 1 2 9 3 15 5th 6.7% 13.3% 60.0% 20.0% 100.0% 1 0 0 0 1 6th 100.0% .0% .0% .0% 100.0% 4 10 19 4 37 Total 10.8% 27.0% 51.4% 10.8% 100.0% Chi-Square = 26.728 p = 0.084


245 Appendix D: (Continued) Table D38: Age and frequency of using checklists cross tabulation Age Rarely Once in a while Frequently Almost always Total 1 0 1 1 3 21 26 33.3% .0% 33.3% 33.3% 100.0% 0 1 4 0 5 27 32 .0% 20.0% 80.0% .0% 100.0% 2 4 4 1 11 33 -38 18.2% 36.4% 36.4% 9.1% 100.0% 0 3 4 0 7 39 -44 .0% 42.9% 57.1% .0% 100.0% 0 1 0 0 1 45 50 .0% 100.0% .0% .0% 100.0% 2 2 6 3 13 Over 50 15.4% 15.4% 46.2% 23.1% 100.0% 5 11 19 5 40 Total 12.5% 27.5% 47.5% 12.5% 100.0% Chi-Square = 13.560 p = 0.559 Table D39: Teacher conferences Frequency Frequency Percent Valid Percent Once in a while 9 20.9 22.0 Frequently 21 48.8 51.2 Almost always 11 25.6 26.8 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0


246 Appendix D: (Continued) Table D40: Frequency of teacher conferences by county 2.00 3.00 4.00 Total 1 2 5 8 Public County A 12.5% 25.0% 62.5% 100.0% 4 9 5 18 Public County B 22.2% 50.0% 27.8% 100.0% 5 11 10 26 19.2% 42.3% 38.5% 100.0% Table D41 : Experience and freque ncy of using teacher conferences Once in a while Frequently Almost always Total 0 6 2 8 1 -5 years .0% 75.0% 25.0% 100.0% 4 2 4 10 6 10 years 40.0% 20.0% 40.0% 100.0% 4 5 2 11 11 -15 years 36.4% 45.5% 18.2% 100.0% 1 2 1 4 16 20 years 25.0% 50.0% 25.0% 100.0% 0 6 1 7 More than 20 years .0% 85.7% 14.3% 100.0% 9 21 10 40 Total 22.5% 52.5% 25.0% 100.0% Chi-Square = 11.476 p = 0.176


247 Appendix D: (Continued) Table D42: Grade teaching and frequency of using teacher conferences cross tabulation Grade Teaching Once in a while Frequently Almost always Total 0 0 1 1 Kindergarten .0% .0% 100.0% 100.0% 0 2 0 2 1st .0% 100.0% .0% 100.0% 1 2 0 3 2nd 33.3% 66.7% .0% 100.0% 1 2 2 5 3rd 20.0% 40.0% 40.0% 100.0% 2 5 3 10 4th 20.0% 50.0% 30.0% 100.0% 3 8 4 15 5th 20.0% 53.3% 26.7% 100.0% 0 1 0 1 6th .0% 100.0% .0% 100.0% 7 20 10 37 Total 18.9% 54.1% 27.0% 100.0% Chi-Square = 7.056 p = 0.854


248 Appendix D: (Continued) Table D43: Age and frequency of using teacher conferences cross tabulation Age Once in a while Frequently Almost always Total 0 1 2 3 21 26 .0% 33.3% 66.7% 100.0% 1 3 1 5 27 32 20.0% 60.0% 20.0% 100.0% 4 4 3 11 33 -38 36.4% 36.4% 27.3% 100.0% 1 3 3 7 39 -44 14.3% 42.9% 42.9% 100.0% 1 0 0 1 45 50 100.0% .0% .0% 100.0% 2 9 2 13 Over 50 15.4% 69.2% 15.4% 100.0% 9 20 11 40 Total 22.5% 50.0% 27.5% 100.0% Chi-Square = 10.354 p = 0.410 Table D44: Frequency of peer conferences Frequency Frequency Percent Valid Percent Rarely 1 2.3 2.4 Once in a while 16 37.2 39.0 Frequently 20 46.5 48.8 Almost always 4 9.3 9.8 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0


249 Appendix D: (Continued) Table D45: Frequency of peer conferences by county 2.00 3.00 4.00 Total 3 4 1 8 Public County A 37.5% 50.0% 12.5% 100.0% 4 11 3 18 Public County B 22.2% 61.1% 16.7% 100.0% 7 15 4 26 26.9% 57.7% 15.4% 100.0% Table D46: Experience and fre quency of using peer conferences Exp Rarely Once in a while Frequently Almost always Total 0 1 5 2 8 1 -5 years .0% 12.5% 62.5% 25.0% 100.0% 0 4 5 1 10 6 10 years .0% 40.0% 50.0% 10.0% 100.0% 1 6 4 0 11 11 -15 years 9.1% 54.5% 36.4% .0% 100.0% 0 2 2 0 4 16 20 years .0% 50.0% 50.0% .0% 100.0% 0 3 3 1 7 More than 20 years .0% 42.9% 42.9% 14.3% 100.0% 1 16 19 4 40 Total 2.5% 40.0% 47.5% 10.0% 100.0% Chi-Square = 8.990 p = 0.704


250 Appendix D: (Continued) Table D47: Grade teaching and frequency of using peer conferences cross tabulation Grade Rarely Once in a while Frequently Almost always Total 0 0 1 0 1 Kindergarten .0% .0% 100.0% .0% 100.0% 0 1 1 0 2 1st .0% 50.0% 50.0% .0% 100.0% 0 3 0 0 3 2nd .0% 100.0% .0% .0% 100.0% 0 3 2 0 5 3rd .0% 60.0% 40.0% .0% 100.0% 1 3 5 1 10 4th 10.0% 30.0% 50.0% 10.0% 100.0% 0 5 7 3 15 5th .0% 33.3% 46.7% 20.0% 100.0% 0 0 1 0 1 6th .0% .0% 100.0% .0% 100.0% 1 15 17 4 37 Total 2.7% 40.5% 45.9% 10.8% 100.0% Chi-Square = 12.313 p = 0.831


251 Appendix D: (Continued) Table D48: Age and frequency of usi ng peer conferences cross tabulation Age Rarely Once in a while Frequently Almost always Total 0 0 1 2 3 21 26 .0% .0% 33.3% 66.7% 100.0% 0 2 3 0 5 27 32 .0% 40.0% 60.0% .0% 100.0% 1 4 6 0 11 33 -38 9.1% 36.4% 54.5% .0% 100.0% 0 4 2 1 7 39 -44 .0% 57.1% 28.6% 14.3% 100.0% 0 1 0 0 1 45 50 .0% 100.0% .0% .0% 100.0% 0 5 7 1 13 Over 50 .0% 38.5% 53.8% 7.7% 100.0% 1 16 19 4 40 Total 2.5% 40.0% 47.5% 10.0% 100.0% Chi-Square = 118.345 p = 0.245 Table D49: Frequency of holistic scoring Frequency Percent Valid Percent Rarely 7 16.3 17.5 Once in a while 10 23.3 25.0 Frequently 18 41.9 45.0 Almost always 5 11.6 12.5 Total 40 93.0 100.0 Missing 3 7.0 Total 43 100.0


252 Appendix D: (Continued) Table D50: Frequency of holistic scoring by county 1.00 2.00 3.00 4.00 Total 2 1 4 1 8 Public County A 25.0% 12.5% 50.0% 12.5% 100.0% 1 3 10 3 17 Public County B 5.9% 17.6% 58.8% 17.6% 100.0% 3 4 14 4 25 12.0% 16.0% 56.0% 16.0% 100.0% Table D51: Experience and fre quency of using holistic scoring Exp Rarely Once in a while Frequently Almost always Total 1 1 5 1 8 1 -5 years 12.5% 12.5% 62.5% 12.5% 100.0% 2 3 4 0 9 6 10 years 22.2% 33.3% 44.4% .0% 100.0% 2 3 4 2 11 11 -15 years 18.2% 27.3% 36.4% 18.2% 100.0% 0 1 3 0 4 16 20 years .0% 25.0% 75.0% .0% 100.0% 2 2 2 1 7 More than 20 years 28.6% 28.6% 28.6% 14.3% 100.0% 7 10 18 4 39 Total 17.9% 25.6% 46.2% 10.3% 100.0% Chi-Square = 6.209 p = 0.905


253 Appendix D: (Continued) Table D52: Grade teaching and frequency of using holistic scoring cross tabulation Grade Teaching Rarely Once in a while Frequently Almost always Total 0 0 0 1 1 Kindergarten .0% .0% .0% 100.0% 100.0% 2 0 0 0 2 1st 100.0% .0% .0% .0% 100.0% 0 1 2 0 3 2nd .0% 33.3% 66.7% .0% 100.0% 1 2 2 0 5 3rd 20.0% 40.0% 40.0% .0% 100.0% 2 3 5 0 10 4th 20.0% 30.0% 50.0% .0% 100.0% 1 2 7 4 14 5th 7.1% 14.3% 50.0% 28.6% 100.0% 0 1 0 0 1 6th .0% 100.0% .0% .0% 100.0% 6 9 16 5 36 Total 16.7% 25.0% 44.4% 13.9% 100.0% Chi-Square = 27.033 p = 0.078


254 Appendix D: (Continued) Table D53: Age and frequency of usi ng holistic scoring cross tabulation Age Rarely Once in a while Frequently Almost always Total 1 0 2 0 3 21 26 33.3% .0% 66.7% .0% 100.0% 1 1 2 1 5 27 32 20.0% 20.0% 40.0% 20.0% 100.0% 2 2 5 1 10 33 -38 20.0% 20.0% 50.0% 10.0% 100.0% 1 2 4 0 7 39 -44 14.3% 28.6% 57.1% .0% 100.0% 0 0 0 1 1 45 50 .0% .0% .0% 100.0% 100.0% 2 5 5 1 13 Over 50 15.4% 38.5% 38.5% 7.7% 100.0% 7 10 18 4 39 Total 17.9% 25.6% 46.2% 10.3% 100.0% Chi-Square = 13.362 p = 0.574 Table D54: Frequency of portfolio Frequency Percent Valid Percent Rarely 6 14.0 14.6 Once in a while 15 34.9 36.6 Frequently 15 34.9 36.6 Almost always 5 11.6 12.2 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0


255 Appendix D: (Continued) D55: Frequency of por tfolio use by county 1.00 2.00 3.00 4.00 Total 1 2 2 3 8 Public County A 12.5% 25.0% 25.0% 37.5% 100.0% 2 8 7 1 18 Public County B 11.1% 44.4% 38.9% 5.6% 100.0% 3 10 9 4 26 11.5% 38.5% 34.6% 15.4% 100.0% Table D56: Experience and fr equency of using portfolio Exp Rarely Once in a while Frequently Almost always Total 1 5 2 0 8 1 -5 years 12.5% 62.5% 25.0% .0% 100.0% 1 4 4 1 10 6 10 years 10.0% 40.0% 40.0% 10.0% 100.0% 3 1 6 1 11 11 -15 years 27.3% 9.1% 54.5% 9.1% 100.0% 0 2 1 1 4 16 20 years .0% 50.0% 25.0% 25.0% 100.0% 1 3 2 1 7 More than 20 years 14.3% 42.9% 28.6% 14.3% 100.0% 6 15 15 4 40 Total 15.0% 37.5% 37.5% 10.0% 100.0% Chi-Square = 9.200 p = 0.686


256 Appendix D: (Continued) Table D57: Grade teaching and frequenc y of using portfolio cross tabulation Grade Teaching Rarely Once in a while Frequently Almost always Total 0 0 0 1 1 Kindergarten .0% .0% .0% 100.0% 100.0% 0 1 1 0 2 1st .0% 50.0% 50.0% .0% 100.0% 0 1 2 0 3 2nd .0% 33.3% 66.7% .0% 100.0% 0 5 0 0 5 3rd .0% 100.0% .0% .0% 100.0% 2 2 5 1 10 4th 20.0% 20.0% 50.0% 10.0% 100.0% 3 4 6 2 15 5th 20.0% 26.7% 40.0% 13.3% 100.0% 1 0 0 0 1 6th 100.0% .0% .0% .0% 100.0% 6 13 14 4 37 Total 16.2% 35.1% 37.8% 10.8% 100.0% Chi-Square = 26.547 p = 0.088


257 Appendix D: (Continued) Table D58: Age and frequency of using portfolio cross tabulation Age Rarely Once in a while Frequently Almost always Total 0 1 2 0 3 21 26 .0% 33.3% 66.7% .0% 100.0% 0 4 0 1 5 27 32 .0% 80.0% .0% 20.0% 100.0% 2 3 4 2 11 33 -38 18.2% 27.3% 36.4% 18.2% 100.0% 1 3 3 0 7 39 -44 14.3% 42.9% 42.9% .0% 100.0% 1 0 0 0 1 45 50 100.0% .0% .0% .0% 100.0% 2 4 5 2 13 Over 50 15.4% 30.8% 38.5% 15.4% 100.0% 6 15 14 5 40 Total 15.0% 37.5% 35.0% 12.5% 100.0% Chi-Square = 14.513 p = 0.487 Table D59: Frequency of observations Frequency Percent Valid Percent Rarely 3 7.0 7.3 Once in a while 11 25.6 26.8 Frequently 18 41.9 43.9 Almost always 9 20.9 22.0 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0


258 Appendix D: (Continued) Table D60: Frequency of observations by county 2.00 3.00 4.00 Total 2 2 4 8 Public County A 25.0% 25.0% 50.0% 100.0% 8 8 2 18 Public County B 44.4% 44.4% 11.1% 100.0% 10 10 6 26 38.5% 38.5% 23.1% 100.0% Table D61: Experience and fr equency of using observations Exp Rarely Once in a while Frequently Almost always Total 0 2 4 2 8 1 -5 years .0% 25.0% 50.0% 25.0% 100.0% 1 3 4 2 10 6 10 years 10.0% 30.0% 40.0% 20.0% 100.0% 2 2 5 2 11 11 -15 years 18.2% 18.2% 45.5% 18.2% 100.0% 0 3 1 0 4 16 20 years .0% 75.0% 25.0% .0% 100.0% 0 1 4 2 7 More than 20 years .0% 14.3% 57.1% 28.6% 100.0% 3 11 18 8 40 Total 7.5% 27.5% 45.0% 20.0% 100.0% Chi-Square = 9.157 p = 0.689


259 Appendix D: (Continued) Table D62 : Grade teaching and frequency of using observations cross tabulation Grade Teaching Rarely Once in a while Frequently Almost always Total 0 0 0 1 1 Kindergarten .0% .0% .0% 100.0% 100.0% 0 0 1 1 2 1st .0% .0% 50.0% 50.0% 100.0% 0 1 1 1 3 2nd .0% 33.3% 33.3% 33.3% 100.0% 0 1 3 1 5 3rd .0% 20.0% 60.0% 20.0% 100.0% 2 4 3 1 10 4th 20.0% 40.0% 30.0% 10.0% 100.0% 1 4 7 3 15 5th 6.7% 26.7% 46.7% 20.0% 100.0% 0 0 1 0 1 6th .0% .0% 100.0% .0% 100.0% 3 10 16 8 37 Total 8.1% 27.0% 43.2% 21.6% 100.0% Chi-Square = 11.275 p = 0.882


260 Appendix D: (Continued) Table D63: Age and frequency of using observations cross tabulation Age Rarely Once in a while Frequently Almost always Total 0 0 1 2 3 21 26 .0% .0% 33.3% 66.7% 100.0% 0 2 2 1 5 27 32 .0% 40.0% 40.0% 20.0% 100.0% 1 3 5 2 11 33 -38 9.1% 27.3% 45.5% 18.2% 100.0% 1 1 3 2 7 39 -44 14.3% 14.3% 42.9% 28.6% 100.0% 0 0 1 0 1 45 50 .0% .0% 100.0% .0% 100.0% 1 4 6 2 13 Over 50 7.7% 30.8% 46.2% 15.4% 100.0% 3 10 18 9 40 Total 7.5% 25.0% 45.0% 22.5% 100.0% Chi-Square = 7.250 p = 0.950 Table D64: Frequency of rubrics Frequency Percent Valid Percent Rarely 2 4.7 4.9 Once in a while 3 7.0 7.3 Frequently 19 44.2 46.3 Almost always 17 39.5 41.5 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0


261 Appendix D: (Continued) Table D65: Frequency of using rubrics by county 1.00 2.00 3.00 4.00 Total 0 3 3 2 8 Public County A .0% 37.5% 37.5% 25.0% 100.0% 1 0 7 10 18 Public County B 5.6% .0% 38.9% 55.6% 100.0% 1 3 10 12 26 3.8% 11.5% 38.5% 46.2% 100.0% Table D66: Experience and frequency of using rubrics Exp Rarely Once in a while Frequentl y Almost always Total 0 0 4 4 8 1 -5 years .0% .0% 50.0% 50.0% 100.0% 0 1 4 5 10 6 10 years .0% 10.0% 40.0% 50.0% 100.0% 1 0 7 3 11 11 -15 years 9.1% .0% 63.6% 27.3% 100.0% 0 0 1 3 4 16 20 years .0% .0% 25.0% 75.0% 100.0% 0 2 3 2 7 More than 20 years .0% 28.6% 42.9% 28.6% 100.0% 1 3 19 17 40 Total 2.5% 7.5% 47.5% 42.5% 100.0% Chi-Square = 11.931 p = 0.451


262 Appendix D: (Continued) Table D67 : Grade teaching and frequenc y of using rubrics cross tabulation Grade Teaching Rarely Once in a while Frequently Almost always Total 1 0 0 0 1 Kindergarten 100.0% .0% .0% .0% 100.0% 0 1 1 0 2 1st .0% 50.0% 50.0% .0% 100.0% 0 0 3 0 3 2nd .0% .0% 100.0% .0% 100.0% 0 0 4 1 5 3rd .0% .0% 80.0% 20.0% 100.0% 0 1 4 5 10 4th .0% 10.0% 40.0% 50.0% 100.0% 1 1 6 7 15 5th 6.7% 6.7% 40.0% 46.7% 100.0% 0 0 1 0 1 6th .0% .0% 100.0% .0% 100.0% 2 3 19 13 37 Total 5.4% 8.1% 51.4% 35.1% 100.0% Chi-Square = 30.722 p = 0.031


263 Appendix D: (Continued) Table D68: Age and frequency of using rubrics cross tabulation Age Rarely Once in a while Frequently Almost always Total 0 0 0 3 3 21 26 .0% .0% .0% 100.0% 100.0% 0 0 3 2 5 27 32 .0% .0% 60.0% 40.0% 100.0% 1 1 5 4 11 33 -38 9.1% 9.1% 45.5% 36.4% 100.0% 0 0 5 2 7 39 -44 .0% .0% 71.4% 28.6% 100.0% 1 0 0 0 1 45 50 100.0% .0% .0% .0% 100.0% 0 2 5 6 13 Over 50 .0% 15.4% 38.5% 46.2% 100.0% 2 3 18 17 40 Total 5.0% 7.5% 45.0% 42.5% 100.0% Chi-Square = 28.617 p = 0.018 Table D69: Frequency of FCAT scoring rubric Frequency Percent Valid Percent Rarely 14 32.6 35.9 Once in a while 6 14.0 15.4 Frequently 13 30.2 33.3 Almost always 6 14.0 15.4 Total 39 90.7 100.0 Missing 4 9.3 Total 43 100.0


264 Appendix D: (Continued) Table D70: Frequency of FCAT rubric use by county 1.00 2.00 3.00 4.00 Total 2 0 1 5 8 Public County A 25.0% .0% 12.5% 62.5% 100.0% 5 4 9 0 18 Public County B 27.8% 22.2% 50.0% .0% 100.0% 7 4 10 5 26 26.9% 15.4% 38.5% 19.2% 100.0% Table D71 : Experience and frequency of using FCAT scoring rubric Exp Rarely Once in a while Frequently Almost always Total 2 1 4 0 7 1 -5 years 28.6% 14.3% 57.1% .0% 100.0% 2 3 2 2 9 6 10 years 22.2% 33.3% 22.2% 22.2% 100.0% 5 2 2 2 11 11 -15 years 45.5% 18.2% 18.2% 18.2% 100.0% 1 0 2 1 4 16 20 years 25.0% .0% 50.0% 25.0% 100.0% 3 0 3 1 7 More than 20 years 42.9% .0% 42.9% 14.3% 100.0% 13 6 13 6 38 Total 34.2% 15.8% 34.2% 15.8% 100.0% Chi-Square = 8.974 p = 0.705


265 Appendix D: (Continued) Table D72: Grade teaching and frequency of using FCAT scoring rubric cross tabulation Grade Teaching Rarely Once in a while Frequently Almost always Total 1 0 0 0 1 Kindergarten 100.0% .0% .0% .0% 100.0% 2 0 0 0 2 1st 100.0% .0% .0% .0% 100.0% 2 0 1 0 3 2nd 66.7% .0% 33.3% .0% 100.0% 3 1 0 0 4 3rd 75.0% 25.0% .0% .0% 100.0% 1 2 4 3 10 4th 10.0% 20.0% 40.0% 30.0% 100.0% 4 2 6 2 14 5th 28.6% 14.3% 42.9% 14.3% 100.0% 0 1 0 0 1 6th .0% 100.0% .0% .0% 100.0% 13 6 11 5 35 Total 37.1% 17.1% 31.4% 14.3% 100.0% Chi-Square = 19.996 p = 0.333


266 Appendix D: (Continued) Table D73 : Age and frequency of us ing FCAT scoring rubric cross tabulation Age Rarely Once in a while Frequently Almost always Total 1 1 1 0 3 21 26 33.3% 33.3% 33.3% .0% 100.0% 1 0 2 1 4 27 32 25.0% .0% 50.0% 25.0% 100.0% 3 4 1 3 11 33 -38 27.3% 36.4% 9.1% 27.3% 100.0% 3 1 2 0 6 39 -44 50.0% 16.7% 33.3% .0% 100.0% 1 0 0 0 1 45 50 100.0% .0% .0% .0% 100.0% 5 0 6 2 13 Over 50 38.5% .0% 46.2% 15.4% 100.0% 14 6 12 6 38 Total 36.8% 15.8% 31.6% 15.8% 100.0% Chi-Square = 14.293 p = 0.503 Table D74: Primary traits scoring Frequency Percent Valid Percent Rarely 6 14.0 15.4 Once in a while 8 18.6 20.5 Frequently 15 34.9 38.5 Almost always 10 23.3 25.6 Total 39 90.7 100.0 Missing 4 9.3 Total 43 100.0


267 Appendix D: (Continued) Table D75: Frequency of pr imary traits scoring by county 1.00 2.00 3.00 4.00 Total 2 4 2 0 8 Public County A 25.0% 50.0% 25.0% .0% 100.0% 1 1 9 5 16 Public County B 6.3% 6.3% 56.3% 31.3% 100.0% 3 5 11 5 24 12.5% 20.8% 45.8% 20.8% 100.0% Table D76: Experience and fre quency of using traits scoring Exp Rarely Once in a while Frequently Almost always Total 2 0 4 2 8 1 -5 years 25.0% .0% 50.0% 25.0% 100.0% 1 2 2 4 9 6 10 years 11.1% 22.2% 22.2% 44.4% 100.0% 1 3 3 3 10 11 -15 years 10.0% 30.0% 30.0% 30.0% 100.0% 0 1 3 0 4 16 20 years .0% 25.0% 75.0% .0% 100.0% 1 2 3 1 7 More than 20 years 14.3% 28.6% 42.9% 14.3% 100.0% 5 8 15 10 38 Total 13.2% 21.1% 39.5% 26.3% 100.0% Chi-Square = 8.826 p = 0.718


268 Appendix D: (Continued) Table D77: Grade teaching and frequenc y of using primary traits scoring cross tabulation Rarely Once in a while Frequently Almost always Total 1 0 0 0 1 Kindergarten 100.0% .0% .0% .0% 100.0% 1 1 0 0 2 1st 50.0% 50.0% .0% .0% 100.0% 0 1 2 0 3 2nd .0% 33.3% 66.7% .0% 100.0% 1 0 1 3 5 3rd 20.0% .0% 20.0% 60.0% 100.0% 2 2 4 1 9 4th 22.2% 22.2% 44.4% 11.1% 100.0% 0 4 5 5 14 5th .0% 28.6% 35.7% 35.7% 100.0% 0 0 0 1 1 6th .0% .0% .0% 100.0% 100.0% 5 8 12 10 35 Total 14.3% 22.9% 34.3% 28.6% 100.0%


269 Appendix D: (Continued) Table D78 : Age and frequency of using primary traits scoring cross tabulation Age Rarely Once in a while Frequently Almost always Total 0 0 2 1 3 21 26 .0% .0% 66.7% 33.3% 100.0% 1 0 3 1 5 27 32 20.0% .0% 60.0% 20.0% 100.0% 2 3 1 4 10 33 -38 20.0% 30.0% 10.0% 40.0% 100.0% 1 3 1 2 7 39 -44 14.3% 42.9% 14.3% 28.6% 100.0% 0 0 0 1 1 45 50 .0% .0% .0% 100.0% 100.0% 2 2 8 1 13 Over 50 15.4% 15.4% 61.5% 7.7% 100.0% 6 8 15 10 39 Total 15.4% 20.5% 38.5% 25.6% 100.0% Chi-Square = 16.311 p = 0.362 Table D79: Frequency of self assessment Frequency Percent Valid Percent Rarely 10 23.3 24.4 Once in a while 17 39.5 41.5 Frequently 12 27.9 29.3 Almost always 2 4.7 4.9 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0


270 Appendix D: (Continued) Table D80: Frequency of self assessment by county 1.00 2.00 3.00 4.00 Total 0 6 0 2 8 Public County A .0% 75.0% .0% 25.0% 100.0% 6 7 5 0 18 Public County B 33.3% 38.9% 27.8% .0% 100.0% 6 13 5 2 26 23.1% 50.0% 19.2% 7.7% 100.0% Table D81 : Experience and fre quency of using self assessment Exp Rarely Once in a while Frequently Almost always Total 3 3 2 0 8 1 -5 years 37.5% 37.5% 25.0% .0% 100.0% 3 4 2 1 10 6 10 years 30.0% 40.0% 20.0% 10.0% 100.0% 3 5 3 0 11 11 -15 years 27.3% 45.5% 27.3% .0% 100.0% 1 1 1 1 4 16 20 years 25.0% 25.0% 25.0% 25.0% 100.0% 0 4 3 0 7 More than 20 years .0% 57.1% 42.9% .0% 100.0% 10 17 11 2 40 Total 25.0% 42.5% 27.5% 5.0% 100.0% Chi-Square = 8.931 p = 0.709


271 Appendix D: (Continued) Table D82: Grade teaching and frequency of using self assessment cross tabulation Grade Teaching Rarely Once in a while Frequently Almost always Total 0 0 1 0 1 Kindergarten .0% .0% 100.0% .0% 100.0% 0 1 1 0 2 1st .0% 50.0% 50.0% .0% 100.0% 0 2 1 0 3 2nd .0% 66.7% 33.3% .0% 100.0% 3 2 0 0 5 3rd 60.0% 40.0% .0% .0% 100.0% 4 4 2 0 10 4th 40.0% 40.0% 20.0% .0% 100.0% 3 7 3 2 15 5th 20.0% 46.7% 20.0% 13.3% 100.0% 0 0 1 0 1 6th .0% .0% 100.0% .0% 100.0% 10 16 9 2 37 Total 27.0% 43.2% 24.3% 5.4% 100.0% Chi-Square = 15.836 p = 0.604


272 Appendix D: (Continued) Table D83: Age and frequency of using self assessment scoring cross tabulation Age Rarely Once in a while Frequently Almost always Total 1 1 1 0 3 21 26 33.3% 33.3% 33.3% .0% 100.0% 1 3 1 0 5 27 32 20.0% 60.0% 20.0% .0% 100.0% 4 4 3 0 11 33 -38 36.4% 36.4% 27.3% .0% 100.0% 2 3 1 1 7 39 -44 28.6% 42.9% 14.3% 14.3% 100.0% 1 0 0 0 1 45 50 100.0% .0% .0% .0% 100.0% 1 5 6 1 13 Over 50 7.7% 38.5% 46.2% 7.7% 100.0% 10 16 12 2 40 Total 25.0% 40.0% 30.0% 5.0% 100.0% Chi-Square = 10.144 p = 0.811 Question 14: Select the me thod of assessing student writing that you use most frequently. Table D84: Most frequent used method of assessment Frequency Percent Valid Percent Others 1 2.3 2.4 Checklists 4 9.3 9.8 Teacher conferences 8 18.6 19.5 Peer Conferences 3 7.0 7.3 Holistic scoring 4 9.3 9.8 Observations 12 27.9 29.3 Portfolios 3 7.0 7.3 Rubrics 6 14.0 14.6 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0


273 Appendix D: (Continued) Table D85: Experience and Most frequently us ed method of assessment Exp Other s Check list Teach er confer ence Peer conference Holistic scoring Observa tions Portfo lios Rubric s Total 0 1 2 0 1 1 1 2 8 1 -5 years .0% 12.5% 25.0% .0% 12.5% 12.5% 12.5% 25.0% 100.0% 0 1 2 0 1 3 1 2 10 6 10 years .0% 10.0% 20.0% .0% 10.0% 30.0% 10.0% 20.0% 100.0% 0 0 1 2 1 5 1 1 11 11 15 years .0% .0% 9.1% 18.2% 9.1% 45.5% 9.1% 9.1% 100.0% 0 1 1 0 0 1 0 1 4 16 20 years .0% 25.0% 25.0% .0% .0% 25.0% .0% 25.0% 100.0% 1 1 1 1 1 2 0 0 7 Mor e than 20 years 14.3% 14.3% 14.3% 14.3% 14.3% 28.6% .0% .0% 100.0% 1 4 7 3 4 12 3 6 40 Total 2.5% 10.0% 17.5% 7.5% 10.0% 30.0% 7.5% 15.0% 100.0% Chi-Square = 17.356 p = 0.941


274 Appendix D: (Continued) Table D86: Grade Teaching and Most fr equently used method of assessment Grade Others Check list Teach er confer ence Peer confere nce Holistic scoring Obser vation s Portfoli os Rubric s Total 0 0 1 0 0 0 0 0 1 Kinder garten .0% .0% 100.0 % .0% .0% .0% .0% .0% 100.0 % 0 0 1 0 1 0 0 0 2 1st .0% .0% 50.0% .0% 50.0% .0% .0% .0% 100.0 % 0 0 1 0 1 1 0 0 3 2nd .0% .0% 33.3% .0% 33.3% 33.3% .0% .0% 100.0 % 0 0 2 0 0 0 0 3 5 3rd .0% .0% 40.0% .0% .0% .0% .0% 60.0% 100.0 % 0 1 1 0 0 6 2 0 10 4th .0% 10.0% 10.0% .0% .0% 60.0% 20.0% .0% 100.0 % 1 3 1 3 2 3 1 1 15 5th 6.7% 20.0% 6.7% 20.0% 13.3% 20.0% 6.7% 6.7% 100.0 % 0 0 0 0 0 0 0 1 1 6th .0% .0% .0% .0% .0% .0% .0% 100.0 % 100.0 % 1 4 7 3 4 10 3 5 37 Total 2.7% 10.8% 18.9% 8.1% 10.8% 27.0% 8.1% 13.5% 100.0 % Chi-Square = 48.059 p = 0.241


275 Appendix D: (Continued) Table D87: Age and most freque ntly used method of assessment Age Others Checkl ist Teach er confer ence Peer conferen ce Holistic scoring Observa tions Portfolios Rubrics Total 0 1 0 0 1 0 0 1 3 21 26 .0% 33.3% .0% .0% 33.3% .0% .0% 33.3% 100.0% 0 0 1 0 0 1 1 2 5 27 32 .0% .0% 20.0% .0% .0% 20.0% 20.0% 40.0% 100.0% 0 1 2 0 0 4 2 2 11 33 38 .0% 9.1% 18.2% .0% .0% 36.4% 18.2% 18.2% 100.0% 0 1 2 0 2 2 0 0 7 39 44 .0% 14.3% 28.6% .0% 28.6% 28.6% .0% .0% 100.0% 0 0 0 1 0 0 0 0 1 45 50 .0% .0% .0% 100.0% .0% .0% .0% .0% 100.0% 1 1 3 1 1 5 0 1 13 Over 50 7.7% 7.7% 23.1% 7.7% 7.7% 38.5% .0% 7.7% 100.0% 1 4 8 2 4 12 3 6 40 Tota l 2.5% 10.0% 20.0% 5.0% 10.0% 30.0% 7.5% 15.0% 100.0% Chi-Square = 41.847 p = 0.198


276 Appendix D: (Continued) Question 15: Please rate th e method of assessing student writing that you use most frequently on a scale of 1-4 with 1 bein g minimally effective and 4 being extremely effective. Table D88: Rating of the assessment method used most frequently Rating Frequency Percent Valid Percent Somewhat effective 3 7.0 7.3 Effective 21 48.8 51.2 Extremely effective 17 39.5 41.5 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0 Table D89: Effectiveness of frequent method by county 2.00 3.00 4.00 Total 1 3 4 8 Public County A 12.5% 37.5% 50.0% 100.0% 0 8 10 18 Public County B .0% 44.4% 55.6% 100.0% 1 11 14 26 3.8% 42.3% 53.8% 100.0%


277 Appendix D: (Continued) Table D90: Experience and perception on effectiveness of assessment method Exp Somewhat effective Effective Extremely effective Total 2 5 1 8 1 -5 years 25.0% 62.5% 12.5% 100.0% 0 6 4 10 6 10 years .0% 60.0% 40.0% 100.0% 1 9 1 11 11 -15 years 9.1% 81.8% 9.1% 100.0% 0 4 0 4 16 20 years .0% 100.0% .0% 100.0% 1 4 2 7 More than 20 years 14.3% 57.1% 28.6% 100.0% 4 28 8 40 Total 10.0% 70.0% 20.0% 100.0% Chi-Square = 5.093 p = 0.748


278 Appendix D: (Continued) Table D91: Grade teaching and percep tion on effectiveness of assessment method Grade teaching Somewhat effective Effective Extremely effective Total 0 0 1 1 Kindergarten .0% .0% 100.0% 100.0% 1 1 0 2 1st 50.0% 50.0% .0% 100.0% 0 2 1 3 2nd .0% 66.7% 33.3% 100.0% 1 1 3 5 3rd 20.0% 20.0% 60.0% 100.0% 1 9 0 10 4th 10.0% 90.0% .0% 100.0% 1 11 3 15 5th 6.7% 73.3% 20.0% 100.0% 0 1 0 1 6th .0% 100.0% .0% 100.0% 4 25 8 37 Total 10.8% 67.6% 21.6% 100.0% Chi-Square = 11.321 p = 0.502


279 Appendix D: (Continued) Table D92: Age and perception on effectiveness of assessment method Age Somewhat effective Effective Extremely effective Total 0 2 1 3 21 26 .0% 66.7% 33.3% 100.0% 1 2 2 5 27 32 20.0% 40.0% 40.0% 100.0% 1 9 1 11 33 -38 9.1% 81.8% 9.1% 100.0% 1 3 3 7 39 -44 14.3% 42.9% 42.9% 100.0% 0 1 0 1 45 50 .0% 100.0% .0% 100.0% 1 10 2 13 Over 50 7.7% 76.9% 15.4% 100.0% 4 27 9 40 Total 10.0% 67.5% 22.5% 100.0% Chi-Square = 10.113 p = 0.431


280 Appendix D: (Continued) Question 16: Please select the method of assessing student writing that you use second most frequently. Table D93: Second most frequent ly used method of assessment Frequency Percent Valid Percent Others 2 4.7 4.9 Checklists 3 7.0 7.3 Teacher conferences 8 18.6 19.5 Peer Conferences 2 4.7 4.9 Holistic scoring 6 14.0 14.6 Observations 3 7.0 7.3 Portfolios 2 4.7 4.9 Rubrics 6 14.0 14.6 FCAT scoring rubric 4 9.3 9.8 Primary traits scoring 4 9.3 9.8 Self assessment 1 2.3 2.4 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0 Question 17: Please rate th e method of assessing student writing that you use second most frequently on a scale of 1-4 with 1 being minimally e ffective and 4 being extremely effective. Table D94: Rating of the sec ond most frequent method of assessment Frequency Percent Valid Percent Somewhat effective 5 11.6 12.2 Effective 20 46.5 48.8 Extremely effective 16 37.2 39.0 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0


281 Appendix D: (Continued) Table D95: Rating of second most frequent method by county 2.00 3.00 4.00 Total 1 5 2 8 Public County A 12.5% 62.5% 25.0% 100.0% 0 8 10 18 Public County B .0% 44.4% 55.6% 100.0% 1 13 12 26 3.8% 50.0% 46.2% 100.0% Question 18: Please select the method of assessing student writing that you use third most frequently. Table D96: Third most frequent ly used method of assessment Method Frequency Percent Valid Percent Checklists 5 11.6 12.5 Teacher conferences 6 14.0 15.0 Peer Conferences 3 7.0 7.5 Holistic scoring 4 9.3 10.0 Observations 2 4.7 5.0 Portfolios 7 16.3 17.5 Rubrics 6 14.0 15.0 FCAT scoring rubric 2 4.7 5.0 Primary traits scoring 3 7.0 7.5 Self assessment 2 4.7 5.0 Total 40 93.0 100.0 Missing 3 7.0 Total 43 100.0


282 Appendix D: (Continued) Question 19: Please rate th e method of assessing student writing that you use third most frequently on a scale of 1-4 with 1 being minimally e ffective and 4 being extremely effective. Table D97: Rating of the third most frequent method of assessment Frequency Percent Valid Percent Minimally effective 1 2.3 2.5 Somewhat effective 5 11.6 12.5 Effective 20 46.5 50.0 Extremely effective 14 32.6 35.0 Total 40 93.0 100.0 Missing 3 7.0 Total 43 100.0 Table D98: Effectiveness of third most frequent method by county 2.00 3.00 4.00 Total 1 4 3 8 Public County A 12.5% 50.0% 37.5% 100.0% 2 8 7 17 Public County B 11.8% 47.1% 41.2% 100.0% 3 12 10 25 12.0% 48.0% 40.0% 100.0% Question 23: Please think about ALL the different methods of e valuation that you use when reviewing student writing. As a wh ole, how effective do you believe that the method(s) of evaluating writing that you utilize are? Table D99: Effectiveness of evaluation methods overall Frequency Percent Valid Percent Somewhat effective 4 9.3 9.8 Effective 28 65.1 68.3 Extremely effective 9 20.9 22.0 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0


283 Appendix D: (Continued) Question 25: Do you use both formal (written feedback, rubrics, grades, etc) and informal (observation, anecdotal notes, conferences, etc.) methods of assessing writing? Table D100: Use of both formal and in formal methods of assessing writing Use Frequency Percent Valid Percent Yes 39 90.7 95.1 No 2 4.7 4.9 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0 Table D101: Experience and use of bot h formal and informal methods of assessment Exp Yes No Total 8 0 8 1 -5 years 100.0% .0% 100.0% 10 0 10 6 10 years 100.0% .0% 100.0% 11 0 11 11 -15 years 100.0% .0% 100.0% 4 0 4 16 20 years 100.0% .0% 100.0% 6 1 7 More than 20 years 85.7% 14.3% 100.0% 39 1 40 Total 97.5% 2.5% 100.0% Chi-Square = 4.835 p = 0.305


284 Appendix D: (Continued) Table D102: Grade teaching and use of both formal and informal methods of assessment Grade Yes No Total 0 1 1 Kindergarten .0% 100.0% 100.0% 1 1 2 1st 50.0% 50.0% 100.0% 3 0 3 2nd 100.0% .0% 100.0% 5 0 5 3rd 100.0% .0% 100.0% 10 0 10 4th 100.0% .0% 100.0% 15 0 15 5th 100.0% .0% 100.0% 1 0 1 6th 100.0% .0% 100.0% 35 2 37 Total 94.6% 5.4% 100.0%


285 Appendix D: (Continued) Table D103: Age and use of both formal and informal methods of assessment Age Yes No Total 3 0 3 21 26 100.0% .0% 100.0% 5 0 5 27 32 100.0% .0% 100.0% 10 1 11 33 -38 90.9% 9.1% 100.0% 7 0 7 39 -44 100.0% .0% 100.0% 1 0 1 45 50 100.0% .0% 100.0% 12 1 13 Over 50 92.3% 7.7% 100.0% 38 2 40 Total 95.0% 5.0% 100.0% Chi-Square = 1.428 p = 0.921


286 Appendix D: (Continued) Question 26: What percentage (for a total of 100%) of your time spent evaluating writing is spent on informal assessments? Table D104: Percentage of time of total evaluation spent on informal assessments Percentage Frequency Percent Valid Percent 10.00 1 2.3 3.3 20.00 1 2.3 3.3 25.00 2 4.7 6.7 30.00 5 11.6 16.7 35.00 1 2.3 3.3 40.00 5 11.6 16.7 50.00 6 14.0 20.0 60.00 5 11.6 16.7 70.00 1 2.3 3.3 75.00 1 2.3 3.3 80.00 2 4.7 6.7 Total 30 69.8 100.0 Missing 13 30.2 Total 43 100.0


287 Appendix D: (Continued) Table D105: Experience wise summary sta tistics of percentage of time of total evaluation spent on informal assessments 95% Confidence Interval for Mean Exp Mean Std. Deviation Std. Error Lower Bound Upper Bound Minimum Maximum 1 5 years 51.6667 16.02082 6.54047 34.8538 68.4795 30.00 80.00 6 10 years 49.3750 18.98072 6.71070 33.5067 65.2433 30.00 80.00 11 15 years 40.0000 18.40894 5.82142 26.8310 53.1690 10.00 75.00 16 20 years 40.0000 20.00000 11.54701 -9.6828 89.6828 20.00 60.00 Over 20 years 48.3333 20.20726 11.66667 -1.8643 98.5309 25.00 60.00 Total 45.6667 17.84673 3.25835 39.0026 52.3307 10.00 80.00 Table D106: ANOVA for percen tage of time of total ev aluation spent on informal assessments for experience Sum of Squares df Mean Square F Sig. Between Groups 764.792 4 191.198 .564 .691 Within Groups 8471.875 25 338.875 Total 9236.667 29
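Note on Tables D105-D118: each summary table reports group means with 95% confidence intervals of the usual form mean ± t(0.975, n-1) × standard error, and each accompanying ANOVA tests whether those group means differ. As an illustrative check only (assuming scipy, which was not necessarily the package used for the original analysis), the F ratio and significance level in Table D106 follow directly from the published sums of squares:

# Illustrative sketch only: recomputing the one-way ANOVA F and p of
# Table D106 from the published sums of squares and degrees of freedom.
from scipy.stats import f

ss_between, df_between = 764.792, 4
ss_within, df_within = 8471.875, 25

ms_between = ss_between / df_between    # mean square between groups (191.198)
ms_within = ss_within / df_within       # mean square within groups (338.875)
F_ratio = ms_between / ms_within        # F statistic, approximately 0.564
p_value = f.sf(F_ratio, df_between, df_within)   # upper-tail p, approximately 0.69
print(f"F({df_between}, {df_within}) = {F_ratio:.3f}, p = {p_value:.3f}")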


288 Appendix D: (Continued) Table D107: Teaching grade wise summary statistics of percentage of time of total evaluation spent on informal assignments 95% Confidence Interval for Mean Grade Mean Std. Deviation Std. Error Lower Bound Upper Bound Minimum Maximum 1st 60.0000 . . 60.00 60.00 2nd 45.0000 7.07107 5.00000 -18.5310 108.5310 40.00 50.00 3rd 50.0000 20.00000 11.54701 .3172 99.6828 30.00 70.00 4th 38.8889 17.63834 5.87945 25.3309 52.4469 10.00 60.00 5th 49.5455 17.81215 5.37057 37.5791 61.5118 25.00 80.00 6th 30.0000 . . 30.00 30.00 Total 45.3704 17.09284 3.28952 38.6087 52.1321 10.00 80.00 Table D108: ANOVA for percen tage of time of total ev aluation spent on informal assessments for different grades taught Sum of Squares df Mean Square F Sig. Between Groups 1084.680 5 216.936 .700 .630 Within Groups 6511.616 21 310.077 Total 7596.296 26


289 Appendix D: (Continued) Table D109: Age wise summary statistics of percentage of time of total evaluation spent on informal assignments 95% Confidence Interval for Mean Age Mean Std. Deviation Std. Error Lower Bound Upper Bound Minimum Maximum 21 -26 60.0000 17.32051 10.00000 16.9735 103.0265 50.00 80.00 27 32 47.5000 23.62908 11.81454 9.9009 85.0991 30.00 80.00 33 38 36.5000 13.34375 4.21966 26.9545 46.0455 10.00 60.00 39 -44 65.0000 13.22876 7.63763 32.1379 97.8621 50.00 75.00 45 50 25.0000 . . 25.00 25.00 Over 50 46.8750 16.67708 5.89624 32.9326 60.8174 20.00 60.00 Total 45.8621 18.12994 3.36664 38.9658 52.7583 10.00 80.00 Table D110: ANOVA for percen tage of time of total ev aluation spent on informal assessments for different age groups Sum of Squares df Mean Square F Sig. Between Groups 3029.073 5 605.815 2.257 .083 Within Groups 6174.375 23 268.451 Total 9203.448 28


290 Appendix D: (Continued) Table D111: Informal assessment frequency by county 20 25 30 35 40 50 60 70 75 80 Total 0 0 1 0 2 1 3 0 0 1 8 Public County A .0% .0% 12.5% .0% 25.0% 12.5% 37.5% .0% .0% 12.5% 100.0% 1 2 2 1 3 4 1 1 1 1 17 Public County B 5.9 % 11.8 % 11.8% 5.9 % 17.6% 23.5% 5.9% 5.9 % 5.9% 5.9% 100.0% 1 2 3 1 5 5 4 1 1 2 25 4.0 % 8.0 % 12.0% 4.0 % 20.0% 20.0% 16.0% 4.0 % 4.0% 8.0% 100.0% Question 26: What percentage of your time spent evaluating writing is spent on formal assessments? Table D112: Percentage of time of total evaluation spent on formal assignments Frequency Percent Valid Percent 20.00 2 4.7 6.7 25.00 1 2.3 3.3 30.00 1 2.3 3.3 40.00 5 11.6 16.7 50.00 6 14.0 20.0 60.00 5 11.6 16.7 65.00 1 2.3 3.3 70.00 5 11.6 16.7 75.00 2 4.7 6.7 80.00 1 2.3 3.3 90.00 1 2.3 3.3 Total 30 69.8 100.0 Missing 13 30.2 Total 43 100.0 Appendix D: (Continued)


291 Table D113: Experience wise summary sta tistics of percentage of time of total evaluation spent on formal assessments 95% Confidence Interval for Mean Exp Mean Std. Deviation Std. Error Lower Bound Upper Bound Minimum Maximum 1 5 years 48.3333 16.02082 6.54047 31.5205 65.1462 20.00 70.00 6 10 years 50.6250 18.98072 6.71070 34.7567 66.4933 20.00 70.00 11 15 years 60.0000 18.40894 5.82142 46.8310 73.1690 25.00 90.00 16 20 years 60.0000 20.00000 11.54701 10.3172 109.682 8 40.00 80.00 Over 20 years 51.6667 20.20726 11.66667 1.4691 101.864 3 40.00 75.00 Total 54.3333 17.84673 3.25835 47.6693 60.9974 20.00 90.00 Table D114: ANOVA for percentage of tim e of total evaluati on spent on formal assessments for experience Sum of Squares df Mean Square F Sig. Between Groups 764.792 4 191.198 .564 .691 Within Groups 8471.875 25 338.875 Total 9236.667 29


292 Appendix D: (Continued) Table D115: Teaching grade wise summary statistics of percentage of time of total evaluation spent on formal assignments 95% Confidence Interval for Mean Grade Mean Std. Deviation Std. Error Lower Bound Upper Bound Minimum Maximum 1st 40.0000 . . 40.00 40.00 2nd 55.0000 7.07107 5.00000 -8.5310 118.531 50.00 60.00 3rd 50.0000 20.00000 11.54701 .3172 99.6828 30.00 70.00 4th 61.1111 17.63834 5.87945 47.5531 74.6691 40.00 90.00 5th 50.4545 17.81215 5.37057 38.4882 62.4209 20.00 75.00 6th 70.0000 . . 70.00 70.00 Total 54.6296 17.09284 3.28952 47.8679 61.3913 20.00 90.00 Table D116: ANOVA for percentage of time of total evaluation spent on formal assessments for different grades taught Sum of Squares df Mean Square F Sig. Between Groups 1084.680 5 216.936 .700 .630 Within Groups 6511.616 21 310.077 Total 7596.296 26


293 Appendix D: (Continued) Table D117: Age wise summary statistics of percentage of time of total evaluation spent on formal assignments 95% Confidence Interval for Mean Age Mean Std. Deviation Std. Error Lower Bound Upper Bound Minimum Maximum 21 26 40.0000 17.32051 10.00000 -3.0265 83.0265 20.00 50.00 27 32 52.5000 23.62908 11.81454 14.9009 90.0991 20.00 70.00 33 38 63.5000 13.34375 4.21966 53.9545 73.0455 40.00 90.00 39 44 35.0000 13.22876 7.63763 2.1379 67.8621 25.00 50.00 45 50 75.0000 . . 75.00 75.00 Over 50 53.1250 16.67708 5.89624 39.1826 67.0674 40.00 80.00 Tota l 54.1379 18.12994 3.36664 47.2417 61.0342 20.00 90.00 Table D118: ANOVA for percentage of tim e of total evaluati on spent on formal assessments for different age groups Sum of Squares df Mean Square F Sig. Between Groups 3029.073 5 605.815 2.257 .083 Within Groups 6174.375 23 268.451 Total 9203.448 28


294 Appendix D: (Continued) Table D119: Frequency of formal assessments by county 20 25 30 40 50 60 65 70 75 80 Total 1 0 0 3 1 2 0 1 0 0 8 Public County A 12.5 % .0% .0% 37.5% 12.5% 25.0% .0% 12.5% .0% .0% 100.0 % 1 1 1 1 4 3 1 2 2 1 17 Public County B 5.9 % 5.9 % 5.9 % 5.9% 23.5% 17.6% 5.9% 11.8% 11.8% 5.9% 100.0 % 2 1 1 4 5 5 1 3 2 1 25 8.0 % 4.0 % 4.0 % 16.0% 20.0% 20.0% 4.0% 12.0% 8.0% 4.0% 100.0 % Question 27: Do you ever utilize the F CAT Writing Assessment rubric to score papers? Table D120: Use of FCAT writing rubric Frequency Percent Valid Percent Yes 22 51.2 55.0 No 18 41.9 45.0 Total 40 93.0 100.0 Missing 3 7.0 Total 43 100.0 Table D121 : Frequency of using FCAT writing rubric Frequency Percent Valid Percent Others 6 14.0 26.1 Daily 1 2.3 4.3 Weekly 5 11.6 21.7 Monthly 11 25.6 47.8 Total 23 53.5 100.0 Missing 20 46.5 Total 43 100.0


295 Appendix D: (Continued) Table D122:Use of FCAT rubric by county 1.00 2.00 Total 6 2 8 Public County A 75.0% 25.0% 100.0% 12 6 18 Public County B 66.7% 33.3% 100.0% 18 8 26 69.2% 30.8% 100.0% Table D123: Experience and use of FCAT writing rubric Exp Yes No Total 5 3 8 1 -5 years 62.5% 37.5% 100.0% 5 5 10 6 10 years 50.0% 50.0% 100.0% 6 5 11 11 -15 years 54.5% 45.5% 100.0% 3 1 4 16 20 years 75.0% 25.0% 100.0% 3 3 6 More than 20 years 50.0% 50.0% 100.0% 22 17 39 Total 56.4% 43.6% 100.0% Chi-Square = 0.966 p = 0.915


296 Appendix D: (Continued) Table D124: Grade teaching and use of FCAT writing rubric Grade Yes No Total 0 1 1 Kindergarten .0% 100.0% 100.0% 0 2 2 1st .0% 100.0% 100.0% 0 3 3 2nd .0% 100.0% 100.0% 0 5 5 3rd .0% 100.0% 100.0% 9 1 10 4th 90.0% 10.0% 100.0% 9 5 14 5th 64.3% 35.7% 100.0% 1 0 1 6th 100.0% .0% 100.0% 19 17 36 Total 52.8% 47.2% 100.0% Chi-Square = 19.492 p = 0.003


297 Appendix D: (Continued) Table D125: Age and use of FCAT writing rubric Age Yes No Total 2 1 3 21 26 66.7% 33.3% 100.0% 3 2 5 27 32 60.0% 40.0% 100.0% 7 4 11 33 -38 63.6% 36.4% 100.0% 1 6 7 39 -44 14.3% 85.7% 100.0% 1 0 1 45 50 100.0% .0% 100.0% 7 5 12 Over 50 58.3% 41.7% 100.0% 21 18 39 Total 53.8% 46.2% 100.0% Chi-Square = 6.601 p = 0.300 Question 28: How often do you us e the FCAT Writing Assessment? Table D126: Experience and freque ncy of using FCAT writing rubric Exp Others Daily Weekly Monthly Total 1 0 2 2 5 1 -5 years 20.0% .0% 40.0% 40.0% 100.0% 0 1 0 4 5 6 10 years .0% 20.0% .0% 80.0% 100.0% 2 0 3 1 6 11 -15 years 33.3% .0% 50.0% 16.7% 100.0% 1 0 0 2 3 16 20 years 33.3% .0% .0% 66.7% 100.0% 2 0 0 2 4 More than 20 years 50.0% .0% .0% 50.0% 100.0% 6 1 5 11 23 Total 26.1% 4.3% 21.7% 47.8% 100.0% Chi-Square = 14.204 p = 0.288


298 Appendix D: (Continued) Table D127: Grade teaching and frequency of using FCAT writing rubric Grade teaching Others Daily Weekly Monthly Total 2 1 4 2 9 4th 22.2% 11.1% 44.4% 22.2% 100.0% 3 0 1 6 10 5th 30.0% .0% 10.0% 60.0% 100.0% 1 0 0 0 1 6th 100.0% .0% .0% .0% 100.0% 6 1 5 8 20 Total 30.0% 5.0% 25.0% 40.0% 100.0% Chi-Square = 7.659 p = 0.264 Table D128: Age and frequency of using FCAT writing rubric Age Others Daily Weekly Monthly Total 0 0 1 1 2 21-26 .0% .0% 50.0% 50.0% 100.0% 1 0 0 2 3 27-32 33.3% .0% .0% 66.7% 100.0% 2 1 2 2 7 33-38 28.6% 14.3% 28.6% 28.6% 100.0% 0 0 0 1 1 39-44 .0% .0% .0% 100.0% 100.0% 0 0 1 0 1 45-50 .0% .0% 100.0% .0% 100.0% 3 0 1 4 8 Over 50 37.5% .0% 12.5% 50.0% 100.0% 6 1 5 10 22 Total 27.3% 4.5% 22.7% 45.5% 100.0% Chi-Square = 10.140 p = 0.811


299 Appendix D: (Continued) Table D129: Frequency of FCAT rubric use by county .00 1.00 2.00 3.00 Total 1 1 1 3 6 Public County A 16.7% 16.7% 16.7% 50.0% 100.0% 2 0 3 7 12 Public County B 16.7% .0% 25.0% 58.3% 100.0% 3 1 4 10 18 16.7% 5.6% 22.2% 55.6% 100.0% Question 30: How helpful do you feel that the feedback from the FCAT rubric is to your students? Table D130: Helpfulness of feedback on FCAT rubric to students Frequency Percent Valid Percent Extremely helpful 1 2.3 4.5 Helpful 15 34.9 68.2 Somewhat helpful 3 7.0 13.6 Minimally helpful 3 7.0 13.6 Total 22 51.2 100.0 Missing 21 48.8 Total 43 100.0 Table D131: Helpfulness of FCAT rubric feedback by county 1.00 2.00 3.00 4.00 Total 0 6 0 0 6 Public County A .0% 100.0% .0% .0% 100.0% 1 5 3 2 11 Public County B 9.1% 45.5% 27.3% 18.2% 100.0% 1 11 3 2 17 5.9% 64.7% 17.6% 11.8% 100.0%


300 Appendix D: (Continued) Table D132: Experience and helpfulness of FCAT writing assessment to students Exp Extremely helpful Helpful Somewhat helpful Minimally helpful Total 1 3 0 1 5 1 -5 years 20.0% 60.0% .0% 20.0% 100.0% 0 3 1 0 4 6 10 years .0% 75.0% 25.0% .0% 100.0% 0 4 0 2 6 11 -15 years .0% 66.7% .0% 33.3% 100.0% 0 1 2 0 3 16 20 years .0% 33.3% 66.7% .0% 100.0% 0 4 0 0 4 More than 20 years .0% 100.0% .0% .0% 100.0% 1 15 3 3 22 Total 4.5% 68.2% 13.6% 13.6% 100.0% Chi-Square = 16.573 p = 0.166 Table D133: Grade teaching and helpfulness of FCAT writing assessment to students Grade teaching Extremely helpful Helpful Somewhat helpful Minimally helpful Total 1 5 1 1 8 4th 12.5% 62.5% 12.5% 12.5% 100.0% 0 7 1 2 10 5th .0% 70.0% 10.0% 20.0% 100.0% 0 1 0 0 1 6th .0% 100.0% .0% .0% 100.0% 1 13 2 3 19 Total 5.3% 68.4% 10.5% 15.8% 100.0% Chi-Square = 2.028 p = 0.917


301 Appendix D: (Continued) Table D134: Age and helpfulness of FCAT writing assessment to students Age Extremely helpful Helpful Somewhat helpful Minimally helpful Total 1 0 0 1 2 21 26 50.0% .0% .0% 50.0% 100.0% 0 3 0 0 3 27 32 .0% 100.0% .0% .0% 100.0% 0 4 1 1 6 33 -38 .0% 66.7% 16.7% 16.7% 100.0% 0 1 0 0 1 39 -44 .0% 100.0% .0% .0% 100.0% 0 0 0 1 1 45 50 .0% .0% .0% 100.0% 100.0% 0 6 2 0 8 Over 50 .0% 75.0% 25.0% .0% 100.0% 1 14 3 3 21 Total 4.8% 66.7% 14.3% 14.3% 100.0% Chi-Square = 22.583 p = 0.093 Question 31: Do you use any standardized writing assessments (SAT, FCAT, etc.)? Table D135: Use of standardized writing assignments Frequency Percent Valid Percent Yes 17 39.5 42.5 No 23 53.5 57.5 Total 40 93.0 100.0 Missing 3 7.0 Total 43 100.0


302 Appendix D: (Continued) Table D136: Use of standardized assessments by county 1.00 2.00 Total 6 2 8 Public County A 75.0% 25.0% 100.0% 4 14 18 Public County B 22.2% 77.8% 100.0% 10 16 26 38.5% 61.5% 100.0% Table D137: Experience and use of standardized assignments Exp Yes No Total 1 7 8 1 -5 years 12.5% 87.5% 100.0% 4 6 10 6 10 years 40.0% 60.0% 100.0% 6 4 10 11 -15 years 60.0% 40.0% 100.0% 3 1 4 16 20 years 75.0% 25.0% 100.0% 3 4 7 More than 20 years 42.9% 57.1% 100.0% 17 22 39 Total 43.6% 56.4% 100.0% Chi-Square = 5.899 p = 0.207


303 Appendix D: (Continued) Table D138: Grade teaching and us e of standardized assignments Grade Teaching Yes No Total 0 1 1 Kindergarten .0% 100.0% 100.0% 0 2 2 1st .0% 100.0% 100.0% 3 0 3 2nd 100.0% .0% 100.0% 0 5 5 3rd .0% 100.0% 100.0% 6 3 9 4th 66.7% 33.3% 100.0% 6 9 15 5th 40.0% 60.0% 100.0% 0 1 1 6th .0% 100.0% 100.0% 15 21 36 Total 41.7% 58.3% 100.0% Chi-Square = 12.960 p = 0.044


304 Appendix D: (Continued) Table D139: Age and use of standardized assignments Age Yes No Total 0 3 3 21 26 .0% 100.0% 100.0% 0 5 5 27 32 .0% 100.0% 100.0% 6 5 11 33 -38 54.5% 45.5% 100.0% 3 4 7 39 -44 42.9% 57.1% 100.0% 0 1 1 45 50 .0% 100.0% 100.0% 7 5 12 Over 50 58.3% 41.7% 100.0% 16 23 39 Total 41.0% 59.0% 100.0% Chi-Square = 8.587 p = 0.127 Question 32: How often do you provide your students with written feedback on their writing assignments? Table D140: Frequency of written feedback on writing assignments Frequency Frequency Percent Valid Percent Almost always 16 37.2 39.0 Frequently 19 44.2 46.3 Once in a while 5 11.6 12.2 Rarely if ever 1 2.3 2.4 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0


305 Appendix D: (Continued) Table D141: Experience and freque ncy of giving written feedback Exp Almost always Frequently Once in a while Total 4 2 2 8 1 -5 years 50.0% 25.0% 25.0% 100.0% 4 5 1 10 6 10 years 40.0% 50.0% 10.0% 100.0% 5 5 1 11 11 -15 years 45.5% 45.5% 9.1% 100.0% 1 3 0 4 16 20 years 25.0% 75.0% .0% 100.0% 2 4 1 7 More than 20 years 28.6% 57.1% 14.3% 100.0% 16 19 5 40 Total 40.0% 47.5% 12.5% 100.0% Chi-Square = 4.055 p = 0.852


306 Appendix D: (Continued) Table D142: Grade teaching and freq uency of giving written feedback Grade Almost always Frequently Once in a while Rarely Total 0 0 0 1 1 Kindergarten .0% .0% .0% 100.0% 100.0% 1 0 1 0 2 1st 50.0% .0% 50.0% .0% 100.0% 1 2 0 0 3 2nd 33.3% 66.7% .0% .0% 100.0% 0 4 1 0 5 3rd .0% 80.0% 20.0% .0% 100.0% 6 3 1 0 10 4th 60.0% 30.0% 10.0% .0% 100.0% 5 8 2 0 15 5th 33.3% 53.3% 13.3% .0% 100.0% 1 0 0 0 1 6th 100.0% .0% .0% .0% 100.0% 14 17 5 1 37 Total 37.8% 45.9% 13.5% 2.7% 100.0% Chi-Square = 47.769 p = 0.0001


307 Appendix D: (Continued) Table D143 : Age and frequency of giving written feedback Age Almost always Frequently Once in a while Rarely Total 2 0 1 0 3 21 26 66.7% .0% 33.3% .0% 100.0% 2 2 1 0 5 27 32 40.0% 40.0% 20.0% .0% 100.0% 5 3 2 1 11 33 -38 45.5% 27.3% 18.2% 9.1% 100.0% 2 5 0 0 7 39 -44 28.6% 71.4% .0% .0% 100.0% 0 1 0 0 1 45 50 .0% 100.0% .0% .0% 100.0% 5 7 1 0 13 Over 50 38.5% 53.8% 7.7% .0% 100.0% 16 18 5 1 40 Total 40.0% 45.0% 12.5% 2.5% 100.0% Chi-Square = 10.810 p = 0.766 Question 33: What is your current position? Table D144: Current position Position Frequency Percent Valid Percent Others 4 9.3 9.8 Teacher 30 69.8 73.2 Media specialist 1 2.3 2.4 Reading specialist / Literacy coach 5 11.6 12.2 Administrator 1 2.3 2.4 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0


308 Appendix D: (Continued) Table D145: Position distribution by county .00 1.00 2.00 3.00 Total 2 5 1 0 8 Public County A 25.0% 62.5% 12.5% .0% 100.0% 0 15 0 3 18 Public County B .0% 83.3% .0% 16.7% 100.0% 2 20 1 3 26 7.7% 76.9% 3.8% 11.5% 100.0% Question 34: Have you ever taught writing to students? Table D146: Ever taught writing to students Frequency Percent Valid Percent Yes 40 93.0 97.6 No 1 2.3 2.4 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0 Question 35: How many years have you taught writing to students? Table D147: Number of year s teaching writing to students # years Frequency Percent Valid Percent 1 5 years 8 18.6 20.0 6 10 years 10 23.3 25.0 11 15 years 11 25.6 27.5 16 20 years 4 9.3 10.0 Over 20 years 7 16.3 17.5 Total 40 93.0 100.0 Missing 3 7.0 Total 43 100.0


309 Appendix D: (Continued) Question 36: Do you currently teach writing to students? Table D148: Currently teaching writing to students Teaching Frequency Percent Valid Percent Yes 39 90.7 95.1 No 2 4.7 4.9 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0 Question 37: What grade level do you currently teach? Table D149: Grade level teaching Grade Frequency Percent Valid Percent Kindergarten 1 2.3 2.7 1st 2 4.7 5.4 2nd 3 7.0 8.1 3rd 5 11.6 13.5 4th 10 23.3 27.0 5th 15 34.9 40.5 6th 1 2.3 2.7 Total 37 86.0 100.0 Missing 6 14.0 Total 43 100.0 Question 38: What is the highest level of education that you have completed? Table D150: Highest level of education Level Frequency Percent Valid Percent Others 1 2.3 2.4 Bachelors 20 46.5 48.8 Masters 20 46.5 48.8 Total 41 95.3 100.0 Missing 2 4.7 Total 43 100.0


Table D151: Highest degree distribution by county

County            .00         1.00          2.00          Total
Public County A   1 (12.5%)   3 (37.5%)     4 (50.0%)     8 (100.0%)
Public County B   0 (0.0%)    10 (55.6%)    8 (44.4%)     18 (100.0%)
Total             1 (3.8%)    13 (50.0%)    12 (46.2%)    26 (100.0%)

Question 39: With which of the following are you currently affiliated?

Table D152: Affiliation

Affiliation      Frequency   Percent   Valid Percent
Public school    27          62.8      65.9
Private school   13          30.2      31.7
Home school      1           2.3       2.4
Total            41          95.3      100.0
Missing          2           4.7
Total            43          100.0

Question 40: In which county is your school?

Table D153: County in which the school is located

County    Frequency   Percent   Valid Percent
A         21          48.8      52.5
B         18          41.9      45.0
C         1           2.3       2.5
Total     40          93.0      100.0
Missing   3           7.0
Total     43          100.0


Question 41: What is the name of your school?

Table D154: Name of the school

Name of the school                   Frequency   Percent   Valid Percent
Others (did not complete question)   3           7.0       7.0
1                                    1           2.3       2.3
2                                    1           2.3       2.3
3                                    1           2.3       2.3
4                                    2           4.7       4.7
5                                    1           2.3       2.3
6                                    11          25.5      25.5
7                                    1           2.3       2.3
8                                    4           9.3       9.3
9                                    7           16.3      16.3
10                                   4           9.3       9.3
11                                   7           16.3      16.3
Total                                43          100.0     100.0

Question 42: What is your gender?

Table D155: Gender of the respondent

Gender    Frequency   Percent   Valid Percent
Male      3           7.0       7.3
Female    38          88.4      92.7
Total     41          95.3      100.0
Missing   2           4.7
Total     43          100.0


Question 43: In which range does your age fall?

Table D156: Age of the respondent

Age group (years)   Frequency   Percent   Valid Percent
21-26               3           7.0       7.5
27-32               5           11.6      12.5
33-38               11          25.6      27.5
39-44               7           16.3      17.5
45-50               1           2.3       2.5
Over 50             13          30.2      32.5
Total               40          93.0      100.0
Missing             3           7.0
Total               43          100.0

