Direct Writing Prediction Models Identify At-Risk Writers

Charles James Harding Conrad II

Abstract

Developing a sufficient level of writing proficiency takes time, and writing is a complex skill that is difficult to measure. The history of writing assessment reveals changing views of construct validity, reliability, and the interpretation of results. This study used a binary logistic regression model with seven years (2012-2018) of annual direct writing assessments for Grades 3-12, scored with the Oregon six-traits rubric. Three predictive models were developed to estimate how likely a participant was to reach writing proficiency and how long it might take to meet that expectation. The research question was, “To what extent can the Annual Writing Assessment scored with the six-traits writing rubric identify at-risk writers in Grades 3-12 at the International Community School Bang Na, Thailand?” Results indicated that the bio-data did not prove significant in any of the three models, and that adding more years of direct writing data improved prediction accuracy. In the Year 1 model, only the averaged test score proved significant. In the Year 2 model, the trait of conventions proved significant, along with the first- and second-year averaged test scores and the difference between those averages. In the Year 3 model, conventions and sentence fluency proved significant, along with the first- and third-year averaged test scores. The process of developing the predictive models and the results for identifying at-risk writers are presented in this quantitative longitudinal study.
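The kind of model the abstract describes can be sketched as a binomial logistic regression in which yearly averaged rubric scores (and individual traits such as conventions) predict a binary proficiency outcome. The sketch below is illustrative only: the variable names (avg_score_y1, conventions, etc.), the 1-6 rubric scale, and the synthetic data are assumptions for demonstration, not the study's actual data or coefficients.

```python
# Minimal sketch of a binomial logistic regression like the Year 2 model:
# predictors are Year 1 and Year 2 averaged six-traits scores, their
# difference, and the conventions trait; the outcome is proficiency (0/1).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Illustrative predictors on an assumed 1-6 rubric scale.
avg_score_y1 = rng.uniform(1, 6, n)
avg_score_y2 = rng.uniform(1, 6, n)
conventions = rng.uniform(1, 6, n)
gain = avg_score_y2 - avg_score_y1  # difference between the yearly averages

X = np.column_stack([avg_score_y1, avg_score_y2, gain, conventions])

# Synthetic binary outcome: 1 = reached writing proficiency.
logit = -6 + 1.0 * avg_score_y2 + 0.5 * conventions
proficient = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, proficient)

# Estimated probability that a student with these scores reaches proficiency.
p = model.predict_proba([[3.5, 4.0, 0.5, 4.5]])[0, 1]
print(round(p, 2))
```

In practice, significance of individual predictors (as reported for conventions and sentence fluency) would be judged from coefficient p-values, e.g. via statsmodels' Logit, rather than from a regularized sklearn fit.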

Author Biography

Charles James Harding Conrad II, International Community School Bang Na

Dr. Charles James Harding Conrad II holds an education doctorate in TESOL from Anaheim University and has been teaching in Thailand since 1996. He taught for four years with Sarasas Affiliated Schools and has taught ESL at the International Community School, Bang Na since 2000.
