Les Perelman

Born: Leslie Cooper Perelman
Education: University of California, Berkeley (BA); University of Massachusetts Amherst (MA, PhD)
Occupation: Educator
Known for: Criticism of standardized testing

Leslie Cooper Perelman is an American scholar and authority on writing assessment.[1][2][3] He is a critic of automated essay scoring (AES),[4][5] and influenced the College Board's decision to terminate the Writing Section of the SAT.[6]

Perelman is currently a research affiliate at the Massachusetts Institute of Technology (MIT).[7] Perelman taught writing and composition at MIT, where he served as Director of Writing Across the Curriculum and an Associate Dean of Undergraduate Education.[8] He was an executive committee member of the Conference on College Composition and Communication[9] and co-chair of its Committee on Assessment.[10]

Teaching


Perelman taught in and directed writing programs at Tulane University and the University of Southern California. At MIT, he taught writing and composition and served as the director of Writing Across the Curriculum and an Associate Dean in the Office of Undergraduate Education.[8]

Criticism of essay scoring


SAT


In a 2005 study of sample and graded essays that the College Board provided as references for the writing portion of the SAT, Perelman reported a high correlation between an essay's length and its score. He also noted that essays were not penalized for factual inaccuracies.[11]

In 2013, Perelman met with David Coleman, the incoming president of the College Board, a conversation that contributed to Coleman's decision to abolish the mandatory SAT Writing Section.[12]

Automatic scoring


In 2012, Perelman demonstrated that long, pretentious essays could achieve higher scores from the ETS scoring engine e-Rater than well-written essays.[13] In 2014, Perelman collaborated with students at MIT and Harvard to develop BABEL, the "Basic Automatic B.S. Essay Language" Generator. The nonsense essays generated by BABEL are claimed to score well when graded by AES systems. Automated graders, Perelman argues, "cannot read meaning, and they cannot check facts. More to the point, they cannot tell gibberish from lucid writing."[14] Perelman's work is cited by the NCTE in its Position Statement on Machine Scoring, which expresses similar concerns about the limitations of AES:

Computer scoring systems can be "gamed" because they are poor at working with human language, further weakening the validity of their assessments and separating students not on the basis of writing ability but on whether they know and can use machine-tricking strategies.[15]

Influence on Australian educational testing


In 2017–2018, Perelman was commissioned by the New South Wales Teachers Federation to write three reports[16][17][18] to assist efforts to reform Australia's national primary and secondary school assessments, the National Assessment Program – Literacy and Numeracy (NAPLAN). His work was a major factor in the Education Council's decision to overrule the Federal Education Minister and prevent the use of automated essay scoring for the writing portion of the NAPLAN tests.[19]

References

  1. ^ "NAPLAN's writing test is 'bizarre' but here's how kids can get top marks". ABC News. 8 April 2018.
  2. ^ "Bientôt, les devoirs seront notés par des machines" [Soon, homework will be graded by machines].
  3. ^ "Essay-grading by software flawed, 'essentially impossible,' expert says".
  4. ^ "Construct Validity, Length, Score, and Time in Holistically Graded Writing Assessments: The Case against Automated Essay Scoring (AES)" (PDF). WAC Clearinghouse. Retrieved June 14, 2015.
  5. ^ "The BABEL Generator and E-Rater: 21st Century Writing Constructs and Automated Essay Scoring (AES)".
  6. ^ "The man who killed the SAT essay". Boston Globe. Retrieved June 14, 2015.
  7. ^ "People Directory". Massachusetts Institute of Technology. Retrieved June 11, 2015.
  8. ^ a b "iMOAT". Massachusetts Institute of Technology. Retrieved June 11, 2015.
  9. ^ "2015 CCCC Officers and Executive Committee". National Council of Teachers of English. Retrieved June 11, 2015.
  10. ^ "Committee on Assessment (November 2016)". National Council of Teachers of English. Retrieved July 29, 2016.
  11. ^ Winerip, Michael (May 4, 2005). "SAT Essay Test Rewards Length and Ignores Errors". The New York Times. Retrieved June 11, 2015.
  12. ^ Balf, Todd (6 March 2014). "The Story Behind the SAT Overhaul". The New York Times. Retrieved April 5, 2015.
  13. ^ Winerip, Michael (22 April 2012). "Facing a Robo-Grader? Just Keep Obfuscating Mellifluously". The New York Times. Retrieved 5 April 2013.
  14. ^ Kolowich, Steve (April 28, 2014). "Writing Instructor, Skeptical of Automated Grading, Pits Machine vs. Machine". The Chronicle of Higher Education. Retrieved June 11, 2015.
  15. ^ "NCTE Position Statement on Machine Scoring". National Council of Teachers of English. Retrieved June 11, 2015.
  16. ^ Perelman, Les. "Automated Essay Scoring and NAPLAN: A Summary Report" (PDF). New South Wales Teachers Federation. Retrieved 28 December 2018.
  17. ^ Perelman, Les (2018). Towards a New NAPLAN: Testing to the Teaching (PDF). Sydney: New South Wales Teachers Federation. ISBN 978-0-6482555-1-2. Retrieved 28 December 2018.
  18. ^ Perelman, Les. "Problems in the Design and Administration of the 2018 NAPLAN" (PDF). New South Wales Teachers Federation. Retrieved 28 December 2018.
  19. ^ Koziol, Michael (29 January 2018). "Computer says no: Governments scrap plan for 'robot marking' of NAPLAN essays". Sydney Morning Herald. Retrieved 28 December 2018.