This afternoon I got to experience one of the new professional development workshops GED Testing Service is rolling out this year. All About Scoring guides teachers on the criteria to score constructed responses on the GED Test.
Since the need for this kind of training is high, I was particularly interested in their format, tools…and how the presenters handle some of the sticky questions that educators ask me in similar sessions.
So I cordially threw out my most popular Extended Response FAQs and thought you’d be interested in the answers straight from GEDTS:
“Spelling is not scored on this rubric. But how can the computer pick up the keywords?”
I always get this question, and it usually comes from educators who were teaching in K-12 when computer-based testing was first implemented. At that time, test takers quickly learned to game the automated scoring with templates and keywords.
The GEDTS folks assured us: there are no keywords.
At this point in the workshop, my participants tend to look like Neo from The Matrix when he learns “there is no spoon.”
How is this possible?
GEDTS emphasized that the new automated scoring engine was trained on thousands of sample responses scored by real humans, much like IBM's Watson supercomputer learned to play Jeopardy! The scoring system learned to look for "a constellation of errors and qualities" in writing, not the "right answer." The technical term for what the computer is doing is latent semantic analysis.
Basically, the computer can read like you do: looking for an adequate explanation of a concept using related terms. But using numbers.
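For the curious, here is a toy sketch of the idea behind latent semantic analysis. This is not GEDTS's actual system, just an illustration: the term counts and "responses" below are invented, and the real engine is far more sophisticated. The point is that similarity comes from a low-dimensional "concept" space built with singular value decomposition, not from matching keywords.

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = three short "responses".
# All counts are invented for illustration.
# Responses 0 and 1 discuss the same topic; response 2 is off-topic.
terms = ["evidence", "argument", "support", "banana"]
X = np.array([
    [2, 1, 0],   # evidence
    [1, 2, 0],   # argument
    [1, 1, 0],   # support
    [0, 0, 3],   # banana
], dtype=float)

# Latent semantic analysis: a truncated SVD projects each response into a
# low-dimensional "concept" space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                        # keep two latent concepts
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T    # one row per response

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sim_related = cosine(doc_vectors[0], doc_vectors[1])
sim_unrelated = cosine(doc_vectors[0], doc_vectors[2])
print(sim_related > sim_unrelated)  # prints: True
```

In the reduced space, the two on-topic responses end up close together even though they use different word counts, while the off-topic response lands far away. That is the sense in which the computer "reads like you do, but using numbers."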
This is Artificial Intelligence, folks. No joke. We’re living in the movie 2001. In 2015.
But there is a human backup system. If the computer can't confidently score a response, it gets kicked back to a human. If a test taker disputes a score, a human handles it. And humans regularly audit a sample of responses just to make sure the system is working properly.
So there are things the computer can’t do, but one thing it definitely won’t be looking for is keywords. We’ve come a long way since the first computer-based scoring in K-12, technologically speaking. If a student can game the automated scoring system, we’ve found the next Tony Stark, and we should worry about him or her hacking the Pentagon.
“But what about templates? What structure should I teach my students so they can pass?”
I’ll defer here to some direct quotes from the GEDTS trainers’ response:
There are no templates.
Ideas should drive the structure, not structure drive the ideas.
Using formulas for writing gives our students permission to check their brains at the door and not do the work we’re asking them to do.
The 2002 GED Test couldn’t handle good writing. Good writing doesn’t conform to the template of mediocre writing. We can celebrate really good writing by using the scoring tools.
“Who are the ‘subject matter experts’ quoted in the GEDTS scoring guide materials?”
Interesting backstory: after collecting the thousands of sample responses, GEDTS ran a process called Range Finding. Real adult educators discussed how they would score the responses, and why, in order to establish the range for each score on the rubric. It took a few weeks. During the sessions, the GEDTS Content Manager (not her real title) listened in and quoted the educators for the guide.
There are no magic, elite subject matter experts driving these scores. They were real educators having conversations about scoring.
The goal of the All About Scoring workshop is that, with enough practice using these scoring tools, you, too, can become a GEDTS scoring expert.
The scoring tools and more can be found at http://www.gedtestingservice.com/educators/constructedresponse