When the test fails

Picture this: three qualified and experienced science teachers huddled over a grade 10 science exam trying desperately to figure out not the answers to the questions, but the questions themselves.

It was the first clue that this latest round of government exams had passed from the realm of minor annoyance to utter absurdity. The teachers had discovered what students have been saying all along: "Government exams suck!"

The problems with the science exam were many. First, the level of sophistication required to answer some of the questions was far beyond the government-mandated learning outcomes for the course; second, many of the questions were so poorly written as to be unanswerable from the information provided; and third, some of the passages were written far above the students' grade level.

One teacher applied a readability index to several passages on the grade 10 science exam. Of the seven passages checked, one was written at a grade 8 level, one at grade 10, and five at a grade 12 reading level. This, on a science exam where science, not reading ability, is being tested, and where comprehension of the text is required before the science can be done.

But it isn't only the science exam that has caused frustration. One question on the Social Studies 11 exam asked the identity of Canada's first astronaut. Another alluded to Marshall McLuhan, while a third asked about Frederick Banting. Interesting stuff, maybe, but social studies teachers say it's not part of the curriculum, leaving one teacher to muse that she ought to play the Canadian edition of Trivial Pursuit to prepare her students for the next exam session. After all, if the exam is going to test random facts, why bother teaching the prescribed curriculum?

One of the big controversies this exam session was that not all students wrote the same exam. In English there were two different "forms"; in Math 10 and other courses there were as many as five, and the level of difficulty varied from form to form. In one English class, for example, "Form B" exams were handed in at three times the rate of "Form A." Were the B exams easier? It's hard to say, but they were completed much more quickly than the As.

The whole process raises serious concerns about the reliability of the exams in reporting on student progress. Teachers are hearing from students that there is something fundamentally unfair about evaluating people using different tests. And these exams not only have to be fair; they have to be perceived to be fair.

We can only hope that this fiasco causes teachers, administrators, parents and trustees to ask exactly what is being achieved with these exams, especially in grades 10 and 11. Who's being served? And how do these exams improve real learning? Oh yeah, and what does question #26 mean, anyway?
