- Huge time saver on the grading side of things
- Better than a cumulative SBG final exam, still worse than an MC final exam, but I'm OK with this
- Students seemed to appreciate the opportunity to focus their studying on only the content areas they were weakest in
- Since every student was working on their own stuff, the opportunity to cheat was gone
- That doesn't mean they didn't try, however
- A lot of students asked "why don't other teachers do this?"
Cons:
- Huge time suck on the preparation side of things
- Creating 34 different assessments is hopefully a one-time chore
- Organization was a little cumbersome, tracking all that paper
- 15 standards was too much for most students
- I still think it's an acceptable goal, but most students simply have no appreciation for how much time should be spent on a problem when one truly understands the material. The ACT math section is going to be a harsh dose of reality.
Improvements for next year:
- Minimum of 5 assessments (see below)
- Possibly a requirement that students with a grade below a B work for the entirety of the 90-minute exam period
- Better tracking of what standards are chosen
- I emphasized that students should pick their lowest scores to ensure that their grade doesn't drop, but I did have some fail to heed that warning. Not sure what they expected to happen.
- Some form of prioritizing standards that haven't been assessed in a long time
- Related: deprioritizing "easy standards" that aren't especially relevant to the course overall (a rough sketch of how that prioritization might work follows this list)
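If I ever get around to automating that last pair of ideas, the ranking might look something like the sketch below. This is purely hypothetical (the standard names, scores, dates, and weights are all made up), but the idea is to surface standards that combine a low current score, a long gap since the last assessment, and real importance to the course.

```python
# Purely hypothetical sketch: rank standards for reassessment by combining
# a low current score, time since the standard was last assessed, and the
# standard's weight in the course. All names and numbers below are made up.
from datetime import date

# (standard, current score on a 0-4 scale, last assessed, weight in course)
standards = [
    ("Solving quadratic equations",    2.0, date(2012, 3, 15), 1.0),
    ("Simplifying radicals",           3.5, date(2012, 5, 20), 0.5),
    ("Graphing exponential functions", 1.5, date(2012, 2, 10), 1.0),
]

def priority(record, today=date(2012, 6, 1)):
    name, score, last_assessed, weight = record
    months_stale = (today - last_assessed).days / 30.0  # time since last assessed
    need = 4.0 - score                                   # distance from mastery
    return (need + months_stale) * weight                # bigger = reassess sooner

for record in sorted(standards, key=priority, reverse=True):
    print(f"{record[0]}: priority {priority(record):.1f}")
```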
I would say the most surprising aspect of the whole experiment was how many students' first reaction was to try and cheat the system. Honestly - here I was working myself to the bone to give students a chance to demonstrate what they had learned on their terms, and their immediate response was "how can I twist this to my advantage?" IT WAS SET UP FOR YOUR ADVANTAGE YOU LAZY SACK OF CRAP! Sorry, had to get that off my chest.
For example, the most common question I got leading up to the final exam was "what happens if I leave everything blank?" Naively, the first time I heard this I thought "well, nothing happens." After all, one of the major points of the SBG philosophy is that grades should not be behavioral rewards or punishments, but measures of student proficiency. Why should I care if a student opts out of a chance to demonstrate proficiency? Besides, at least half of my students were on track to earn a D or worse leading up to the final exam. If a student is somehow satisfied with those results, who am I to stand in their way? However, I quickly saw that such a philosophy would lead to mass chaos if allowed to spread on a wide scale, so I haphazardly tossed in a minimum of 5 assessments out of the 34 total covered throughout the semester. It at least kept students awake and working for the 90-minute exam period.
Overall, most students either maintained their proficiency level (expected) or demonstrated a slight gain (hoped for). I don't think anyone moved more than a single gradation (a B to a B+, for example). I did allow students to lower their scores, not as a punishment, but because the grade should reflect student proficiency. If a student demonstrated 'C' level proficiency back in March, but 'D' level proficiency in June, the original C shouldn't be kept for old times' sake. Interestingly enough, this was probably the sole complaint I heard about the entire setup, but I didn't understand why. Every other teacher in the school simply weights the final exam at 20% of the semester grade and lets the numbers fall where they may. Are students just oblivious to how poorly most of them perform on cumulative final exams?
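To put some hypothetical numbers on that comparison: under a 20% weighting, a student carrying an 85% into the exam who scores 60% on a cumulative final ends the semester at 0.8(85) + 0.2(60) = 80%, a five-point drop that apparently draws no complaints at all.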
Verdict: I will totally be repeating this format in the future.