Saturday, June 22, 2013

Reflections on the SBG Final Exam

Pros:

  • Huge time saver on the grading side of things
    • Faster to grade than a cumulative SBG final exam, still slower than a multiple-choice final, but I'm OK with this
  • Students seemed to appreciate the opportunity to focus their studying on only the content areas where they were weakest
  • Since every student was working on their own stuff, the opportunity to cheat was gone
    • That doesn't mean they didn't try, however
  • A lot of students asked "why don't other teachers do this?"

Cons:

  • Huge time suck on the preparation side of things
    • Creating 34 different assessments is hopefully a one-time chore
  • Organization was a little cumbersome, tracking all that paper
  • 15 standards was too much for most students
    • I still think it's an acceptable goal, but most students simply have no sense of how little time a problem takes when you truly understand the material. The ACT math section is going to be a harsh dose of reality.

Improvements for next year:

  • Minimum of 5 assessments (see below)
    • Possibly a requirement that students with a grade below a B work for the entirety of the 90-minute exam period
  • Better tracking of what standards are chosen
    • I emphasized that students should pick their lowest scores to ensure that their grade doesn't drop, but I did have some fail to heed that warning. Not sure what they expected to happen.
  • Some way to prioritize standards that haven't been assessed in a long time (see the sketch after this list)
    • Related: deprioritizing "easy standards" that aren't especially relevant to the course overall
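
As a rough idea of what that prioritization could look like, here's a minimal sketch in Python. Everything in it is hypothetical - the standard names, the scores, the dates - but the idea is just to rank a student's standards by lowest score first, breaking ties in favor of whatever has gone longest without being assessed.

```python
from datetime import date

# Hypothetical records for one student: (standard, score, date last assessed)
standards = [
    ("Linear equations", 3.5, date(2013, 5, 20)),
    ("Quadratics",       2.0, date(2013, 3, 12)),
    ("Exponents",        2.0, date(2013, 5, 28)),
    ("Systems",          3.0, date(2013, 2, 25)),
]

def reassessment_priority(record):
    _standard, score, last_assessed = record
    # Lowest score sorts first; among equal scores, the older
    # (staler) assessment date sorts first.
    return (score, last_assessed)

for standard, score, last in sorted(standards, key=reassessment_priority):
    print(f"{standard}: score {score}, last assessed {last}")
```

With those made-up numbers, Quadratics outranks Exponents despite the identical score, because it hasn't been touched since March.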

I would say the most surprising aspect of the whole experiment was how many students' first reaction was to try to cheat the system. Honestly - here I was working myself to the bone to give students a chance to demonstrate what they had learned on their terms, and their immediate response was "how can I twist this to my advantage?" IT WAS SET UP FOR YOUR ADVANTAGE YOU LAZY SACK OF CRAP! Sorry, had to get that off my chest.

For example, the most common question I got leading up to the final exam was "what happens if I leave everything blank?" Naively, the first time I heard this I thought "well, nothing happens." After all, one of the major points of the SBG philosophy is that grades should not be behavioral rewards or punishments, but measures of student proficiency. Why should I care if a student opts out of a chance to demonstrate proficiency? Besides, at least half of my students were on track to earn a D or worse leading up to the final exam. If a student is somehow satisfied with those results, who am I to stand in their way? However, I quickly saw that such a philosophy would lead to mass chaos if allowed to spread on a wide scale, so I haphazardly tossed in a minimum of 5 assessments out of the 34 total covered throughout the semester. It at least kept students awake and working for the 90-minute exam period.


Overall, most students either maintained their proficiency level (expected) or demonstrated a slight gain (hoped for). I don't think anyone moved more than a single gradation (a B to a B+, for example). I did allow students to lower their scores, not as a punishment, but because the grade should reflect current proficiency. If a student demonstrated 'C' level proficiency back in March, but 'D' level proficiency in June, the original C shouldn't be kept for old time's sake. Interestingly enough, this was probably the sole complaint I heard about the entire setup, and I didn't understand why. Every other teacher in the school simply weights the final exam at 20% of the semester grade and lets the numbers fall where they may. Are students just oblivious to how poorly most of them perform on cumulative final exams?
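
For anyone who hasn't run the arithmetic on that standard setup, here's a quick sketch with made-up but plausible numbers. A solid B student who bombs a 20%-weighted cumulative final takes a real hit:

```python
def semester_grade(coursework, final_exam, final_weight=0.20):
    """Plain weighted average: the final counts for final_weight of the grade."""
    return (1 - final_weight) * coursework + final_weight * final_exam

# Hypothetical B student (85%) who scores 55% on the cumulative final:
print(semester_grade(85, 55))  # 0.8 * 85 + 0.2 * 55 = 79.0, a C+ on a typical scale
```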

Verdict: I will totally be repeating this format in the future.


Tuesday, June 4, 2013

Gamification

Are bad ideas that get results still bad ideas? 

I feel almost dirty right now. I had an idea last night. We were going to be working on our last worksheet of the year today, and I've been struggling all year to get kids to even bother putting pencil to paper. We've tried everything. Modeling-style discussions disappeared a while ago because we couldn't have meaningful whole-class discussions when only 2 kids did any work. We tried picking students at random to work through problems in front of the class, but again, when no one does any work, that quickly becomes a nightmare. I tried checking worksheets for a completion-based grade, and that worked for a very short while, but soon we were back to square one. Quizzes have been open-notes all year, the idea being that if students are completing the work, it'll be right in front of them on the assessment. Still, maybe 10% of students are actually completing the worksheets.

So I decided to give students an incentive to get the work done. I know that's not a new idea, but I hate anything that makes the reason to get work done something other than learning. My idea was to give students a raffle ticket for every question on the worksheet that was 100% complete. This means detailed steps, the correct answer, and proper units. The raffle tickets will be put into a bucket and one will be drawn for a prize. The prize will be some token I have lying around, but I refuse to make it academic (and made that clear to the students). SBG makes extra credit a non-issue anyway. I told students that they were welcome to work together, but that doing so would inflate the number of tickets out there and decrease an individual's odds of winning.
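
That odds claim is just basic probability. Here's a tiny sketch, with hypothetical numbers, of why helping classmates dilutes your own chances:

```python
def win_probability(my_tickets, total_tickets):
    """One winning ticket is drawn uniformly at random from the bucket."""
    return my_tickets / total_tickets

# Hypothetical: I earn 10 tickets either way.
print(win_probability(10, 40))   # keep to myself, 40 tickets in the bucket: 0.25
print(win_probability(10, 80))   # help classmates double the bucket: 0.125
```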

Also, my rules dictated that I would NOT help in any way. All I would do is pass out tickets based on the number of complete problems I saw on a worksheet when it was put in front of me. I would not say which question was wrong or why. It was fun to see which students figured out that doing problems one at a time was the guaranteed way to keep track of which problems were correct. 

Sadly, a decent chunk of students (maybe 1/3) still did nothing. But that means 2/3 of my students were actively getting their work done! And they have no idea what they MIGHT get! And they know it's NOT academic!

I feel really weird about how well it worked. I guess the real evidence will come from assessment data, to see if getting them to try will lead to proficiency, or if they really did just copy answers to get a raffle ticket.