This will be my 6th year teaching and my 3rd year as a 'Modeler.' I have never experienced any professional development / curriculum framework like Modeling. Why else would I take it upon myself to create a geometry curriculum based around the modeling philosophy?
But I find myself at a crossroads: I've yet to have much success with Modeling Instruction, whether measured by data or by student opinion. Classroom management has always been a weak spot for me, and I'm starting to think that Modeling is actually making my issues worse.
It's only the 4th day of school, and it's already spectacularly clear to me that my students are well behaved so long as I'm fulfilling the traditional sage-on-the-stage role (which I abhor, btw). Show a quick video clip? They listen attentively. Try to facilitate a discussion about the clip? Good luck. Detail a process using a document camera projecting onto a large screen? Students silently take notes. Stop the 'lecture' to have students explore a concept on their own (or in small groups)? Fugetaboutit.
In talking with other teachers in 'better' school districts, I've always just assumed that 'better' students don't give their teachers as much grief as my students give me. But seeing how cooperative everyone is when I dim the lights and lecture with a doc cam has made me think twice about that. Obviously my students *can* behave and take class seriously, but they often choose not to.
The simple solution is to put Modeling on the back burner for the sake of my own sanity, but I honestly don't think that would have a significant impact on the percentage of students who fail my class (the only metric my district cares about when determining my worth). Besides, deep down I know that Modeling is a better way to get to a deeper understanding of content. Just because students are quiet and (seemingly) attentive doesn't mean they're learning anything.
It all boils down to the paradigm shift that all teachers who embrace Modeling must deal with; I'm simply still struggling to find an effective method to facilitate that shift. Students aren't paying attention to anything non-lecture because they've been taught that lecture is all that matters. The assumption they make is that if I'm not delivering content, then the content must not be that important. So I guess my goal for this year is to drive home the notion that just because I'm not the focus of the class doesn't mean major breakthroughs aren't happening.
Monday, September 9, 2013
Saturday, June 22, 2013
Reflections on the SBG Final Exam
Pros:
- Huge time saver on the grading side of things
- Grading this was better than grading a cumulative SBG final exam, still worse than an MC final exam, but I'm OK with this
- Students seemed to appreciate the opportunity to focus their studying on only content areas that they were weakest in
- Since every student was working on their own stuff, the opportunity to cheat was gone
- That doesn't mean they didn't try, however
- A lot of students asked "why don't other teachers do this?"
Cons:
- Huge time suck on the preparation side of things
- Creating 34 different assessments is hopefully a one-time chore
- Organization was a little cumbersome, tracking all that paper
- 15 standards was too much for most students
- I still think it's an acceptable goal, but most students simply have no appreciation for how much time should be spent on a problem when one truly understands the material. The ACT math section is going to be a harsh dose of reality.
Improvements for next year:
- Minimum of 5 assessments (see below)
- Possibly a requirement that students with a grade below B be working for the entirety of the 90 minute exam period
- Better tracking of what standards are chosen
- I emphasized that students should pick their lowest scores to ensure that their grade doesn't drop, but I did have some fail to heed that warning. Not sure what they expected to happen.
- Some form of prioritizing standards that haven't been assessed in a long time
- Related: deprioritizing "easy standards" that aren't incredibly relevant to the course overall
I would say the most surprising aspect of the whole experiment was how many students' first reaction was to try to cheat the system. Honestly - here I was working myself to the bone to give students a chance to demonstrate what they had learned on their terms, and their immediate response was "how can I twist this to my advantage?" IT WAS SET UP FOR YOUR ADVANTAGE YOU LAZY SACK OF CRAP! Sorry, had to get that off my chest.
For example, the most common question I got leading up to the final exam was "what happens if I leave everything blank?" Naively, the first time I heard this I thought "well, nothing happens." After all, one of the major points of the SBG philosophy is that grades should not be behavioral rewards or punishments, but measures of student proficiency. Why should I care if a student opts out of a chance to demonstrate proficiency? After all, at least half of my students were on track to earn a D or worse leading up to the final exam. If a student is somehow satisfied with those results, who am I to stand in their way? However, I quickly saw that such a philosophy would lead to mass chaos if allowed to spread on a wide scale, so I haphazardly tossed in a minimum of 5 assessments out of the 34 total covered throughout the semester. It at least kept students awake and working for the 90-minute exam period.
Overall, most students either maintained their proficiency level (expected), or demonstrated a slight gain (hoped for). I don't think anyone did more than move a single gradation (a B to a B+ for example). I did allow students to lower their scores, not as a punishment, but because the grade should reflect student proficiency. If a student demonstrated 'C' level proficiency back in March, but 'D' level proficiency in June, the original C shouldn't be kept for old time's sake. Interestingly enough, this was probably the sole complaint I heard about the entire setup, but I didn't understand why. Every other teacher in the school simply weights the final exam at 20% of the semester grade and lets the numbers fall where they may. Are students just oblivious to how poorly most of them perform on cumulative final exams?
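For comparison's sake, here's a quick sketch of what that 20% weighting does to a grade. The scores are made up, not real student data:

```python
# Illustrative only: a traditional 20%-weighted final vs. the SBG approach
# of simply replacing old proficiency scores with new ones.
def weighted_grade(semester_pct, final_pct, final_weight=0.20):
    """Semester grade when the final exam counts for a fixed weight."""
    return (1 - final_weight) * semester_pct + final_weight * final_pct

# A student carrying an 85% who bombs a cumulative final with a 50%
# silently drops to a 78% under the traditional weighting scheme.
assert weighted_grade(85, 50) == 78.0
```

Under that scheme a bad final drags everything down; under mine, only the specific standards a student re-attempts can move, and only by however much their demonstrated proficiency actually changed.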
Verdict: I will totally be repeating this format in the future.
Tuesday, June 4, 2013
Gamification
Are bad ideas that get results still bad ideas?
I feel almost dirty right now. I had an idea last night. We were going to be working on our last worksheet of the year today, and I've been struggling all year to get kids to even bother putting pencil to paper. We've tried everything. Modeling-style discussions disappeared a while ago because we couldn't have meaningful whole-class discussions when only 2 kids did any work. We tried picking students at random to work through problems in front of the class, but again, when no one does any work, that quickly becomes a nightmare. I tried checking worksheets for a completion-based grade and that worked for a very short while, but soon we were back to square one. Quizzes have been open notes all year, with the idea being that if students are completing the work, it'll be right in front of them on the assessment. Still, maybe 10% of students are actually completing the worksheets.
So I decided to give students an incentive to get the work done. I know that's not a new idea, but I hate anything that makes the reason to get work done something other than learning. My idea was to give students a raffle ticket for every question on the worksheet that was 100% complete. This means detailed steps, the correct answer, and proper units. The raffle tickets will be put into a bucket and one will be drawn for a prize. The prize will be some token I have lying around, but I refuse to make it academic (and made that clear to the students). SBG makes extra credit a non-issue anyway. I told students that they were welcome to work together, but doing so would inflate the number of tickets out there and decrease an individual's odds of winning.
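The odds argument is simple arithmetic; here's a sketch with made-up numbers (not my actual class sizes or ticket counts):

```python
# Hypothetical numbers illustrating why collaboration dilutes raffle odds.
def win_probability(my_tickets, total_tickets):
    """Chance of winning a single-ticket draw."""
    return my_tickets / total_tickets

# Working alone: 10 students each earn 10 tickets -> 100 tickets in the bucket.
solo = win_probability(10, 100)

# Copying answers: now 20 students each hold 10 tickets -> 200 in the bucket.
together = win_probability(10, 200)

assert solo == 0.10
assert together == 0.05  # same tickets in hand, half the chance of winning
```

Same number of tickets in your hand, half the chance of winning, which is exactly the point I wanted students to chew on.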
Also, my rules dictated that I would NOT help in any way. All I would do is pass out tickets based on the number of complete problems I saw on a worksheet when it was put in front of me. I would not say which question was wrong or why. It was fun to see which students figured out that doing problems one at a time was the guaranteed way to keep track of which problems were correct.
Sadly, a decent chunk of students (maybe 1/3) still did nothing. But that means that 2/3 of the classes were actively getting their work done! And they have no idea what they MIGHT get! And they know it's NOT academic!
I feel really weird about how well it worked. I guess the real evidence will come from assessment data to see if getting them to try will translate into proficiency, or if they really did just copy answers to get a raffle ticket.
Thursday, May 30, 2013
Choose your own final exam!
At the end of the first semester, I gave the traditional final exam, but with the SBG flair. It was cumbersome to say the least. Fitting 25-30 standards onto a single test expanded it to about 8 pages. In the end, it was a good test in that students were able to finish it and it adequately covered the entire semester. But after 3 days of proctoring finals, I had 2 days to grade 150 exams that were each 8 pages long. Never again.
So leading up to the end of this semester I had an idea: if the core of Standards-Based Grading (SBG) is to pinpoint specific content that students have mastered, why bother making them take a final exam full of content that they've already demonstrated proficiency on? Why not let students focus their efforts on only the standards that they've struggled with?
Here's what I did: I wrote individual assessments for every standard (30 for geometry, 24 for physics and 30 for astronomy). Then I wrote a Google Form to allow students to tell me which standards they'd like to take on the final. Once I sifted through the data, I was able to give each student an exam tailored to their specific needs. The major stipulations were that students had to pick at least 10, and they had to start by picking the standards which hadn't been mastered yet.
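For anyone curious how the "sifting" could work, here's a sketch of the assembly logic. All the names and standard codes are hypothetical, and my actual sifting was done by hand in a spreadsheet, not with a script:

```python
# Hypothetical sketch: build one student's exam from their form response.
MIN_PICKS = 10  # every student had to choose at least ten standards

def exam_standards(unmastered, extra_picks):
    """Return the ordered list of standards for one student's final exam.

    Per the stipulations: not-yet-mastered standards come first, then any
    already-mastered standards the student chose to re-attempt.
    """
    exam = list(unmastered)       # must start with what isn't mastered yet
    for s in extra_picks:         # then mastered standards they opted into
        if s not in exam:
            exam.append(s)
    if len(exam) < MIN_PICKS:
        raise ValueError(f"need at least {MIN_PICKS} standards, got {len(exam)}")
    return exam

# A student with three unmastered standards padding up to the minimum:
picks = exam_standards(["G04", "G11", "G17"],
                       ["G01", "G02", "G03", "G05", "G06", "G07", "G08"])
assert len(picks) == 10 and picks[0] == "G04"
```

Then it's just a matter of pulling the matching single-standard assessments and stapling each student's packet together.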
Here's the form I'm having students fill out (please don't submit anything - it'll only confuse me):
https://docs.google.com/forms/d/18MfYDxVId687PPFSuyvySl_LkguVkpDNsM5w_PQVyn8/edit
I'm really excited to see how this all pans out. Gotta be willing to try something new, right?
Friday, April 19, 2013
Day 132: Intersecting Secants
Unit 9 is the first unit of my new experiment that I hadn't given any thought to last summer when I wrote this all out. It was one of those "I've done a lot, I can finish the rest as we go next year" thoughts that doom all teachers in the summer. As a result, much of the instruction in this unit has been direct, because creating student-centered discovery lessons apparently takes a LOT of time and energy. That, coupled with a lot going on in my personal life, has left me just trying to get by.
I'm at least still making every effort to demonstrate WHY the ideas we're discussing are true. No joke, the prepared materials that I was given when I started teaching were nothing more than: "This is the theorem, this is how it's used, now you try." No attention was given to the why & how questions, which is what led me to create this curriculum.
For secants, the main idea we're discussing is the relationship between the exterior angle and its intercepted arcs. Similar to chords, but when the angle is outside the circle, we're looking for the difference between the two arcs, not the sum.
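The sum-versus-difference contrast is easy to sanity-check with made-up arc measures (all angles in degrees, numbers purely illustrative):

```python
# Numeric check of the two angle relationships with hypothetical arc measures.
def chord_angle(arc1, arc2):
    """Angle formed by two chords meeting INSIDE the circle:
    half the SUM of the two intercepted arcs."""
    return (arc1 + arc2) / 2

def secant_angle(far_arc, near_arc):
    """Angle formed by two secants meeting OUTSIDE the circle:
    half the DIFFERENCE of the two intercepted arcs."""
    return (far_arc - near_arc) / 2

assert chord_angle(80, 40) == 60.0    # inside: (80 + 40) / 2
assert secant_angle(100, 30) == 35.0  # outside: (100 - 30) / 2
```

Same "half of something" structure in both cases, which is exactly what makes students mix them up.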
After a brief exploration demo, students were given the class period to work on U9 WS3.
Thursday, April 18, 2013
Day 131: Chords
For whatever reason, I never interpreted chords as a subset of secants. Rather, I thought of chords as a separate classification of segments, akin to tangents. What's interesting (to me anyway, I get that I'm not 'normal') is that it seems as though every math textbook ever written handles the chapter on circles differently from every other text. There is SO MUCH you can discover with circles, so it ends up forcing a value judgement as to what you want students to know (and how to present it).
I went with:
- Intersecting chords form an angle within the interior of a circle. The measure of that angle is half the sum of the two intercepted arcs.
- If a chord is bisected at a right angle, the bisector is a diameter of the circle (I always thought that it was neat that you could find the center of a circle with 2 chords)
- Intersecting chords break each other into 4 pieces (2 pieces each). The product of the 2 pieces of one chord is equal to the product of the 2 pieces of the other.
- Equal chords intercept equal arcs and are equidistant from the center of the circle. Probably one of the more confusing and less obviously relevant ideas.
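That third bullet, the product relationship, is easy to sanity-check with made-up segment lengths:

```python
# Made-up lengths checking the intersecting-chords product relationship.
# Two chords cross, cutting each other into two pieces apiece;
# the products of the pieces must match: a * b == c * d.
a, b = 3, 8      # the two pieces of the first chord
c = 4            # one known piece of the second chord
d = (a * b) / c  # the theorem pins down the remaining piece

assert a * b == c * d
assert d == 6.0
```

Which is also why these problems almost always hand students three of the four pieces and ask for the last one.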
On the upside, I haven't had any students pronounce it with the 'ch' from 'church' sound. Progress?
Reflection 75% of the way through
I'm certainly noticing a subconscious shift in my teaching style in this unit, which is most likely the result of a lot of influencing factors. Essentially, I'm seeing that I'm implementing less and less of my vision, which was to incorporate the modeling method and student discovery based learning into geometry. I've slowly morphed into a 'traditional' teacher with instruction that looks more like a lecture than anything else. I tell myself that it could be worse - the curriculum materials that I was given when I started teaching here 5 years ago were very basic fill-in-the-blank notes, low level homework quizzes, and multiple choice tests. I can say with confidence that what I'm doing now is light years beyond where I was.
Some possible reasons why this is happening:
- After 8 months, I grew tired of beating my head against the wall, trying to force a paradigm shift on unwilling participants. Path of least resistance and all. (FWIW, I hate feeling like this, but I can't deny reality)
- The content in this unit (circles) is a mile wide and an inch deep. And what stinks is that it doesn't even lend itself to adaptation. As in, it would be hard to go narrow and deep because of how interconnected everything is. Plus, the ACT loves circle questions, so I feel obligated to show 10th graders everything they might see next year on the most important test of their lives (I hate saying that, too).
- These students (either 10th graders in general, or possibly just 10th graders in my school) are simply not mature enough to handle a modeling approach. MAYBE, if the whole school shifted gears and the philosophy was being reinforced by every teacher a student saw, but when you're the lone wolf, students feel justified in resisting change. "If I can just get through this class, I can get back to a teacher who'll just tell me the answers and I can memorize how to get them."
Looking forward to next year, I am absolutely going to continue with this curriculum that I have spent all year developing. I might try the modeling approach with whiteboards & group discussion in the hope that the class of 2015 was an anomaly (something that's actually talked about openly in the school).
I will certainly NOT go back to the teacher-led, "sage on the stage" method. I hate it. In terms of grades, I'm not seeing any significant difference between that method and my method, so there's no real reason to switch back, other than getting tired of listening to students complain. And let's be honest, if I allowed student complaints to affect me, I wouldn't have lasted this long in this job.