For the Fall 2020 Pandemic Semester, I taught two sections of GVSU’s 5-credit MTH 124: Functions and Models, our calculus-prep course. This is the course for which I wrote Active Prelude to Calculus, and this was my first time getting to teach the course from my own text.
I taught the course in an unusual pandemic-induced hybrid format that is likely familiar to readers. My classroom could hold up to 20 socially-distanced students wearing facemasks (plus me, also masked), but I had 30 students registered in each section of the course. Some students needed to self-isolate or quarantine periodically, and other students needed the option to attend online at all times. This led to a format where some students would attend class in person, while the remainder were present in a synchronous Zoom meeting that I hosted. (The logistics of that were surprisingly successful and not as big a deal as I feared in August; that should be the subject of a future post, as it is not my main purpose here.) All of this is to say: the pandemic forced some structural changes, and the changes tied to assessment ended up being really good for my students and me.
Working with three other colleagues who were also teaching the course, we decided to rethink our assessment plans, since we had to accommodate some students in person and some online, all simultaneously. We centered the course on 12 Core Learning Targets and weekly Checkpoints (quizzes) that assessed student understanding of those learning targets in a mastery-based system. Here is the gist of it:
- In the last ~30 minutes of our final meeting of each week, students would download that week’s Checkpoint from Blackboard as a PDF.
- After responding to as many of the questions as they could, they would use a scanning app on their phone to submit a single PDF of their work.
- Each learning target typically had a cluster of 2-4 short questions that students needed to answer nearly perfectly in order to earn a mark of “M” (mastered) for credit on that learning target; if they didn’t demonstrate mastery, the mark was “NY” (not yet), and they could re-attempt that target on a future Checkpoint.
- Checkpoints were open-book and open-note. The only constraints on students were no collaboration with other people and no resources beyond the text and their own notes.
Checkpoints constituted 36% of students’ semester grades, effectively 3% for each learning target. (You can see my full syllabus if it’s helpful.) Each of the 12 learning targets appeared on 3 separate Checkpoints: for instance, Checkpoint #5 had learning targets 3, 4, and 5, then Checkpoint #6 had learning targets 4, 5, and 6, and so on. Once a student had mastered a learning target, they could ignore that question on subsequent Checkpoints. If a student hadn’t mastered a learning target after three attempts, they had two options: complete an oral assessment with me by Zoom to demonstrate mastery (available for a limited number of targets), or take a Custom Checkpoint during final exam week with a fourth attempt on up to 3 learning targets (I wrote 25 Custom Checkpoints during final exam week ;-)).
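If it helps to see that rolling schedule written out, here is a tiny sketch; the function and the extension of the pattern beyond Checkpoints #5 and #6 are illustrative, not a transcript of our actual course calendar.

```python
# Illustrative only: Checkpoint #5 covered targets 3-5 and Checkpoint #6
# covered targets 4-6, so this just extends that same rolling pattern. The
# first and last Checkpoints of the real semester necessarily covered fewer targets.
def targets_on_checkpoint(n, num_targets=12):
    """Learning targets scheduled for Checkpoint n under the rolling pattern."""
    return [t for t in range(n - 2, n + 1) if 1 <= t <= num_targets]

print(targets_on_checkpoint(5))  # [3, 4, 5]
print(targets_on_checkpoint(6))  # [4, 5, 6]
```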
The course was not a full instantiation of mastery grading, but the vast majority of the graded work allowed for multiple attempts: 36% for Checkpoints, 20% for weekly WeBWorK sets (10 exercises each, with up to 8 attempts per exercise, though students could ask for more attempts if they were unsuccessful after 7), 15% for weekly Writing Assignments (graded on a points scale, but with some options for revision & resubmission), and 10% for Daily Prep Assignments that were marked on the basis of effort and completeness. These together constituted 81% of the course; there was an additional 4% for a pair of metacognition assignments, plus 15% for a relatively traditional final exam (though it, too, was administered online and was open-note, open-book).
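For anyone tallying along, here is the same grade breakdown gathered in one place; the numbers are simply the percentages from the paragraph above.

```python
# Semester grade components, in percentage points, as described above.
weights = {
    "Checkpoints": 36,
    "WeBWorK sets": 20,
    "Writing Assignments": 15,
    "Daily Prep Assignments": 10,
    "Metacognition assignments": 4,
    "Final exam": 15,
}

# Components that allowed multiple attempts (or were marked on effort and completeness).
multiple_attempt = ["Checkpoints", "WeBWorK sets", "Writing Assignments", "Daily Prep Assignments"]

print(sum(weights[k] for k in multiple_attempt))  # 81
print(sum(weights.values()))                      # 100
```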
There is lots of evidence that mastery grading is good for learning. I absolutely found this to be the case, both in my own experience and in my students’. The combination of frequent, lower-stakes assessment; high standards for mastery; emphasis on conceptual understanding; and the more encouraging message of “not yet!” led to one of the most successful teaching experiences I’ve had.
I know that I was fortunate to have a really good group of students this fall (generally hard-working, with positive attitudes and great patience for navigating the circumstances of COVID), and that certainly influences the results, but the group didn’t seem that different from the students I taught in two sections of the course in Fall 2017. In Fall 2017, the DFW rate was 33%; in Fall 2020, it was 21%. Moreover, in Fall 2020 only a very small percentage of students earned marks in the C range; the vast majority demonstrated very strong understanding of the core ideas of the course and earned semester grades of B or better.
Finally, my students self-reported that they really liked this system: that it lowered the pressure for them, that it allowed them to focus more on learning and understanding, and that it was not demoralizing to get “NY” on their paper. From my perspective as instructor, it made my grading more straightforward and meaningful, and there was abundant evidence that my students adhered to the expectations for academic honesty. The combination of multiple attempts and open-book, open-note assessments resulted in all or nearly all students doing the work themselves. Frequently, students would write on their own paper before submitting: “not yet” or “I need to work on this more.”
Along with a link to this blog entry, I will post some more information for interested instructors on the AC-users list.