I left a comment on a post on Mark Guzdial’s blog, and I wanted to repost it here.
My response:
Normative, ipsative, and criterion-based - how do we measure our success rate in CS Ed? Here, a criterion-based metric has been proposed: no more than 5% of the class can fail. When writing my dissertation, I compared our Computational Thinking course to the other courses in the “Quantitative and Symbolic Reasoning” bucket (http://imgur.com/a/fUxsQ) for a normative comparison. When you wrote your “Exploring Hypotheses about Media Comp” paper, you used an ipsative assessment to evaluate your CS1’s retention (“… a course using Media Computation will have a higher retention rate than a traditional course.”).
It’s hard to argue that one of these approaches is better than the others. Criterion-based measures are usually formed by looking at many normative distributions, so in practice the two aren’t that different. Ipsative assessment can be unsatisfying and insufficient for making honest comparisons. And looking at things normatively doesn’t help much if we assume that most of our introductory courses aren’t doing a very good job.
But setting aside the question of what we’re basing this on, does 5% feel like the right number? Currently, we have roughly a 10% DFW (D, F, or Withdraw) rate in our CT course. I think we’re nearing the bottom of what we can reasonably do to bring those DFW students to success: most of them are students who stopped working for reasons outside my control or who had medical crises. I’m not sure I could squeeze out another 5% without seriously lowering my standards for success. And that’s in a non-majors class, where my expectations are very different from what I want out of CS majors.
Ultimately, my big reaction is that assessment is really, really hard (e.g., Allison’s work), and we aren’t yet good enough at it to micromanage our pass/fail rates. Whatever arbitrary number we choose as success is tied very heavily to how we measure success in the first place.