The ICER schedule is out, so I’m reading through the papers I can find. I just finished Nelson and Ko’s position paper, “On Use of Theory in Computing Education Research,” and I found it thought-provoking.

In particular, this paper speaks to one of my darkest desires: to design and develop instead of doing research. Obviously, the paper itself is more nuanced than that dichotomy: the authors present trade-offs in researching theories vs. researching designs. They present some limitations of theory-driven research for CS Ed:

  • Time spent testing explanatory theories may not pay off and just distract from high-value design research
  • We can miss out on debates about theories (e.g., “Is cognitive load theory (CLT) falsifiable?”)
  • Focusing on proof-of-concept designs that are “sufficiently-effective” may inhibit us from creating “optimally-effective” designs
  • We may not try to answer research questions unique to CS (“What does it mean to know a programming language?”)
  • We may over-rely on theory when reviewing papers and dismiss viable research ideas because they don’t fit our preconceived theoretical notions

I wouldn’t have found that last point too compelling if the authors hadn’t shared a (personal?) anecdote about a paper being repeatedly rejected:

“The original consensus was that the experimental design is strong, but the results are weak. Specific strengths of the experimental design included random assignment and large numbers of students involved. This paper was rejected for several reasons, many of these reasons using CLT as a critical lens. One CLT critique in the reviews was that a theoretically-framed design evaluation must produce results that support the theory. More specifically, in this case CLT predicted a design with worked examples should do better, but instead the empirical results of the paper were mixed”

This is insane, and just plain bad science. It’s terrifying to think that there are reviewers out there who believe that finding results counter to a theory is a reason to reject a paper - if anything, it only increases the need to have the paper published and discussed! I really would like to hear more about the circumstances of this paper. (Sidebar: we ran a small study on worked examples (WEs) last spring and actually had similar results: our quasi-experimental study with voluntary use of WEs surprisingly yielded no significant difference; if anything, students who took advantage of WEs were doing worse, even when we tried to control for factors like self-efficacy and ability.)

Overall, I feel that this paper is also somewhat dangerous and should be read by experts in CS Ed, not novices. You need a background in Education/Learning Sciences theory before you can decide how much to stray from it, and I worry that some people might use these arguments to ignore all of the great work that comes to us from Ed. Still, I can’t deny that it’s probably going to have an effect on me - I want to live more in the design world than the theory world.

Nelson and Ko offer a list of recommendations at the end, which I appreciated enough to copy here:

  1. Focus on design, domain-specific theories of learning, and validated measures of effects of learning.
  2. Publish work that distills theory from other fields to make it actionable for design and guiding design search.
  3. Publish all novel designs with some promise of improving outcomes.
  4. Publish all novel designs that future work might build on to then actually improve learning.
  5. Publish all novel designs that appear not to work well that other researchers might recreate, to avoid wasted effort.
  6. Conduct a periodic qualitative study of the critiques used in peer review in our community to detect and mitigate bias.

Title: On Use of Theory in Computing Education Research

Authors: Greg L. Nelson (Univ. of Washington); Andrew Ko (Univ. of Washington)

Venue: ICER’18