Thinking Aloud about Confusing Code: A Qualitative Investigation of Program Comprehension and Atoms of Confusion
\emph{Atoms of confusion} are small patterns of code
that have been empirically validated to be difficult to hand-evaluate by
programmers. Previous research focused on defining and quantifying this
phenomenon, but not on explaining or critiquing it. In this work, we address core
omissions in the body of work on atoms of confusion, focusing on the `how'
and `why' of programmer misunderstanding.
We performed a think-aloud study in which we observed programmers, both
professionals and students, as they hand-evaluated confusing code. We
performed a qualitative analysis of the data and found several
surprising results, which explain previous findings, outline
avenues of further research, and suggest improvements to the
research methodology.
A notable observation is that correct hand-evaluations do not imply
understanding, nor do incorrect evaluations imply misunderstanding.
We believe this and other observations may be used to
improve future studies and models of program comprehension. We argue
that thinking of confusion as an atomic construct may pose challenges to
formulating new candidates for atoms of confusion.
Ultimately, we question
whether hand-evaluation correctness is, itself, a
sufficient instrument to study program comprehension.