Evaluating Human Factors Beyond Lines of Code

Part of PL research is building a science of programming. PL has a science of program correctness: we make precise claims about soundness / completeness / undefined behavior / etc., and we justify those claims with proofs. PL has a science of program performance: we make precise claims about relative speed or resource utilization compared to prior art, and we justify those claims with benchmarks. 

One can find dozens of examples in every PL venue (even POPL!) of systems or techniques described as “usable”, “intuitive”, “easy to reason about”, and so on. However, the thesis of this post is that PL research lacks a commensurate science of human factors for dealing with these claims. PL papers tend to make imprecise usability claims and justify them with weak methods like comparing lines of code (LOC). The goal of this post is to provide actionable advice for PL researchers to improve their human-centered vocabulary and methods. Rather than just saying “run more user studies”, I will argue that we should embrace qualitative methods, modernize our usability metrics, and leverage prior work on programmer psychology.
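
To see why LOC is such a weak proxy for usability, consider a small illustration (my own hypothetical example, not drawn from any particular paper). The two Rust functions below are behaviorally identical, yet differ roughly fivefold in line count:

    // Two behaviorally identical functions: a dense iterator chain and an
    // explicit loop. Their line counts differ ~5x, but the count says nothing
    // about which one a given programmer finds easier to read, write, or debug.

    fn sum_even_squares_chained(xs: &[i32]) -> i32 {
        xs.iter().filter(|x| *x % 2 == 0).map(|x| x * x).sum()
    }

    fn sum_even_squares_loop(xs: &[i32]) -> i32 {
        let mut total = 0;
        for x in xs {
            if x % 2 == 0 {
                total += x * x;
            }
        }
        total
    }

    fn main() {
        let xs = [1, 2, 3, 4, 5];
        assert_eq!(sum_even_squares_chained(&xs), sum_even_squares_loop(&xs));
    }

A LOC comparison declares the chained version the winner; a comprehension study might well find the opposite for some populations of programmers. Which is actually easier to use is an empirical question about people, and counting lines cannot answer it.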

I am not the first person to argue for a more human-centered approach to PL. “Programmers Are Users Too” and “PL and HCI: Better Together” are excellent articles that advocate for importing ideas from the field of human-computer interaction (HCI) into PL research, such as user studies, iterative design, and contextual inquiry. These techniques are valuable; I use them in my own research on languages with sizable populations of existing users, such as Rust.
