Hi! I'm Carl, a UX researcher living in Minneapolis, MN.
LinkedIn | Twitter | GitHub | Scholar | Resume

I'm an experienced industry UX researcher with an academic background in human factors psychology. My approach is multi-method: I'm as comfortable with linear mixed-effects models as with in-depth interviews. My passion is refining quantitative methods and sharing quantitative approaches with other researchers. That said, I always choose the right qualitative or quantitative tool for the research question at hand.
Unlike visual design or interaction design, UX research is hard to judge as good or bad at a glance. I have the experience to evaluate research for both its strategic usefulness and its methodological validity.
Here are a few things I'm particularly proud of and can share publicly:
I conducted research to empirically validate my team's usability improvements to Red Hat OpenShift across versions, demonstrating the positive impact of UX design on the product for the first time at my organization. Along the way, I discovered mathematical flaws in the Single Usability Metric (SUM) benchmarking method and proposed and integrated solutions for them.
I coded a task completion calculator to speed up the statistical analysis of completion data and automate its visualization. It also let others on my team use the appropriate confidence intervals for binary completion data, which can be challenging to calculate for those less familiar with statistics.
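To give a flavor of the kind of calculation involved: for small-sample binary completion data, usability researchers often recommend the adjusted-Wald (Agresti-Coull) interval over the plain Wald interval. The sketch below is illustrative only, with hypothetical names and defaults, and is not the actual code behind my calculator.

```python
import math

def adjusted_wald_interval(successes, trials, z=1.96):
    """Adjusted-Wald (Agresti-Coull) confidence interval for a binary
    completion rate; z=1.96 corresponds to a 95% interval."""
    # Adjust counts: add z^2/2 successes and z^2 trials, then
    # compute a standard Wald interval on the adjusted proportion.
    n_adj = trials + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    # Clamp to [0, 1] since rates outside that range are meaningless.
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Example: 7 of 10 participants completed the task.
low, high = adjusted_wald_interval(7, 10)
print(f"95% CI for completion rate: {low:.2f} to {high:.2f}")
```

The adjustment pulls the estimate toward 0.5 and widens the interval, which gives much better coverage than the naive Wald interval at the small sample sizes typical of usability studies.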
The OpenShift UX and product teams lacked insight into, and agreement on, who our users were and what motivated their work. I championed and led a robust qualitative approach to develop representations of our hard-to-find user base. I published a high-level description of our approach in the Red Hat Research Quarterly (p. 16).
I developed my master's thesis research on human-automation trust perceptions into a peer-reviewed journal article. The work focused on situations where users receive conflicting advice from a human co-operator and an automated decision aid. I found that users have a strong bias toward the human's advice, which reverses only when the human is seen as a novice and the automation as an expert.