CHI 2017

Doctoral Consortium

May 2017

I was delighted to be selected for the CHI 2017 Doctoral Consortium. CHI is the premier international conference for human-computer interaction. (HCI researchers study how humans interact with computers and design technologies that enable novel forms of interaction.) This year the conference was held in Denver, Colorado.

Bottom image credit: Karl Fasick, bottom left image credit: Joseph Kohlmann

The Doctoral Consortium was a two-day event held before the main conference. It was an opportunity for each of the 22 doctoral students attending to present our PhD work to date and discuss open problems in our research with senior HCI researchers and our peer group.

Presenting at the CHI Doctoral Consortium. Image credit: Romina Carrasco

I focussed my presentation on the timeline tool I’ve built for analysing historical document collections over time (read more about that here). It was a great opportunity to seek feedback from an HCI community perspective.

I’ve come away thinking about:

  • Evaluation Methods
I need to crystallise what I expect my PhD contribution to be. (Using my tool designs and evaluations to make general recommendations for visualisations of cultural data?) This, along with my research questions, will determine which evaluation strategies are appropriate for meeting those goals.
    • Am I concerned with the distinction between helping historians do tasks they’re already doing and enabling them to ask new research questions of the data? How do I evaluate for each?
    • So far I have leaned away from doing lab studies and I will need to discuss in my thesis differing approaches to evaluating interfaces/visualisations from HCI and design perspectives.
    • If I wanted to do a larger or longer evaluation, the application would need to be more precise and robust. As I’m concerned about spending too much time on engineering issues, it could be productive to ‘Wizard of Oz’ some features when evaluating with users. Alternatively, I could connect the tool to a large online collection, so it wouldn’t have to be robust across different datasets; this might also be useful for reaching many users in a short time.
  • What are some of the limitations and failure modes of my way of analysing texts? What themes will not be discoverable using this tool?
  • I talked about some of the issues of trust and control for users in connection with removing results in my tool to improve legibility. It may also be interesting to link this with current research on trust in machine learning methods.
  • The tool might be useful for analysing other types of text data where time is a meaningful dimension — for instance, social media data or transcripts from interviews conducted over a period of time.

I also presented a poster as part of the conference poster sessions. See my poster here.

Presenting at the CHI poster sessions. Images credit: Dan Lockton

Attending the rest of the conference was very inspiring; there was a great range of creative and innovative visualisations, interfaces and displays presented (including some really wacky and futuristic examples!).

I was particularly interested in the presentation of ‘Finding Similar People to Guide Life Choices: Challenge, Design, and Evaluation’ by Fan Du, Catherine Plaisant, Neil Spring and Ben Shneiderman. This paper discusses designing interfaces where users face non-trivial consequences from the conclusions they draw; I felt there were parallels with some of my own findings about the importance of trust and control for historians in the interfaces they use. Their study demonstrated that “users are more engaged and more confident about the value of the results to provide useful evidence to guide life choices when provided with more control over the search process and more context for the results, even at the cost of added complexity”. This preference for control in search, even at the cost of complexity, is something to consider in my own work on designing interfaces to support scholars.