An idiosyncratic trip report from CHI 2017

I wanted to give some shout-outs and observations from stuff I saw [1] during my trip to CHI 2017 as an NSF rotating program officer. [2] I didn’t get to see that many talks, because I spent a lot of time in NSF advice mode (including the NSF session that Chia Shen organized and that Amy Baylor and I helped out with; slides from that are available), and those I did see tended to be in spaces where I’m not yet expert but where I am managing some proposals. That way of choosing sessions turned out to be productive: Ron Burt argued at CSCW 2013 that one should occasionally go into other communities, and I did get some interesting insights that I wanted to put out there for other people to consider. Stories below are in roughly chronological order.

Barry Brown gave a nice talk about the social and semantic shortcomings of self-driving cars. The high-level point is that in driving, people send signals to each other all the time (not just middle fingers) that help coordinate driving behavior. These signals get sent both with the car’s body — we drift, we leave gaps, we close gaps, we turn the wheel just a little at a stop — and with our own — gaze, nods, frowns, waves (and sometimes those middle fingers). Further, we have driving norms that differ by road condition, location, and culture. His claim is that self-driving cars neither read nor send these signals well, and don’t obey these norms, because the way they “see” driving is primarily in terms of finding where to drive and avoiding collisions. This, in turn, will cause coordination problems with other drivers and leave self-driving cars prone to being taken advantage of, because they are relatively cautious compared to human drivers. It made me think about a self-driving car trained in Texas (very accommodating drivers, on average) taking a trip to New York (not so much), about whether self-driving cars could cope with India’s city traffic, and about just how you’d give a self-driving car a little more semantic signaling and social grace. [3]

Huiyuan Zhou’s talk about their system “Block Party” also made a nice point about how common map interfaces (in particular, Google Maps) emphasize place and route selection at the expense of other use cases. In particular, Block Party aims at use cases like tourism and moving that require exploration, sensemaking, and discovery of places, which in turn benefit from the use of pictorial, situated views and tools for orientation. Google Maps has tools like Street View that support these activities, but the talk claimed they are too tucked away in the interface behind the primary tasks, so people tend not to use them. Evidence for this comes from a comparison between the features people use in Google Maps versus Block Party (which foregrounds these exploration-related features) when completing sensemaking tasks; Block Party users were more likely to explore situated views and remembered more about the neighborhoods they explored. This has some straight-up design implications about map interfaces having multiple modes. They also had an interesting speculation about cases where people are exploring a place together (such as a CHI lunch group trying to figure out where to go), which suggests interactions where multiple phones are yoked to present different views or support different parts of the sensemaking task. [4]

There was another paper in the same session, presented by Nancy Smith, around environment designs that are less centered on human needs and goals (in particular, there was motivation from the apparently-growing Animal-Computer Interaction community). I am less personally attuned to this paper, though it had some plausibly interesting theoretical grounding; still, when Nancy claimed that human environments are over-engineered for human safety at the expense of animals, it made me think about the Brown and Laurier paper’s claim that autonomous cars’ focus on safety might lead to other negative consequences. The parallel was interesting, and I wonder if it would be useful to think about other places where we’re doing that as well, either specifically around safety, or around other values that are consistently over- or under-emphasized in design. [5]

One such value, which I think is over-claimed and under-implemented in CHI work generally, is human agency. [6] Thus, it was nice to see agency get center stage in Amanda Lazar’s double-feature on designing tangible and sharing interfaces for people with cognitive impairments. Using a “critical dementia” theoretical framing that encourages us to think less in terms of loss and impairment [7] and more in terms of experiences and strengths, she’s done a lot of work to develop toolkits aimed at supporting self-expression by people with dementia and their connection with both family and formal caregivers. I wish there had been a stronger statement of how agency was reasoned about during the design process, as well as some discussion of possible risks to agency, but it was still cool and moving work. [8] [9]

There were also a couple of other nice little themes in that session. First, both Amanda’s talks and Anthony Hornof’s work on designing for people with Rett syndrome (who have very severe cognitive and motor impairments) wound up pointing to worlds where flexible tooling might allow therapists, caregivers, and/or family to explore simple systems that could improve experiences and maybe agency for people with very individual needs that mainstream assistive technologies don’t address well. [10] Second, and related, is a theme about designing for caregivers and not just for the cared-for; this came out pretty strongly in Kellie Morrissey et al.’s paper about their attempt to build a mapping system that asked people to contribute information about the suitability of places for people with dementia. [11] It was a really nice session.

I also dropped in on a usable security session that was fun, if slightly wacky. [12] Yomna Abdelrahman and Mohamed Khamis gave a cute little talk about guessing phone PINs and lock patterns using thermal imaging. It’s unclear whether it’s a practical attack (especially if people immediately use the phone, messing up the thermal signature), but at least for simple PINs and patterns it’s pretty effective if you can get a thermal image within 30 seconds or so. [14] Sauvik Das presented an interaction technique that uses rhythmic tapping as a shared group password that can identify particular individuals in the group while rejecting attackers. I’m not sure I believe it’s the next big thing in authentication, as it feels like a lot of work for the benefit in a low-security situation. I did, however, like the underlying framing of “socially intelligent” security that calls attention to security requirements and goals in families and small groups. [15][16] Joshua Tan’s paper also had a fun element, using a unicorn avatar generator to create pictorial rather than textual hashes of public keys, with the hope that this would lead to more effective detection of adversarial imposters when using cryptophones. Not so much, it turns out, at least in this implementation and experimental context, but the problem of helping people reliably and easily verify key hashes is a good one. [17]
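The shared-rhythm idea can be sketched very roughly. This is my own illustration of the general approach, not the paper’s actual classifier: each group member enrolls a template of normalized inter-tap intervals, and a login attempt is accepted only if it is close enough to some member’s template, which both authenticates the group secret and identifies who tapped it.

```python
# Illustrative sketch of rhythm-based group authentication (my own toy
# version, not the Thumprint paper's actual method). Each member enrolls a
# template of inter-tap intervals; an attempt is accepted if it closely
# matches some template, simultaneously authenticating and identifying.

def intervals(tap_times):
    """Convert absolute tap timestamps (seconds) to normalized inter-tap gaps."""
    gaps = [b - a for a, b in zip(tap_times, tap_times[1:])]
    total = sum(gaps)
    # Normalize so the overall tempo doesn't matter, only the rhythm.
    return [g / total for g in gaps]

def identify(tap_times, templates, threshold=0.05):
    """Return the best-matching member name, or None if no template is close.

    templates: dict mapping member name -> normalized interval profile.
    """
    attempt = intervals(tap_times)
    best_name, best_dist = None, float("inf")
    for name, profile in templates.items():
        if len(profile) != len(attempt):
            continue  # wrong number of taps: reject outright
        dist = sum((a - b) ** 2 for a, b in zip(attempt, profile)) ** 0.5
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None
```

The same rhythm tapped slower or faster still matches (thanks to normalization), while a different rhythm with the same number of taps is rejected; a real system would of course need per-user variability modeling rather than a fixed threshold.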

The Tan paper, along with one by Yun Huang, called out an important point I’ve been thinking about: how the way we frame problems shapes our ability to work on them and the impact we might have. The main problem in the Tan paper, for instance, wasn’t a security problem: whether people can reliably detect differences between a reference picture or text and a communicated one is a perception and cognition problem. They hadn’t really thought about it this way, and it might have been productive to get a cognitive psychologist in on this to help design the representations, the comparison interaction, or both. [18] Yun’s talk was about leveraging diverse abilities in crowds to support video captioning. The emphasis in the talk was on solving the video captioning problem, and it was a reasonable talk and approach: people with different levels of hearing and English fluency tend on average to do better at different captioning tasks, so divvy the tasks up appropriately. For me, though, the general problem of developing good systems that maximize people’s ability to contribute is the more interesting bit, and a focus on that aspect might have made the talk more memorable. It might also have changed the methods from ones where people were binned fairly coarsely to ones where people’s actual behaviors were observed and used to maximize outcomes. [20]
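The behavior-based alternative I have in mind can be sketched simply. This is my own illustration, not Huang et al.’s actual pipeline: instead of binning workers up front by self-reported hearing or fluency, keep a running accuracy score per worker and task type, and route each new task to whoever has actually performed best at that type.

```python
from collections import defaultdict

# Illustrative sketch (my own, not the paper's system) of routing crowd
# subtasks by observed performance rather than coarse a-priori bins.

class TaskRouter:
    def __init__(self):
        # (worker, task_type) -> [correct_count, attempted_count]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, worker, task_type, correct):
        """Update a worker's history after their output is judged."""
        s = self.stats[(worker, task_type)]
        s[0] += int(correct)
        s[1] += 1

    def accuracy(self, worker, task_type):
        correct, attempted = self.stats[(worker, task_type)]
        # Laplace smoothing so unseen worker/task pairs aren't ruled out.
        return (correct + 1) / (attempted + 2)

    def assign(self, task_type, available_workers):
        """Give the task to the available worker with the best track record."""
        return max(available_workers,
                   key=lambda w: self.accuracy(w, task_type))
```

The point of the sketch is the shift in framing: the bins emerge from observed behavior, so a worker who defies the demographic expectation still ends up doing the tasks they are actually good at.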

The Huang paper was part of a session on crowdsourcing where the first two papers invited plausibly-interesting parallels between crowdwork and other forms of work. Lynn Dombrowski talked about the problem of “wage theft”, i.e., low-income workers being systematically unpaid for work through employer practice or neglect. The paper was not itself about crowdwork, and in the talk there was some reasonable speculation about what technologies might do to support low-wage workers; still, it would have been useful to make explicit the implicit parallels to crowdwork platforms, where employer power and platform/legal policy increase these risks. [21] The second talk, by Ali Alkhatib, did make some explicit parallels between crowdwork and piecework. I was really happy that this talk did some definitional work (“crowdwork” is often used to mean everything from Wikipedia contribution to Turk to TaskRabbit), and I appreciated the laying out of the history of piecework [22] [23]. The talk was less clear about just how piecework should inform our thinking about crowdwork and other on-demand markets (there were some discussions of complexity that didn’t quite come through), but overall it was nice to see these papers trying to deconstruct work markets — and very relevant to NSF’s push on Work at the Human-Technology Frontier; see also a related Dear Colleague Letter soliciting workshops and research coordination networks on the topic.

Finally, I’d like to think about getting rid of conference keynotes. [24] In general I have pretty tepid responses to them, and the two I saw were no exception — especially frustrating since I thought both had promise but then left me a little empty. The Monday one, by Neri Oxman, started with a great premise: we’ve spent so much time thinking about parts and assembly, but HCI in general and the maker/fabrication/prototyping movement could really benefit from thinking about materials and form instead (including ones that are inspired by natural forms). I was excited to hear some deep thoughts about this, but the talk itself was more a portfolio of a lot of visually appealing projects without enough synthesis or useful takeaways for my taste. [26] The Wednesday one, by Wael Ghonim, had the key point that we need to take seriously the values that algorithms promote and design them to promote the values we care about. That’s a point I can get behind, but the talk was much too much about the problem, which I think this audience already has some sense of, and didn’t offer many concrete thoughts on ways forward: how might Quora or Facebook or Google News restructure algorithms and interactions to be better? [27] Even wrong or incomplete speculations would, I think, have gotten people’s juices flowing.

And that is most of what I have to say about CHI this year (plus this post is impossibly long), so I’ll stop. It was big fun and I want to thank the organizers, sponsors, authors, and other participants for making it possible, and I imagine I’ll be back next year.

# 30 #

[1] I encourage other folks to write similar reports to call attention to things they liked at the conference. Asking people to pay attention to your own stuff isn’t bad (I wrote a note asking people to read this, mea culpa!), but there’s real personal, relational, and community value in highlighting good stuff from other people.

[2] The views expressed in this post are solely my own and do not represent those of my Foundational overlords.

[3] Barry Brown and Eric Laurier. 2017. The Trouble with Autopilots: Assisted and Autonomous Driving on the Social Road. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 416-429. DOI: https://doi.org/10.1145/3025453.3025462

[4] Huiyuan Zhou, Aisha Edrah, Bonnie MacKay, and Derek Reilly. 2017. Block Party: Synchronized Planning and Navigation Views for Neighbourhood Expeditions. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 1702-1713. DOI: https://doi.org/10.1145/3025453.3026035

[5] Nancy Smith, Shaowen Bardzell, and Jeffrey Bardzell. 2017. Designing for Cohabitation: Naturecultures, Hybrids, and Decentering the Human in Design. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 1714-1725. DOI: https://doi.org/10.1145/3025453.3025948

[6] Systems such as recommender systems and other filtering technologies, or behavioral support and persuasive technologies, or machine learning decision-making and interactive agents, should be thinking a lot more about agency than they are. If I were ejected back into the research world tomorrow, I’m pretty sure that thinking about how to better define and reason about agency in both design processes and algorithms would be my big next research direction.

[7] I’ve always had a soft spot for assistive technology work, though have always been afraid to do it myself because I’m not sure I’d have the emotional chutzpah to work closely with folks who live with these impairments. This critical dementia framing is a useful counter to that.

[8] Amanda Lazar, Caroline Edasis, and Anne Marie Piper. 2017. Supporting People with Dementia in Digital Social Sharing. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 2149-2162. DOI: https://doi.org/10.1145/3025453.3025586

[9] Amanda Lazar, Caroline Edasis, and Anne Marie Piper. 2017. A Critical Lens on Dementia and Design in HCI. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 2175-2188. DOI: https://doi.org/10.1145/3025453.3025522

[10] Anthony Hornof, Haley Whitman, Marah Sutherland, Samuel Gerendasy, and Joanna McGrenere. 2017. Designing for the “Universe of One”: Personalized Interactive Media Systems for People with the Severe Cognitive Impairment Associated with Rett Syndrome. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 2137-2148. DOI: https://doi.org/10.1145/3025453.3025904

[11] Kellie Morrissey, Andrew Garbett, Peter Wright, Patrick Olivier, Edward Ian Jenkins, and Katie Brittain. 2017. Care and Connect: Exploring Dementia-Friendliness Through an Online Community Commissioning Platform. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 2163-2174. DOI: https://doi.org/10.1145/3025453.3025732

[12] Blase Ur’s talk about designing well-grounded and educational password meters was less wacky but quite solid. There’s a well-justified decision to estimate password strength using a relatively compact neural net that can run on the client; since those models are often hard to interpret, the meter provides educational explanations and suggestions using a rule-based password parser, and this ‘rationalization’ type of explanation can make a lot of sense. The experimental design was solid, and the finding that people created stronger but just as memorable passwords was nice, though at a slight cost in user satisfaction because the feedback imposed cognitive load. It was also one of the clearest and best-designed talks I’ve seen in a while. [13]
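The rule-based side of that design can be sketched in a few lines. This is my own toy illustration of the general idea, not the paper’s actual rule set or wording: the real meter scores passwords with a client-side neural net but explains its judgments with human-readable rules like these.

```python
import re

# Toy illustration of rule-based password feedback (my own rules and
# wording, not the Ur et al. meter's). Each rule pairs a predicate with
# the suggestion shown when the predicate fires.
RULES = [
    (lambda pw: len(pw) < 12,
     "Longer passwords are stronger; aim for 12+ characters."),
    (lambda pw: pw.isalpha() or pw.isdigit(),
     "Mix letters, digits, and symbols rather than using just one class."),
    (lambda pw: bool(re.search(r"(19|20)\d\d", pw)),
     "Avoid years, which attackers guess early."),
    (lambda pw: pw.lower() in {"password", "letmein", "qwerty"},
     "This is a commonly guessed password."),
]

def explain(password):
    """Return the list of suggestions whose rules fire for this password."""
    return [msg for rule, msg in RULES if rule(password)]
```

The appeal of the rationalization approach is visible even in this sketch: the neural net can be as opaque as it likes, because the user-facing advice comes from rules a person can read and act on.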

[13] Blase Ur, Felicia Alfieri, Maung Aung, Lujo Bauer, Nicolas Christin, Jessica Colnago, Lorrie Faith Cranor, Henry Dixon, Pardis Emami Naeini, Hana Habib, Noah Johnson, and William Melicher. 2017. Design and Evaluation of a Data-Driven Password Meter. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 3775-3786. DOI: https://doi.org/10.1145/3025453.3026050

[14] Yomna Abdelrahman, Mohamed Khamis, Stefan Schneegass, and Florian Alt. 2017. Stay Cool! Understanding Thermal Attacks on Mobile-based User Authentication. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 3751-3763. DOI: https://doi.org/10.1145/3025453.3025461

[15] Thinking about usable privacy and security above the level of individuals but below the level of large organizations is one of former NSF/SaTC program officer Heng Xu’s big pushes, a good one I think.

[16] Sauvik Das, Gierad Laput, Chris Harrison, and Jason I. Hong. 2017. Thumprint: Socially-Inclusive Local Group Authentication Through Shared Secret Knocks. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 3764-3774. DOI: https://doi.org/10.1145/3025453.3025991

[17] Joshua Tan, Lujo Bauer, Joseph Bonneau, Lorrie Faith Cranor, Jeremy Thomas, and Blase Ur. 2017. Can Unicorns Help Users Compare Crypto Key Fingerprints?. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 3787-3798. DOI: https://doi.org/10.1145/3025453.3025733

[18] There’s a pernicious problem in CHI (and, many other disciplines) about not effectively engaging other domains. Joe Marshall had an alt.chi talk (which I did not see) about this [19], and Liz Murnane focused in on it for her dissertation. I’ll add one observation to this, which is that a number of our favorite methods (including grounded theory, user centered design, and machine learning) are often badly applied in ways that encourage us to ignore what is already known in our own and other fields, which in turn limits our ability to advance the conversation. Hopefully there will be a useful blog post about this down the road.

[19] Joe Marshall, Conor Linehan, Jocelyn C. Spence, and Stefan Rennick Egglestone. 2017. A Little Respect: Four Case Studies of HCI’s Disregard for Other Disciplines. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’17). ACM, New York, NY, USA, 848-857. DOI: https://doi.org/10.1145/3027063.3052752

[20] Yun Huang, Yifeng Huang, Na Xue, and Jeffrey P. Bigham. 2017. Leveraging Complementary Contributions of Different Workers for Efficient Crowdsourcing of Video Captions. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 4617-4626. DOI: https://doi.org/10.1145/3025453.3026032

[21] Lynn Dombrowski, Adriana Alvarado Garcia, and Jessica Despard. 2017. Low-Wage Precarious Workers’ Sociotechnical Practices Working Towards Addressing Wage Theft. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 4585-4598. DOI: https://doi.org/10.1145/3025453.3025633

[22] I was, in fact, essentially a pieceworker for about 3 years during and after undergrad, working for a bank typing dollar amounts onto checks just as fast and accurately as I could and getting my hourly rate set by my typing rate. I’ve also seen some amount of wage theft as an hourly employee at a wide variety of jobs (3 years at McDonalds, 6 months as a dishwasher, another 6 as a weekend night auditor at a hotel, 4 months taking phone orders for pizza, credit cards, and most incongruously given how little I knew/know about lingerie, Victoria’s Secret).

[23] Ali Alkhatib, Michael S. Bernstein, and Margaret Levi. 2017. Examining Crowd Work and Gig Work Through The Historical Lens of Piecework. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 4599-4616. DOI: https://doi.org/10.1145/3025453.3025974

[24] Maybe one could start the average conference instead with a welcome and a big poster session where everyone started interacting with and meeting each other right away. [25]

[25] I do realize this means less drinking at the poster sessions.

[26] This talk style of too many projects, not enough synthesis tends to be more common with people who are either (a) from portfolio-oriented disciplines, where I think this is a little more of a norm, or (b) senior folks who have done a lot of work and are more of a mind to show breadth than to carve out a deep path through it. Both are hard on talk consumers, who could really use the speakers’ help in carving out the takeaways. It’ll be interesting to see if I inflict the same pain as I get (even) older.

[27] There were also some inconsistencies here around paternalism, values, and agency: there’s a delicate balancing act between not wanting platforms to arbitrate truth but also wanting them to encourage discussions that are “objective”. This goes back to the need to think about agency and whose values are being supported. I’ll also point out that focusing on “fake news” or “misinformation” risks leading us toward positions that if we just find the true news and mitigate misinformation, everything is going to be All Better. Unlikely. These stories take the form of news, but what they’re really doing is expressing and reinforcing values, claiming and recruiting group membership, and defending friends and attacking opponents. Serious work in this space is going to have to engage with the idea that not all policy discourse or political or personal values are grounded in fact-based deliberation.
