DanCo’s idiosyncratic CSCW 2019 trip report

With the semester winding down and a little time in my pocket, I thought it would be a good time to write up some of my [1] favorite talks from this year's CSCW; it's always nice to give a little love around the holidays. This is not a full trip report and I saw plenty of fun people and stuff, but I wanted to call out a few things that I particularly liked and that you might like too, if you are kind of like me. I'll go in roughly chronological order, and sorry if I list a first author instead of a speaker (I usually didn't note speaker names).

There were several good talks in the Moderation I Monday session, but I particularly liked Eshwar Chandrasekharan's talk about their Crossmod paper. My high level takeaway was that it gives moderators tools to get decisions/suggestions that align with specific representative communities (e.g., our community is like this one, so let's learn from their moderation decisions), and/or broad consensus moderation across a number of communities. It looks like it would do a nice job of helping people balance global and local norms in a collection of subcommunities, as well as pick exemplars of the local norms they'd like to have.

Chandrasekharan, E., Gandhi, C., Mustelier, M. W., & Gilbert, E. (2019). Crossmod: A Cross-Community Learning-based System to Assist Reddit Moderators. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 174. [ACM DL] [PDF]

I got to most of the Gender, Identity, and Sexuality session; for me, Morgan Klaus Scheuerman’s talk on face-based gender classification was pretty interesting. In particular, it made me think about what classification algorithms and systems that use them should do when categories are blurry, evolving, contested. The presentation of this in the motivating case of gender was thoughtful and the main presented design recommendation (maybe object recognizers should focus on recognizing objects rather than inferring gender) made sense to me. I think there’s also a much broader space for thinking about the social construction of category boundaries and definitions; the talk reminded me a little of Sen et al.’s CSCW paper on cultural communities and algorithmic gold standards [ACM DL] and Feinberg et al.’s CHI paper around critical design and database taxonomies [ACM DL].

Scheuerman, M. K., Paul, J. M., & Brubaker, J. R. (2019). How Computers See Gender: An Evaluation of Gender Classification in Commercial Facial Analysis Services. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 144. [ACM DL] [PDF]

The Social Support and Intervention session at the end of Monday was good fun; Emily Harburg's talk on their CheerOn system/paper was especially nice for me. The high level idea was to marshal emotional and knowledge support for project teams trying to make progress on sometimes ill-defined and often frustrating pieces of problems. This resonated with me because of a related project, Goalmometer [2], and I liked the idea of getting expert/experienced folks to "watch" teams and encourage them at tough times. It felt like a super-natural fit for MOOCs and similar online learning situations, where the population from iteration N might become a valuable resource for iteration N+1, and the presentation itself was really thoughtful on both the design and deployment aspects.

Harburg, E., Lewis, D. R., Easterday, M., & Gerber, E. M. (2018). CheerOn: Facilitating Online Social Support for Novice Project-Based Learning Teams. ACM Transactions on Computer-Human Interaction (TOCHI), 25(6), 32. [ACM DL] [PDF]

On Tuesday I didn't get to see that much because I was in the Grouplens paper session and the town hall for much of the day. The morning Protest and Participation session had several fun things; probably the one that was most striking (but also perhaps a bit depressing) was Samantha McDonald's talk about how Congress's customer-management-like systems for communicating with constituents lead to a kind of flat, performative, meaningless style of responding to citizens [3].

McDonald, S., & Mazmanian, M. (2019). Information Materialities of Citizen Communication in the US Congress. [ACM DL] [PDF]

I was also in the late afternoon Language and Expressivity II session [4], where I really enjoyed [5] the first talk, which Yubo Kou gave on their paper about impression management through image use in conversation, focused on Chinese users and framed, interestingly, through the lens of Confucianism. The high level conclusions about how people used and interpreted imagery depending on their relationships might not have been that different if they'd used a more general status and power framing instead of Confucianism, but I had a nice chat with Yubo afterward about this and did appreciate the use of cultural frameworks that match the phenomena of interest.

Wang, Y., Li, Y., Gui, X., Kou, Y., & Liu, F. (2019). Culturally-Embedded Visual Literacy: A Study of Impression Management via Emoticon, Emoji, Sticker, and Meme on Social Media in China. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 68. [ACM DL]

Onward to Wednesday, where I started in the shortest-title session ever, "AI". I liked both of the last two talks quite a lot. Richmond Wong's talk about their paper on how different communities/disciplines talk about fairness in the context of AI was sweet, with a nice (if slightly sad) parallel-play kind of description of communities that, as Brian McInnis would say, "talk past" rather than "talk with" each other [6], and some work to lay out analytic tools for focusing on particular dimensions of fairness that might be useful for cross-disciplinary work.

Mulligan, D. K., Kroll, J. A., Kohli, N., & Wong, R. Y. (2019). This Thing Called Fairness: Disciplinary Confusion Realizing a Value in Technology. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 119. [ACM DL] [PDF]

But for me, Carrie Cai's talk about their paper on how doctors make sense of AI-based assistants stole the show. The most notable bit for me was doctors' tendency to think about the system in terms of the properties they use to assess other medical advice, suggestions, and diagnoses: things like conservativeness of diagnoses; knowledge of the underlying physiology; strengths and weaknesses around particular symptoms, elements of physiology, or diagnoses; and "clinical taste" in terms of the background and training of the doctors used to train the system. I came away even more convinced that we need to be designing systems with AI components with more attention to the context of use (versus the algorithm itself, where most of the attention seems to go). Of the talks I saw, this was my best paper of the conference.

Cai, C. J., Winter, S., Steiner, D., Wilcox, L., & Terry, M. (2019). Hello AI: Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 104. [ACM DL]

And, I think I will leave it there. Hopefully this admittedly idiosyncratic report will still be interesting for folks to read and will help some papers I liked get the attention I think they deserve.

-30-

[1] Although I don’t have to officially NSF-disclaim since I left NSF a few months before the conference, I had enough NSF-based connections to some work discussed at the conference that I’ll still point out that these opinions entirely represent my own thinking and not that of my former NSF overlords.

[2] Inspired by the “thesis thermometer” my labmate Sara Drenner gave me way back in PhD-land, the idea was to let people self-declare progress on a project without having to hierarchically pre-decompose a problem into smaller tasks to check off. Done badly, this becomes bullshit estimating, but done well, it might let people reflect on what progress means in the context of other things going on beyond the Gantt chart. A lot of student teams did design work around versions of this, and it never quite escaped the design/prototyping stage, but it still strikes me as an important problem.

[3] Not unlike, unfortunately, the public reading of talking points on both sides that’s taking the place of substantive debate in many of Congress’s more recent general communications with the public.

[4] I was there as a co-author on Hajin Lim's paper around her field deployment of the SenseTrans system for annotating other-language posts with NLP-based outputs to support cross-lingual sensemaking and social connection. There were some interesting general bits about how people rely on and trust AI support for communication, depending on how much they already know about the person, the language, and the system, and some specific mostly-positive impacts of this kind of system on people's social interaction.

Lim, H., Cosley, D., & Fussell, S. R. (2019). How Emotional and Contextual Annotations Involve in Sensemaking Processes of Foreign Language Social Media Posts. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 69. [ACM DL]

[5] Despite asking the stupidest question I have asked in some time. But, still, you should ask questions.

[6] I had a sense of this when I went to HCOMP 2016, where it felt like there were several different communities that happened to be studying the same high level topic, without a ton of engagement between them. Not to pick on HCOMP in particular, as it can happen in lots of interdisciplinary places, but conferences in particular should be places where we're trying to help make engagement across communities happen.
