On engagement

My girlfriend, alas, will be disappointed by the sense of “engagement” I’m about to speak of.

I had an awkward conversation at CSCW where I completely failed to connect with someone I was honestly trying to engage. This made me sad, and got me thinking about other failures to engage I’ve been involved in.

  • I was listening to a couple of women talk about the lack of women in computer science. I’d had some conversations with women students about their experiences and frustrations when I taught at James Madison. I started to share one, but was cut off by one of them, who told me that because I wasn’t a woman I could never understand and wasn’t qualified to have opinions on the topic.
  • At GROUP 2007, I was chatting with an anthropologist at a dinner who told me about their work, which was pretty fun, then asked what I do. I replied that I use theory to design systems, to which they replied “You don’t use theory, you make theory” and turned away. End of conversation.
  • I’ve also been on the giving end of Fail To Engage. The fact that I use theory is funny, because I used to have trouble taking social sciences seriously: people are complicated, and the theories and models felt so limited that I didn’t see their value until I started working with social science folks through the CommunityLab project between CMU, Michigan, and Minnesota. But I had a number of conversations early on where I probably sounded like a total engineering shit. [1]

Most likely, you can conjure up memories of times when you’ve been on both sides of this, and chances are they’re not great memories. So, my main point is that, just as asking questions is a kind of academic love, the will to engage is academic love as well.

This is not a Lieberman- or Dourish-style call for people from different disciplinary, methodological, or theoretical backgrounds to lay down their arms and embrace alternate perspectives. [2] I do think being open to this is generally good, but you have to call your shots when deciding on extended, serious engagement with other perspectives or disciplines. It can make you uncomfortable, it takes time to learn the lingo, your home tribe may not value your expeditions, and you can’t afford to engage with everything. [3]

In the context of a single conversation, though, refusing to engage is probably a net loss, especially with someone who is reaching toward the things you care about. Engaging in these contexts is a very practical kind of academic love that gives you a chance to spread your work, interact with people, and connect to ideas you otherwise might not. Those people and connections might in turn propagate your ideas into communities that might not otherwise see them ($1 to Ron Burt).

This willingness to engage is a hallmark among people I deeply respect in academia. Jon Kleinberg has a lot on his plate, but when you do talk to him, you know that he’s engaged with you. Phoebe Sengers does effective critical work around technology in part because she has a real empathy for and understanding of the things she critiques, and she’s happy to engage with people and ideas across the spectrum. Helen Nissenbaum has impact across intellectual communities because she’s willing to engage with folks who speak other languages.

We could do worse than to emulate them. [4]

  1. This probably still happens.
  2. For a parallel, funny-but-sad discussion of divisions between various races, classes, and creeds, see the lyrics for Tom Lehrer’s National Brotherhood Week.
  3. As Steven Wright once said, “You can’t have everything. Where would you put it?”.
  4. Plus, every time you blow someone off, God kills a kitten.

Writing more useful systems papers, maybe

A couple of years ago I wrote a little advice for systems paper writers for the CHI 2011 Facebook group; with the CSCW deadline coming up and CHI closer than you think in the rearview mirror, it seems useful to resurrect it here. It’s imperfect advice, and others will disagree; I’m hoping they’ll add some of their own bits here.

The tl;dr: Tell readers why your work matters, not what you did, and think about what they can take away. Bonus list of crash landings at the end, along with snarky but useful footnotes and pointers to very useful articles.

The long form: Cliff asked me (in 2010) to write a little bit about what makes for a good paper with a significant system component in CHI. Henry Lieberman’s The Tyranny of Evaluation, Saul Greenberg and Bill Buxton’s Usability evaluation considered harmful (some of the time), James Landay’s discussion of the pain of systems work, and CHI’s own guide to successful archive submissions have said a fair amount already, and you should go read those, too.

But, as someone who’s reviewed dozens of papers a year for the last 4 years, I’ll cheerfully weigh in on what I look for as a reviewer, tempered by things I’ve overheard in PC meetings. You’ll have to forgive all the footnotes; I just read a long law review article and they’re all the rage in that discipline. This is also more collected thoughts than careful treatise. So, caveat emptor.

First, your system, itself, is probably the least important, least interesting thing in your paper [0]. A lot of papers read like a story about “what I did on my summer vacation” [1] (literally, often, because of the rhythm of internships and HCI conference deadlines [2]). Here’s my idea. Here’s what I did. Isn’t it cool? And it’s natural to focus on yourself and what you did, because you know it best.

But people didn’t care about your summer vacation when you were talking about it in fifth grade, and they don’t care about it now, unless it matters to the field or to society at large, and unless you did it well [3]. To tell that story, “what I did” is like the skeleton — and like most skeletons, it’s a little creepy if you see it all by itself.

Instead, think of the raw work as the basis for talking about things that other people can learn from. Sometimes, people can learn from “how I did it”. In some scientific traditions, providing enough detail that someone could implement approximately the same system if they wanted to replicate the work is really important [4]. Especially if the mechanics of the implementation are novel, or informative because they illustrate problems or approaches that might have broad applicability, then they become interesting and worth spending precious writing time, paper space, and reader attention on. But if the guts are mostly things a competent senior could do, they’re not the important part.

The real action is on “why”: why what you did matters [5]. Many aspects of this are outlined in the CHI guide to successful papers and the reviewing guide, both of which you should read every year [6]. These include demonstrating a contribution, originality, validity of the work, and offering benefits to the reader. With design/system papers, there’s such a nice paragraph there that I will just quote it:

“[Reviewers] often criticize authors for conducting studies without adequate theoretical basis, or for not providing enough evidence or sound reasoning for claims. A further concern is lack of justification for design choices and not explaining why certain design features have been included. In summary, you should explain not only what you did, but also why you did it, so that readers (including reviewers) can be convinced that you made appropriate choices. Explaining your choices can also stimulate more research by helping others see alternative approaches.”

One way to think about that little nugget is that people want to understand why this system: why your system is arguably “right”, or “better”, or “interesting”, or “useful”, in the context of your problems and your contributions. A solid evaluation of the system/design/technique in use through some combination of usability testing, lab studies, field studies, and longitudinal deployment (the right technique(s) should be chosen based on your problems and contributions), showing its potential or actual value, is one way to do that [7].

But it’s only one way. You can use theories and empirical work about specific designs, about individuals’ capabilities and goals and values, about psychology or sociology or economics or S&TS or [insert your favorite discipline here] to show that your problem is important and your system is a reasonable response to that problem. You can also use these, as well as making parallels to other problems and fields, to argue that your system has a greater, general value, and that many researchers can benefit from it. You can argue that a system that successfully does X might improve lives or the world. But make the case [8].

Many of the same techniques can also be used to motivate specific design choices. You can reason about alternate systems, and their choices, and consider alternate choices [9] and why they might be better. Don’t limit yourself to the academic literature, either; there are a lot of non-research systems that work just fine, and comparing to/critiquing/borrowing from/expanding on those designs is working smarter, not harder. You can appeal to your own experience in the past, to iterations you did along the way, to user studies you or other people did to give you insight into the design. But again, as a consumer, I want to know why this system and not that one; the considerations you had and tradeoffs you made are money in my bank as a designer and as a researcher [10].

So, that’s my rant, for now; hopefully it’s useful. Remember this is my perspective, not a universal guide to CHI success (reread the guide to successful submissions!), and that other, really smart people will disagree with me on some points. But I think most reviewers would agree with the following list of crash landings:

  • Fail to motivate why the problem is important.
  • Don’t tell me why your system is a plausible approach for that problem.
  • Do a cursory job of talking about related work that doesn’t help me understand how yours fits in and what you took from it.
  • Spend your time on minutiae of implementation that don’t matter.
  • Present your system as though, like Athena, it sprang fully formed from the head of Zeus.
  • Avoid showing me what I can learn from your choices, or talking about the general issues other designers might face.
  • Perform a bad evaluation, and/or fail to provide other kinds of justification.
  • Make me guess what your contributions are, and how one might apply your work in other contexts.
  • Write badly, confusingly, disrespectfully of readers’ time.

Don’t crash land. You might not get in anyway; sometimes the work is not ready, sometimes it gets bad reviews, or bad reviewers, or tired or grumpy reviewers who make a mistake. And sometimes you’re just unlucky. But you’ll do better, more useful, and more publishable work by thinking about how to communicate “why” and not just “what”, and thinking harder about the reader and not the writer of the paper. And that’s a good thing.

— footnotes —

[0] Unless you have a truly bad evaluation, in which case that is the least interesting thing in your paper. The general sentiment at the UIST 2009 PC meeting was “I’d rather see a paper with no evaluation than a bad evaluation”.

[1] And let’s not get started on “what was done on the summer vacation provided to me”: the passive-voice, allegedly dispassionate, detached, boring writing style that papers affect and that makes readers die a little inside. Read The Elements of Style, or On Writing Well, and take it to heart. Readers will thank you, and they will probably feel a little better about your paper as well.

[2] Jonathan Grudin, among others, has been advocating for a move toward journal publication over the last few years, to get at deeper, better research. There is practical career value to doing some journal-based work, too; I didn’t get a job out of grad school in part because I didn’t have any journal papers — direct quote from a respected mentor. I’ve also gotten advice from several people that even if your department, or the field, doesn’t care about journal publication, your school or college might. Do you really want your tenure application to go down in flames because of an angry group of civil engineers or economists or historians? So journals seem like a good idea, although please don’t let that stop you from submitting to CHI along the way!

[3] People who have written NSF proposals should have heard “intellectual merit” and “broader impact” when they read that sentence.

[4] Based on CHI reviews and PC meetings I’ve seen and done, as well as the explicit statement in the Guide to Successful Papers about originality, replication is much more likely to succeed if there’s a novelty component as well. Jeff Heer and Michael Bostock’s CHI 2010 replication of some classic information visualization perception studies using Mechanical Turk to solicit participants is one such example.

[5] If you can’t clearly articulate this, you may be doing it wrong. Go read Richard Hamming’s You and Your Research immediately. Crotchety, but valuable. Then, as an encore, start working your way through Phil Agre’s Networking on the Network about how to manage the professional side of the career. It’s not perfect, but valuable.

[6] And if you’re not reviewing, shame on you, you freeloader. The average submission gets attention from 5 people, so the authors of every submission should conspire to do at least 5 reviews.

[7] There is an unfortunate perception that to get a systems/design paper through the CHI reviewing process you need an evaluation. Not true. You need a good evaluation. 🙂 Or a really good non-evaluative justification. Both is even better.

[8] Usually in the introduction. You’d really like your readers to want to read your paper, so convincing them it’s important and valuable work should start early.

[9] That is, don’t just list related work, but use it, respectfully, to talk about your system, your choices, and why what you did is novel, interesting, and valuable. And try to avoid the style you sometimes see in papers where the related work goes at the end and is mostly used to talk about limitations with that work that the current paper heroically overcomes. It’s awfully easy for that to sound like “look how dumb they are and how smart I am”, and that’s a big turn-off to reviewers, especially if (as is often the case) they did some of that work.

[10] You should be writing this stuff down as you go along, so you can talk about it later, and keeping every version of the system that matters, even a little bit, for helping people — including yourself — learn more from your work. Bonus points if you tell me about your failures, and what didn’t work. I realize it’s scary to talk about the making of the sausage, and it’s hard to get an epic fail published, but things that didn’t work along the way are useful.

Good luck, fellow travelers. And please, add your own thoughts.

CSCW Mini trip report

The trip report is a bit of a lost art, but I did want to capture a few things I liked at CSCW and share them with folks around me. It’s roughly chronological and focuses on sessions, and is only one of many paths through the conference (I wish I’d had about 3 of me; there were a lot of things I’d like to have seen); hopefully other folks will talk about their own paths in other spaces.

Both plenaries were big fun: good topics, engaging speakers.  Ron Burt had a new wrinkle around the idea of bridging ties in networks, calling out the importance of how the network is built. In particular, bridging ties that come from being embedded in a specific community for a while appear to be most valuable: the most successful performers oscillate between tight, local clusters within communities (in which trust is built and shit gets done) and broad connections across communities. It was a nice layer of nuance on the strong ties/weak ties story. Moira Burke and Bob Kraut’s paper analyzing the effect of strong and weak ties in Facebook added another, finding that talking with strong ties adds social support but also social stress — and was a better predictor of finding work than talking to weak ties.

Ron also talked about the value of encountering information outside your normal circles, so I took that at face value and went to the Gesture and Touch session. Richard Harper and Helena Mentis gave a fun talk about how the gross, exaggerated motions that sometimes are needed to connect with Kinect led people to a playful, “carnival” attitude. I also liked Svetlana Yarosh, et al.’s paper about ShareTable, designed for parent-child communication across divorced households; they were sensitive both to the issues of divorced families and the insight that families need to do things together at least as much as talk together. And all four were nice, effective talks; in general, the talks I saw at CSCW were good, better than average for conference talks I’ve seen.

The filter bubble panel was fine, though it was so focused on political discourse, and particularly the U.S. conservative/liberal split, that I wondered how generalizable the stories would be to other contexts and information domains. There was some useful theoretical grounding that hopefully helps with that, but I did wish the discussion had been broader. I also really wish I’d seen the Making the World a Better Place session, but I wound up spending that helping set up the demos session instead.

Tuesday I was more in my own space, and, as Ron Burt would predict, I got less stunningly new information, though it was still fun. At the “Practices in Social Networks” session I was especially curious about the Manya Sleeper, et al. talk about self-censorship in Facebook because of Xuan Zhao’s work around self-curation in Timeline. It sounded like they were trying to figure out how much good something like Google+ circles would do if they were zero-cost, and I do think the idea of thinking hard about audience in social media is going to be important. It was a tale of two halves: the question of self-censorship focused on the types of information while the question of making it more share-able felt more focused on audiences, and there wasn’t as much connection as I’d hoped between the two. Maybe in the paper. Then Eric Gilbert presented the shortest paper in CSCW, on underprovision of attention to new submissions in Reddit. It was great to see someone studying Reddit (Pinterest is also ripe for colonization), and his method of counting multiply-submitted items where the n>1’th submission made it to the front page (with the implication that the earlier submissions had been ignored) was clever. The Cliff Lampe, et al. paper (presented by Jessica Vitak) also did a nice job of pointing out that use is not binary, and of looking at folks who are not college students using Facebook, so kudos there as well.
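The counting idea behind that Reddit result can be sketched in a few lines. This is my own toy reconstruction, not the paper’s actual code: the field names and data are made up, and the real analysis surely handles many more wrinkles.

```python
# Toy sketch: count items submitted more than once where a later (n > 1)
# submission reached the front page, implying the earlier submission(s)
# were overlooked. Data and field names are illustrative, not from the paper.
from collections import defaultdict

submissions = [
    # (item_url, submission_time, hit_front_page)
    ("http://example.com/a", 1, False),
    ("http://example.com/a", 2, True),   # resubmission succeeded
    ("http://example.com/b", 1, True),   # first try succeeded
    ("http://example.com/c", 1, False),  # never made it
]

def count_overlooked(subs):
    """Count items whose first submission missed the front page
    but a later resubmission made it."""
    by_item = defaultdict(list)
    for url, t, hit in subs:
        by_item[url].append((t, hit))
    overlooked = 0
    for attempts in by_item.values():
        attempts.sort()  # chronological order
        if len(attempts) > 1 and not attempts[0][1] and any(h for _, h in attempts[1:]):
            overlooked += 1
    return overlooked

print(count_overlooked(submissions))  # only item "a" qualifies -> 1
```

Each such item is evidence that front-page success depends on timing and luck as well as content, which is the underprovision-of-attention point.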

We had a couple of papers in the “Not Lost in Translation” session so I spent some time there. Mary and Hao-Chuan did a nice job overall; practice talks paid off handsomely in both cases. Hao-Chuan’s paper observed that bilingual speakers allow us to design asymmetrical systems that selectively apply machine translation (think Chinese native-English second language speakers generating turns in Chinese that are translated for an English-only partner, but getting the English statements from their partner un-translated), allowing us to leverage their bilingual abilities for better outcomes. The idea that cultural difference should be a resource rather than a barrier is often raised, and this is one example of how to do it. I also liked Naomi Yamashita, et al.’s discussion of transmission lag in second language contexts. Usually CSCW systems stamp out lag wherever it’s found, but here in small doses it improved group outcomes. In large doses, it led to interaction chaos among native speakers who wound up talking over each other unawares, but the idea of lag as a resource was also cool.

Stuart Geiger and Aaron Halfaker’s talk about how different ways to measure participation in Wikipedia lead to different results was fun as well: great talk and cool point. The high-level idea was to look at time spent on Wikipedia and measure in labor hours, rather than edit counts. The argument was that labor hours are a more natural way to think of work outputs; a little Marxist, but interesting. This has all the problems around estimating session times that weblog analysis has, and doesn’t account well for tool efficiencies (think using Huggle to revert vandals and add warnings versus doing it by hand), but it was stimulating and they were thoughtful.
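To make the measurement idea concrete, here is a minimal sketch of session-based labor estimation: group each editor’s edits into sessions (edits closer together than some cutoff belong to the same session) and sum session durations. The one-hour cutoff and the sample data are my own illustrative assumptions, not values from the talk.

```python
# Toy sketch: estimate labor time from edit timestamps by grouping each
# editor's edits into sessions and summing session durations.
from collections import defaultdict

SESSION_GAP = 3600  # seconds; a longer gap starts a new session (assumed cutoff)

def labor_seconds(edits):
    """edits: iterable of (editor, unix_timestamp).
    Returns total estimated active seconds across all editors."""
    by_editor = defaultdict(list)
    for editor, ts in edits:
        by_editor[editor].append(ts)
    total = 0
    for stamps in by_editor.values():
        stamps.sort()
        session_start = prev = stamps[0]
        for ts in stamps[1:]:
            if ts - prev > SESSION_GAP:
                total += prev - session_start  # close out the session
                session_start = ts
            prev = ts
        total += prev - session_start  # close the final session
    return total

edits = [("alice", 0), ("alice", 600), ("alice", 10000),
         ("bob", 0), ("bob", 1200)]
print(labor_seconds(edits) / 3600)  # estimated labor hours
```

Note that a session containing a single edit contributes zero time under this scheme, which is one flavor of the session-estimation problems mentioned above.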

In the Controversy, Arguments, Rule Breakers, and Politics session, the R. Kelly Garrett and Brian Weeks paper about real-time corrections to political misinformation online was relevant to some of our work around coaching commenters in a discussion forum to be better posters: you have to give information in ways that don’t trigger defensive reactions (“Ego Threat”, they called it). Likewise, the Ben Towne, et al. paper that studied how seeing controversy and deliberation around an artifact would affect people’s perceptions of artifact quality looked fun. In general, seeing controversy lowered perceived quality, though I wondered if folks more embedded in the community norms would be more comfortable seeing the disagreements. At this point I kind of ran out of gas so I took off. I probably should have done this earlier; you should spend at least a little bit of your time getting out of dodge to talk to people, experience the location, have fun, and stay sane, but there were so many cool papers that I really wanted to try to stick with them.

On Wednesday, the Future of Crowd Work paper by Niki Kittur, et al. was intriguing: what would it take for crowd work to be something you’d be happy for your kid to grow up doing? They were proposing a move away from a faceless, fluid assembly line of repetitive tasks toward a way of organizing crowd work that would support advancement, dignity, and fairness for workers while broadening the kinds of work that might be done. They had a near-infinite supply of questions, and the 17-page paper is probably worth a read. My main concern was that it still felt like a two-tier system: researchers and organizations would create tasks for the workers. I don’t think that’s how they meant it, but it would be useful to avoid an “us and them” mentality.

As an alum, I also had to go see the Most Cited CSCW paper session about the original GroupLens work. It was fun, and slightly campy, and interesting to see what they thought they got right and got wrong. It’s always amazed me how that first trio of papers from MIT, Bell, and GroupLens anticipated so many of the issues that would arrive later (and a little sad that so much of the followup work addressed only algorithmic accuracy). I also saw last year’s CSCW talk where Leysia Palen and Beki Grinter talked about their 10-year-old paper about instant messaging in teen life, and I think there’s real value in this kind of look back.

Finally, at the closing plenary, Jascha Franklin-Hodge’s talk about the relentless (but somewhat disorganized and decentralized) use of A/B testing and data mining in political campaign messaging, and the value of serious thinking about user interfaces that make participation easier, also resonated well. He was thoughtful about the tension between getting the job done and generalizing results, and you could imagine interesting collaborations between academic researchers and political campaigns that could lead to insights around motivating participation. Studying the digital side of the campaign would be a fantastic ethnographic opportunity, too. My main reservation is that it’s not clear that good UI design should determine who gets elected 🙂 — but it was a nice way to close the conference.

So, that’s it. Left out are all the hallway conversations, the reconnecting with old friends, the meeting of new (lots of chats with grad students this time around, which was fun), the deliciously kitschy Buckhorn banquet, and all the other things that make the conference both intellectually and interpersonally stimulating. Overall, great conference and kudos to all the folks who put it together.