DanCo’s idiosyncratic CSCW 2019 trip report

With the semester winding down and a little time in my pocket, I thought it would be a good time to write up some of my [1] favorite talks from this year’s CSCW; it’s always nice to give a little love around the holidays. This is not a full trip report and I saw plenty of fun people and stuff, but I wanted to call out a few things that I particularly liked and that you might like too, if you are kind of like me. I’ll go in roughly chronological order, and sorry if I list a first author instead of a speaker (I usually didn’t note speaker names).

There were several good talks in Monday’s Moderation I session, but I particularly liked Eshwar Chandrasekharan’s talk about their Crossmod paper. My high-level takeaway was that it gives moderators tools to get decisions/suggestions that align with specific representative communities (e.g., our community is like this one, so let’s learn from their moderation decisions), and/or broad consensus moderation across a number of communities. It looks like it would do a nice job of helping people balance global and local norms in a collection of subcommunities, as well as pick exemplars of the local norms they’d like to have.

Chandrasekharan, E., Gandhi, C., Mustelier, M. W., & Gilbert, E. (2019). Crossmod: A Cross-Community Learning-based System to Assist Reddit Moderators. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 174. [ACM DL] [PDF]

I got to most of the Gender, Identity, and Sexuality session; for me, Morgan Klaus Scheuerman’s talk on face-based gender classification was pretty interesting. In particular, it made me think about what classification algorithms and systems that use them should do when categories are blurry, evolving, contested. The presentation of this in the motivating case of gender was thoughtful and the main presented design recommendation (maybe object recognizers should focus on recognizing objects rather than inferring gender) made sense to me. I think there’s also a much broader space for thinking about the social construction of category boundaries and definitions; the talk reminded me a little of Sen et al.’s CSCW paper on cultural communities and algorithmic gold standards [ACM DL] and Feinberg et al.’s CHI paper around critical design and database taxonomies [ACM DL].

Scheuerman, M. K., Paul, J. M., & Brubaker, J. R. (2019). How Computers See Gender: An Evaluation of Gender Classification in Commercial Facial Analysis Services. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 144. [ACM DL] [PDF].

The social support and intervention session at the end of Monday was good fun; Emily Harburg’s talk on their CheerOn system/paper was especially nice for me. The high-level idea was to marshal emotional and knowledge support for project teams trying to make progress on sometimes ill-defined and often frustrating pieces of problems. This resonated with me because of a related project, Goalmometer [2], and I liked the idea of getting expert/experienced folks to “watch” teams and encourage them at tough times. It felt like an especially natural fit for MOOCs and similar online learning situations, where the population from iteration N might become a valuable resource for iteration N+1, and the presentation itself was really thoughtful on both the design and deployment aspects.

Harburg, E., Lewis, D. R., Easterday, M., & Gerber, E. M. (2018). CheerOn: Facilitating Online Social Support for Novice Project-Based Learning Teams. ACM Transactions on Computer-Human Interaction (TOCHI), 25(6), 32. [ACM DL] [PDF]

On Tuesday I didn’t get to see that much because I was in the GroupLens paper session and the town hall for much of the day. The morning Protest and Participation session had several fun things; probably the one that was most striking (but also perhaps a bit depressing) was Samantha McDonald’s talk about how Congress’s customer-relationship-management-like systems for communicating with constituents lead to a flat, performative, meaningless style of responding to citizens [3].

McDonald, S., & Mazmanian, M. (2019). Information Materialities of Citizen Communication in the US Congress. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW). [ACM DL] [PDF]

I was also in the late afternoon Language and Expressivity II session [4], where I really enjoyed [5] the first talk, which Yubo Kou gave on their paper about impression management through image use in conversation, focused on Chinese users and, interestingly, framed through the lens of Confucianism. The high-level conclusions about how people used and interpreted imagery depending on their relationships might not have been that different with a more general status-and-power framing instead of Confucianism, but I had a nice chat with Yubo afterward about this and did appreciate the use of cultural frameworks that match the phenomena of interest.

Wang, Y., Li, Y., Gui, X., Kou, Y., & Liu, F. (2019). Culturally-Embedded Visual Literacy: A Study of Impression Management via Emoticon, Emoji, Sticker, and Meme on Social Media in China. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 68. [ACM DL]

Onward to Wednesday, where I started in the shortest-title session ever, “AI”. I liked both of the last two talks quite a lot. Richmond Wong’s talk about their paper on how different communities/disciplines talk about fairness in the context of AI was sweet, with a nice (if slightly sad) parallel-play kind of description of communities that, as Brian McInnis would say, “talk past” rather than “talk with” each other [6], and some work to lay out analytic tools for focusing on particular dimensions of fairness that might be useful for cross-disciplinary work.

Mulligan, D. K., Kroll, J. A., Kohli, N., & Wong, R. Y. (2019). This Thing Called Fairness: Disciplinary Confusion Realizing a Value in Technology. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 119. [ACM DL] [PDF]

But for me, Carrie Cai’s talk about their paper on how doctors make sense of AI-based assistants stole the show. The most notable bit for me was doctors’ tendency to think about the system in terms of the properties they use to assess other medical advice, suggestions, and diagnoses — things like conservativeness of diagnoses; knowledge of the underlying physiology; strengths and weaknesses around particular symptoms, elements of physiology, or diagnoses; and “clinical taste”, in terms of the background and training of the doctors used to train the system. I came away even more convinced that we need to design systems with AI components with more attention to the context of use (versus the algorithm itself, where it feels like most of the attention goes). Best paper of the conference for me, of those that I saw.

Cai, C. J., Winter, S., Steiner, D., Wilcox, L., & Terry, M. (2019). Hello AI: Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 104. [ACM DL]

And, I think I will leave it there. Hopefully this admittedly idiosyncratic report will still be interesting for folks to read, and will help get some papers I liked the attention I think they deserve.

#30#

[1] Although I don’t have to officially NSF-disclaim since I left NSF a few months before the conference, I had enough NSF-based connections to some work discussed at the conference that I’ll still point out that these opinions entirely represent my own thinking and not that of my former NSF overlords.

[2] Inspired by the “thesis thermometer” my labmate Sara Drenner gave me way back in PhD-land, the idea was to let people self-declare progress on a project without having to hierarchically pre-decompose a problem into smaller tasks to check off. Done badly, this becomes bullshit estimating, but done well, it might let people reflect on what progress means in the context of other things going on beyond the Gantt chart. A lot of student teams did design work around versions of this, and it never quite escaped the design/prototyping stage, but it still strikes me as an important problem.

[3] Not unlike, unfortunately, the public reading of talking points on both sides that’s taking the place of substantive debate in many of Congress’s more recent general communications with the public.

[4] I was there as a co-author on Hajin Lim’s paper about her field deployment of the SenseTrans system, which annotates other-language posts with NLP-based outputs to support cross-lingual sensemaking and social connection. There were some interesting general bits about how people rely on and trust AI support for communication, depending on how much they already know about the person, the language, and the system, and some specific, mostly positive impacts of this kind of system on people’s social interaction.

Lim, H., Cosley, D., & Fussell, S. R. (2019). How Emotional and Contextual Annotations Involve in Sensemaking Processes of Foreign Language Social Media Posts. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 69. [ACM DL]

[5] Despite asking the stupidest question I have asked in some time. But, still, you should ask questions.

[6] I had a sense of this at HCOMP 2016 when I went, where it felt like there were several different communities that happened to be studying the same high-level topic without a ton of engagement between them. Not to pick on HCOMP in particular, since this can happen in lots of interdisciplinary places, but conferences in particular should be places where we’re trying to help make engagement across communities happen.

SOUPS 2019 trip report

I wanted to go ahead and write up a trip report for SOUPS since I went on NSF’s dime and wanted them to get the nectar from it. (Standard disclaimer: this post is a personal reaction with my Cornell-researcher hat on, not the position of my NSF overlords, etc.)

I was sitting next to Heather Richter Lipford, the general chair, and she asked me what I thought of the conference. My overall reaction was that it was Pretty Good.

On the plus side:

  • Nice breadth of topics with good subdivision and theming in sessions. This was fun to see, and gave a nice combo of focus and variety. Even more could have been made of this structurally (say, by encouraging authors to reference/align their talks with others in their session), but even as-is it led to interesting thought packages.
  • I got a really nice sense of experimental rigor in lab and deployment studies and scale development work; sometimes conferences in the broad HCI space are a little more iffy in their quantitative methodology and analysis in ways that make psychologists sad, and I didn’t feel that here.
  • Regardless of method, the talks gave reasonably good descriptions of, and details of findings from, specific studies, generally clearly presented; I almost always had a good idea of what was going on and what the motivations and procedures were.
  • I also appreciate how many of the papers included supplemental materials used in the studies to support better evaluation of the work, reuse, and reproducibility; I’m a big fan.

But:

  • There wasn’t enough putting findings in the bigger picture of what’s already known from prior work in related contexts, which left a sense of the work not cumulating, reading instead like a series of one-off papers about, e.g., user attitudes toward specific problem/context X. (This is not just a SOUPS problem, but I’ve felt it here every time I’ve come, so I feel compelled to call it out.)
  • Design implications in general followed well enough from the findings, but they were rarely deep or creative, at least as presented in the talks, and many of them didn’t say much about the feasibility of implementing them given all the forces at play around privacy and security problems.
  • The lightning talks were a little up and down because there were so many different types, ranging from dissertation proposals to software presentations to provocations, and they tended to be a little more in “here’s what I’m doing” rather than “here’s what you’ll get from it” mode.

Now that I’ve given the high-level reaction, I’ll give blow-by-blow descriptions of, and sometimes personal reactions to, each of the sessions/talks. These are based largely on the talk content, with occasional chasing to the paper to fill in cracks. For folks who want to focus on the things I found most interesting, use your browser’s find feature to search for the string +++; this isn’t meant to be an indictment of other work, just an appreciation for the ones that resonated best with me.

It might be an interesting exercise to compare these to the actual paper abstracts, to both get the picture from the authors’ point of view and to think about the value of attending talks versus reading paper abstracts. Here’s a link to the full program for further reading; I also link to each full paper below.

Sunday night poster session

I read most of the posters and talked with maybe a third of the presenters, and on balance it was a pretty fun poster session; the list of poster topics and authors is worth a skim just to see the breadth of the topics, and includes links to the poster extended abstracts.

Probably the biggest theme was around developing security and privacy education content; a couple of other posters also stood out for me.

Monday Keynote

Jennifer Valentino-DeVries of the New York Times (and formerly the Wall Street Journal) gave the opening keynote. The first half of the talk was a nice general romp through how her (and the industry’s) coverage of tech companies has evolved over time, moving from a basic “what is this” stance with a somewhat positive-to-quirky overall sentiment toward a “what is wrong/risky about this” stance. The specific risks discussed were not that new or surprising — bots, disinformation, data aggregation and sharing, and privacy are all things that people in this community talk about — but seeing them through a journalistic lens was interesting.

The second half of the talk probed alignments and differences between academic and journalistic analysis of technologies. Alignments include the value of evidence and reproducibility, and the complementary approaches to communication each takes that could work really well together. Those complementary approaches drive some of the main differences, around the nature of a journal versus a newspaper article (nuance, length), and around the goals of an academic audience versus a general audience. There was also a claim that academics are more driven by underlying process/explanation of phenomena, versus journalists focusing more on observation/description of important phenomena, and that investigative journalism has a longer timescale (months) than many academics realize. She also briefly talked about structural possibilities for collaboration, ranging from news reports on papers, to papers with journalistic extensions/applications, to fuller collaboration where the academic side brings expertise and skills on particular technologies that journalists can’t really have.

It made me wonder whether there would be interesting things to do around training journalists to work with technical academics, or vice versa (e.g., what might you tell PhD students about this, and in what contexts/classes?)

Monday late-morning papers: Special populations, and validated scales

The first talk looked at security and privacy issues around visually impaired users, and how these folks interacted with allies/caregivers (sometimes problematically). Nice data collection plan involving relatively intense observations + interviews, across work, public, and home contexts, that sounded potentially rich. However, the talk itself was too focused on things we already have some inklings of, about definitions of privacy and risks from caregivers, that didn’t feel that novel or that specific to visual impairment. The talk made some strong hints about intersectionality and marginalization being important findings, but frustratingly didn’t present much about this — and I wondered whether what is really going on is about aspects of identity (which the paper put its weight behind) versus the needs induced by those aspects (which I think would be a promising way to connect and generalize some of the concerns across needs and contexts). The recommended design approaches, around using participatory design and designing for collaborative use, also didn’t feel that novel.

The second talk was about the privacy challenges faced by older adults (but not ones with cognitive impairments, who were specifically excluded). Probably the strongest finding specific to older adults was a clear articulation of the privacy-safety and privacy-autonomy tradeoffs that come with the need for support from others — and that these tend to be resolved away from the privacy side if monitoring/surveillance is still the best option for maintaining autonomy/agency. Another part of the talk focused on whether participants saw age as increasing or decreasing threat, which came up with an “it depends” kind of answer that resonates pretty well with some of Emilee Rader and Rick Wash’s earlier work on security folk models of threats. Later parts of the talk sounded less focused on older adults’ challenges: hand-me-down technology, misconceptions of how the technology works, strategies for managing risks, and delegating troubleshooting don’t feel exclusive to older adults, and although these might be harder problems for them, it’s hard to make strong claims about that (though to its credit the talk noted this and posed it in part as future work).

+++ I took notes on paper for the third talk and forgot to transcribe them before losing the paper, but looking back at the published paper reminded me that the idea was to better represent users’ privacy concerns in a validated scale that acknowledges that people have different concerns in different contexts (in particular, social versus institutional contexts tend to trigger different constellations of concerns). The scale development work and justifications were pretty thoughtful, and the validation was not bad, using the scale to collect perceived privacy ratings of pairs of interfaces designed to be more or less privacy-risky. Interestingly, the scale was more sensitive to privacy differences in social than institutional contexts; the talk was very up front about this, and about the possibility that One True Scale is not the way to go, versus scales designed for different kinds of privacy problems. This I can relate to; I think a lot of research spends too much time with concepts that are too broad to really grapple with (sometimes papers propose to tackle all of online hostility, or all of disinformation, or lump all of social influence into one construct).

The fourth talk was also a scale development talk, this one presenting a six-item, 5-point scale focused on security attitudes, in particular attentiveness to and engagement with cybersecurity measures. They presented a mediation-type model that places this attitude scale alongside the existing SeBIS scale of security intentions, suggesting that administering both scales likely increases predictive power. The work is grounded in the Theory of Reasoned Action, and the candidate scale items were drawn from a broad range of existing scales related to that. The scale development and validation process were described even more clearly than in the last talk, and given that several other talks used short-scale-type questionnaires they had developed independently, this kind of validated work seems pretty valuable.

Monday after lunch: Security behaviors and experiences

+++ The first talk was a fun little look at how media representations of hacking both help form and are evaluated in part through people’s mental models of hacking. Perhaps not surprisingly, media affect those models, and usually for the worse: they suggest that hacking is more obvious/overt than it usually is (phishing or virus pop-ups being an exception), that hackers are stronger (and encryption and other defenses are weaker) than they are, and that hackers usually have specific and often important targets (thus, regular users have nothing to fear). The talk also had a nice bit about how people evaluate the realism of the media, in terms of their perception of the technical behavior, their ability to relate to the situation and its match with their existing folk models, and some assessment of the cinematic quality. Some overlap with other folk model work, but a nice take on it from the media lens, and the speaker drew a nice parallel to how Hollywood has medical advisors to reduce the risk that medical representations in film lead to negative consequences for viewers, arguing that maybe we should have something similar for security and privacy behavior.

The second talk uses one of B.J. Fogg’s models of behavior, which posits that behavior happens at the nexus of motivation, ability, and a trigger, focusing on triggers of privacy and security behaviors, which they claim (correctly, I think) are less well-studied than motivation and ability. Analysis of a number of stories leads to a typology of these triggers, with three main categories: (1) social triggers based on advice from or observation of other people, (2) forced triggers from external stimuli that bring security/privacy front and center, such as data breaches or required password changes, and (3) proactive/internal triggers based on routine or habit. They then asked MTurkers what motivated recent activities around mobile authentication methods, password updates, uninstalling apps, and Facebook privacy settings. It turns out that trigger type varies based on security behavior (forced triggers, in particular, are more associated with password changes, as one might expect given password expiration policies). It also varies, not surprisingly, based on baseline security behavioral intentions (proactive is most common in folks with high security behavior intention scores), and people are much more likely to share behaviors with others when those behaviors came from a social trigger.

The third talk is a fairly close replication of a SOUPS 2015 study of how experts and non-experts differ in their security advice and practices (I wish the talk had said a bit about why a replication was needed; the paper notes that non-expert practices might have improved over four years). There were a few methods changes from the original, notably changing one question that originally asked for an overall rating of the goodness of a piece of advice; that question conflated effectiveness with how realistic the advice is, so the new study splits those out (and finds several examples of practices rated as effective but not so realistic/implementable, including password managers and 2FA, arguing that this means we need better usability for these tools).

+++ The fourth talk addresses the key verification authentication ceremony in the Signal protocol, in which people verify that some computations on their public keys match to avoid man-in-the-middle attacks. Unlike last year, when they presented a paper that tried to persuade everyone to do it, this year they’re looking for an approach informed by risk communication that encourages this only when it’s needed, in order to avoid imposing unnecessary costs in the face of minimal risks. This is especially important in Signal since usually when the computations change it’s because the software was reinstalled (and there’s no evidence of actual man-in-the-middle attacks in the wild), and many conversations aren’t that risky. It’s also hard for people to understand this because the idea of a safety number, the difficulty of verifying it, and the risk of not doing it are all inscrutable to them (and the dialogs in the current version don’t do a good job of explaining things). Redesigning the dialogs and workflows to simplify them, give more information about the process, and use clearer terminology helps people make more informed decisions and arguably build better mental models of what the ceremony means and what the risks are.

The fifth talk is about consumer experiences of ransomware, via a large-scale representative sample. About 3% of folks were judged to have experienced it yearly (9% overall) based on guided self-reports. Very few people pay, it turns out, although a fair number of folks change their security behaviors post-attack (notably antivirus stuff, being more careful in browsing, and backups — though the talk worried that this didn’t change enough). Perhaps surprisingly, demographics don’t affect likelihood of being a victim once you control for people’s security behaviors and their own prior exposure to online scams. I like that they checked this — I wish people did more work to really measure the constructs they care about instead of easier-to-measure ones (personality traits, I’m looking at you), and when pressed on it by a questioner they were very clear that they didn’t think demographics matter, except inasmuch as they correlate with more relevant constructs — kudos for that.

Monday late talks: “New Paradigms” (with an emphasis on design and usability)

+++ The first talk is about how to do privacy by design in the context of data science work, where you’re looking to balance needed access to personally identifiable information against privacy concerns. The claim is that this would be a lot more effective if people thought hard about estimating how much and what kind of data they really need to do the work, and carefully scoped the work to be able to give meaningful answers to the questions. This implies moving away from a relatively prevalent all-or-nothing mindset about access toward partial and just-in-time access, at appropriate and monitored levels, in auditable ways. This leads to an approach that thinks about privacy loss not so much in terms of properties of the full dataset; instead, you might compute actual privacy risks and losses based on the data actually accessed in the course of doing the work, impose a privacy budget on the analyst’s behavior, and ask analysts to make explicit decisions about accessing specific bits of data while seeing the effect of those decisions on their privacy budget. The talk describes doing this for real in a record linkage task with real experts in both data analysis and privacy loss. I was curious how the privacy loss calculations might scale/adapt to doing multiple tasks on data over time with different analysts, since some of the losses might be cumulative.
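To make the budget idea concrete, here’s a toy sketch (my own illustration, not the system from the talk) of the kind of differential-privacy-style accounting this gestures at: each query an analyst runs is charged against a fixed epsilon budget, and access is refused once the budget is spent.

```python
import random

class PrivacyBudget:
    """Toy per-analyst budget tracker (illustrative; not the system from the talk)."""

    def __init__(self, total_epsilon):
        self.remaining = total_epsilon

    def charge(self, epsilon):
        # Refuse further data access once the analyst's budget is spent.
        if epsilon > self.remaining:
            raise PermissionError("privacy budget exhausted")
        self.remaining -= epsilon

def noisy_count(records, predicate, budget, epsilon=0.1):
    """Differentially private count: charge the budget, then add Laplace noise."""
    budget.charge(epsilon)
    true_count = sum(1 for r in records if predicate(r))
    # Laplace(scale=1/epsilon) noise, expressed as a difference of two exponentials.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# An analyst with a total budget of 1.0 gets roughly ten queries at epsilon = 0.1.
budget = PrivacyBudget(total_epsilon=1.0)
records = [{"age": a} for a in (23, 37, 41, 58)]
print(noisy_count(records, lambda r: r["age"] > 30, budget))
```

In a real deployment, the charging rule, the noise mechanism, and how budgets compose across analysts and tasks over time are exactly the hard parts the talk was getting at.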

The second talk is about “moving from usability to MPC and back again” (MPC being secure multi-party computation), rooted in a problem of aggregating sensitive data — here, gender and race disparities in wage rates in Boston. Half of the talk was about the usability of MPC itself. This is partly about helping non-experts understand the benefits of MPC and trust it; there were some parallels to how the earlier Signal talk addressed creating appropriate explanations aimed at the target audience rather than the developers. It’s also partly about the problems of validating encrypted, distributed data, which they look to address by constraining input based on domain knowledge. The second half addressed how one might use MPC methods to do web analytics without collecting and distributing individual-level data, by computing analytics measures locally and aggregating them using MPC.

The third talk looks at the somewhat alarming rise in phishing websites’ use of valid HTTPS certificates, and whether the certificates they create are distinguishable from legitimate sites’ certificates, both in general and in the case of specific targets. Phishing sites tend to have more duplicate and invalid certificates on average, but still plenty of valid ones — and although the measurable features of those certificates vary in terms of type of validation and identity of the certificate authority, they don’t vary enough to build strong models that would predict the likelihood of a phishing-based certificate, especially if you care about false positives at all. Once you get to the level of impersonating a specific target’s certificate, phishing sites do a lot worse, rarely being able to populate their certificates with target-specific data (interestingly, unless the phishing website was hosted on a service offered by the target).

Then there were lightning talks. The first was based on one of the posters, a Star Wars-themed one. The second was about “cognitive aspects of socially engineered payment diversion fraud”, and though they never quite defined what that meant, it was surprising that victims thought they were less likely to be hit in the future (despite the clear evidence they are vulnerable), and also that phishing training didn’t seem to reduce these risks. The third was also based on a poster, around designing more inclusive alerts for privacy and security, calling out a couple of methods for doing so (inclusive design; lead user/extreme/early adopter personas) that might have benefits for the general population as well as the target users. The last one discussed a “usability, deployability, security” analysis of Android Security Keys for two-factor authentication (2FA) and how they compare to USB-based 2FA, suggesting they’re a little better because you don’t need the device pairing or device mounting you often need with USB key-based solutions.

Tuesday morning talks: Developers

+++ The first talk was a survey + interview study of relatively small/independent app developers and their perceptions around using advertising in apps. It turns out they see using ad networks as a practical necessity relative to other monetization models (like charging up front), though largely based on their own perceptions rather than data, and they don’t feel like they make that much money anyway. They also claim to want to protect user security and experience, but choose ad network defaults primarily to minimize time and effort on their end, without deep consideration of either the monetary benefits to them or the privacy risks to users. Finally, they don’t do much monitoring of the ad network’s behavior, seeing it as something that’s both the network’s responsibility and something they don’t have much power over. Given these findings, the talk presented reasonable high-level follow-on research questions/design ideas, and overall it felt like a nice, useful descriptive study of this context.

The second talk worked to apply the idea of “code smells” to crypto API library usability (“usability smells”), based on some API usability principles proposed by Green and Smith. Interestingly, they started with some work analyzing Stack Overflow to see how well those principles lined up with the actual usability problems developers reported, although I wasn’t so convinced by the analysis that worked to map the issues into a taxonomy. The descriptions of specific issues didn’t feel like great fits for the higher-level categories the talk proposed, nor did the mappings of issues to the proposed “usability smells”; those were also more high-level and generic than most of the descriptions of code smells I see floating around online. So I wasn’t sure this would actually be useful for API developers or users to heuristically evaluate API usability, even though I like the high-level idea.

The third talk was about firewall interfaces for sysadmins. The highest-level claim is that, at least for this task, survey results show that the common wisdom that sysadmins prefer command-line interfaces is wrong. In that survey, sysadmins reported being pretty skilled overall with firewalls and spending only a limited amount of time per week on them. This means that some of the advantages of CLIs, including flexibility and access to all functions, tend to be less salient to people who don’t need to get deep into the options very often. Folks who liked the GUIs liked that they tended to present info/status more effectively, and gave enough of the common functionality that they beat remembering CLI commands for something they don’t use that much. This sounds like part of a more generalizable story about the relative advantages of CLIs versus GUIs with respect to task frequency and difficulty; I’m not sure there’s much specific about firewalls or security here.

+++ The fourth talk was about how sysadmins think about and deal with software updates. They’re more proactive than end users in seeking out updates (which makes sense), and describe info related to updates as scattered across many places, which makes researching and deciding on updates harder than it should be. The talk then spent some time on update testing and deployment processes, pointing out that sensible-sounding staged/staggered rollout strategies mean that vulnerable and user-facing production machines often wait a relatively long time for updates. (It also means that for a while you’re running different environments, which might increase the costs/load of supporting them.) It’s also surprisingly hard to develop concrete deployment strategies that minimize disruption, and to make decisions around buggy updates. All of this is constrained by organizational context, where policy, coordination, and resources affect what sysadmins can actually do.

Tuesday late morning: Authentication

The first talk addressed the usability of continuous (or implicit) mobile authentication, looking at the cost/benefit of required re-authentications when the continuous authentication method gets suspicious or stale. Those re-authentications can be intrusive and interruptive, so the talk looks at methods the system might use to reduce that pain, including encouraging voluntary re-authentication at less intrusive times and providing info about the system’s current confidence in the user’s authentication status to help people understand and decide to voluntarily re-authenticate. Clever idea, although some of the implementations they tested (particularly one that gives a 10-second warning before imposing a re-authentication) don’t feel like good designs relative to either no warning or continuous status that people can check and act on between tasks; that said, their user study suggests that even the 10-second warning is better than no warning, and all warnings increase people’s use of voluntary re-authentication, which is arguably good. All task interruptions turn out to be super annoying, though, so they emphasize the idea of trying to delay re-authentication at least a little, to a time when the person isn’t doing an important task or accessing sensitive information. Those recommendations aren’t so surprising given prior work on interruptibility by Iqbal, Bailey, Fogarty, and others, but it’s good to think about them in the context of balancing usability and security.

The second talk looks at the extent to which people can intentionally vary biometric measures (such as keystroke touch features on mobiles), which is perhaps interesting because most biometrics work assumes these features are byproducts rather than intentional. The idea is that this might highlight problems with those assumptions or provide new resources for stronger authentication. To do this, they provided people with a visualization of their typing behavior with respect to these features (based on the timing, length, and location of presses on keys), along with a visualization of a desired target behavior that differs from their current one. The upshot from measurement is that people can often, but imperfectly, do this, and (perhaps not surprisingly) more complex variations, in terms of the amount of and relationship between changes, increase the error rate. Their conclusion is that this might be a useful adjunct to password creation, and more generally that we should think harder about keystroke biometrics as a controllable variable and what that implies. I thought about piano players and the level of control they have over pressure and timing, and concluded that there might be something here.

Mecke, L., Buschek, D., Kiermeier, M., Prange, S., & Alt, F. (2019). Exploring Intentional Behaviour Modifications for Password Typing on Mobile Touchscreen Devices. Proceedings of the Fifteenth Symposium on Usable Privacy and Security (SOUPS 2019).

+++ The third talk is about why people don’t use password managers, or don’t use them effectively. The first observation is that “password manager” can mean both things that save entered passwords, like web browsers or Apple’s Keychain, and standalone password generation/management-focused tools, and those should be thought about separately. Part of the issue for people who don’t use them is that some nontrivial number of people don’t even know what a password manager is; others don’t know about or trust web browsers saving the info, or conclude it doesn’t work well when related tech like “Remember me” checkboxes doesn’t use that saved info. Password generator-managers are seen as meaningfully more secure, but hard to set up and use, and raise worries about single points of failure, both around security from others (including other authorized users of the computer) and around forgetting one’s master password.

The fourth talk is about the potential of Universal 2nd Factor (U2F) keys, given some known issues around them, including the potential for loss, the utility of SMS-based 2FA, and the vagaries of personal experience shaping how willing people are to interact with them. To this end they did a lab study of setup and a diary-based one-week field study of everyday use. In the lab study, the high-level result was that key setup was a little harder, but login a little faster, with keys than with SMS. The diary study found that sites kept sending SMSs even when people registered and preferred the hardware keys — annoying — and also that sites supported automatic logins that reduced key use — sad for security; the conclusion is that keys might be most effective for specific use cases (sensitive data, public computers).

The fifth talk is about the usability of second factors one might use in two-factor authentication, with a very similar setup to the prior talk: a lab study of setup and a two-week study of use (in a simulated banking website with experimenter-posed banking tasks). The high-level takeaway is that improving the usability of setting up second factors is pretty important — and that thinking about differences in users’ capabilities (vision, age) in this domain would be important as well. The U2F key was fastest to authenticate but lowest-rated in terms of usability scale ratings, and more prone than most other methods to setup failures because of the workflow. (Passwords were, interestingly, rated as more usable than any of the second factors.) More generally, they confirmed a lot of common themes about security use in general and 2FA in particular, in terms of people’s perceptions of their security threats affecting their preferences, and worries about the availability of the required login info/codes/devices. SMS performed less well overall in terms of use time, setup time, and usability than I might have expected. Time-based one-time passwords were relatively hard to set up (again, an issue with the setup workflow), but pretty usable once set up.

Tuesday afternoon: Privacy

The first talk looked at how the rules about the right of access in GDPR create privacy risks through possible errors in how organizations verify the identity of requesters. They test this, interestingly, by having two authors attempt to get information about each other through right-of-access requests using information faked in the kinds of ways a hacker or social engineer might fake it. The attacks worked about 25% of the time on 55 large companies (and of those successes, about a fifth leaked information about the wrong person). It’s a cute talk, and I appreciate the ethical discussion they had; my main question is whether this is especially a problem with GDPR versus anything that provides avenues for recovering personal information. Interestingly, I saw a newspaper article a few days after the conference reporting a very similar finding (from a different person, I think).

The second talk is a content analysis of the privacy choices provided by 150 websites related to opting out of email, targeted advertising, and data deletion. Interestingly, and on the plus side, many privacy choices are located in multiple places (though less so as sites get smaller, and primarily around email subscription management). On the minus side, reading grade levels are very high for descriptions of privacy choices, and their wordings are inconsistent between sites, making it harder for users to engage with them. Further, the described impact of those choices is often ambiguous, and the user interaction style is often pretty terrible. Maybe not surprising, but interesting to have catalogued in detail with concrete evidence. The recommendations fit the findings pretty well, including language standardization, centralizing privacy choices, and simplifying privacy controls.

+++ The third talk is about neural habituation/generalization to security notifications, which are often repetitive and share a similar look and feel with non-security-related elements. The idea of ignoring security messages is well-known, but actually connecting it to neurological theory and using that to drive the modeling is cool, and lets you differentiate habituation from fatigue. They do it in a controlled design experiment on MTurk where they control for design similarity, the amount of similar stimuli seen, and fatigue vs. habituation vs. generalization. Conclusion: make security messages visually distinct — worth doing even though (and I suppose theoretically, because) this violates general UI guidelines.

Then, more lightning talks. The first pointed out that, since seizing cell location data now requires a warrant based on Carpenter v. United States, we should think about what other kinds of location data should be similarly protected; based on a survey, most users think that most such data should be protected (and deleted as soon as it’s no longer needed). The second described a notification system (e.g., for application notifications such as arriving emails) that is privacy- and security-aware, addressing concerns of both users and notifiers. The third described a case study of building a UX team inside a privacy/security product organization, with some thoughts on building the business case, what to look for in employees, and how to integrate UX into the company’s development process. The last called for designing social (versus informational) interventions to address the privacy paradox and described a general research approach toward it (reading a little like a doctoral consortium talk).

Tuesday late afternoon talks: Privacy and Sensing

The first talk looked at experienced fitness-tracker users’ practices and concerns around sharing fitness data. There’s a lot of work on this around sharing in social media, but here the authors are looking to understand motivations and audiences for sharing very broadly, in many contexts and circles. And that’s what they found: people share with a wide variety of recipients, from friends and family to physicians and fitness activity incentive programs (with very different goals for each group). Further, they found that on average people don’t see fitness data as very sensitive in itself, versus with respect to social norms and impression management. (Though one questioner made the cool point that if they were _asked_ for some of this very same data, versus sharing it _themselves_, they might see it as more intrusive/risky.)

The second talk looked at people’s perceptions of the risks of smart home-type devices, through a combo of eliciting mental models and interviews. The models varied; some were more focused on the technical flow of the data, while others focused more on the functions of the devices. Perceptions of risk, appropriate use and sharing, data retention and controls, etc. also varied in ways that align reasonably well with how people talk about these issues in other contexts (as did the summary conclusions about people’s perceptions of risk and ability to act) — good for face validity, but leaving me less clear about how this moves the conversation forward. Perhaps attitudes/knowledge around devices have evolved since other studies, and there is some value in confirming general findings in new contexts, and the paper notes aspects of both of these, but I wish this had been clearer in both the paper and the talk.

The third talk was about user understanding of, and security and privacy perceptions around, voice-based personal assistant ecosystems. The big takeaway is that people don’t think much about the system’s ecosystem; this, in turn, leads to a tendency to think that data is processed locally, without understanding the data-sharing implications of third-party skills or of interacting with other smart home devices. As with the other talks in this session (and really, the first two in the first session), I really wanted to know how this built on what’s already known about related concepts; the last paper, for instance, explicitly noted that its findings aligned with more general internet perception findings. This paper did have a piece focused on perceptions of shopping through smart assistants that surfaced some interesting concerns and felt more novel; it made me wonder if there were other specific use cases or domains that would surface interesting specifics about people’s beliefs and behavior.

Then, one last brace of lightning talks. The first looks to come up with a useful gloss on privacy as “consistency of the informational behaviors of a system with the reasonable expectations/norms of its users” and a related paradigm for thinking, designing, and developing around it. The second describes an OSS project, “2FA notifier”, designed to support use of 2FA on popular websites that provide it, by helping users be aware of it and enable it without having to be proactive about it. The third proposes that password logins should allow for mild errors in the password content by using some keystroke dynamics to help distinguish legit users from attackers, and giving that slack to presumed-legit users. +++ The last talk suggests that instead of focusing on building trust in technology as a way to encourage usage, we might treat trust as a side effect of desirable designs. Instead of thinking trust, think adoption motivation; more generally, “trust” is an overloaded term, and you should probably be measuring something more directly related to what you’re really trying to build/create/enhance. This I am totally on board with, from seeing how it plays out around other complex concepts treated too generically, including “harassment”, “disinformation”, and “influence”.

#30#

Getting and giving more out of NSF reporting

tl;dr: Treat NSF reports as a required structural opportunity to celebrate, reflect, and plan. Give program officers (just) enough info to understand, share, and think about the cool outcomes and real impacts of the projects.

More detail: My goal with this post is to help you get more personal value out of writing annual NSF reports and also to make them more useful to NSF. I am writing this in my personal role as a faculty member who happens to have experience as an NSF program officer, not in any official capacity; that said, I’ll talk about my perspective on this both as a PI and based on my experience as a program officer. [A]

Let’s start with the PI side of this, about developing a positive attitude toward report-writing and what you can get out of it. Reports have real value to NSF [B], but the value often isn’t apparent to PIs themselves. My own experience with my first few reports was a little negative. Poking around on the web for report-writing advice turned up phrases like “grit your teeth” and “those darn annual reports”, and they weren’t very helpful. I apparently wrote OK reports and haven’t had one returned, but early on I wasn’t sure why I was doing it except that it had to be done.

Then, around year three, I started thinking about the reports as a chance to celebrate, reflect, and plan. It felt good to talk about people I worked with, mentored, and taught, and the knowledge they gained and discovered. It was cool to see how my thinking evolved over the course of the project given circumstances and people, to step outside of the activity of research and do a little bit of meta-level thinking about it [D], and to consider where the work was going and what it meant for the field.

A number of folks have come to a similar framing about proposal writing as a chance to step back and think about what’s important; I encourage you to do that for reporting too [E] — treat reports as a required structural opportunity to celebrate, reflect, and plan. These — especially the first two — are activities that I don’t spend enough time on as a faculty member.

I’m not going to talk much about the requirements or individual sections of the report, as plenty of documents do this [F]. In particular, the Community for Advancing Discovery Research in Education has a nice description of the requirements and most of what I would say would be redundant.

Instead, I’m going to switch gears and talk about my personal experience reading the reports as a program officer, and what made reports more useful and satisfying to consume [G]. Frankly, my early experiences reading annual reports were similar to writing them as a PI: guidance and rationale were minimal [H]. Then an experienced program officer and deputy division director pointed out that beyond the general NSF rationales, reports are the main structural opportunity for program officers and PIs to communicate about awards in progress.

As with the celebrate-reflect-plan framing on the PI side, communicate-engage on the program officer side made reviewing reports a lot more rewarding. It was nice to be able to drop people positive comments on their projects, including occasionally sharing ideas the reports sparked, and it definitely helped me understand areas that weren’t in my wheelhouse [I].

This works best when the report does a good job in the Accomplishments section of reminding me about the key goals of the project and reporting period. Then a thoughtful-but-brief summary of activities is helpful for knowing how you’re attacking the problem; rambling descriptions are less useful. The most useful reports say a little more about the interesting outcomes and how they contribute to the field. Emphasizing findings is valuable because these reports are a main way program officers stay up on a broad range of projects and fields; we can’t attend/read all the conferences and journals our PIs engage with [J]. Good reports also give useful highlights about the education, outreach, and broader impacts aspects of the project.

I also checked the products and participants sections. NSF wants the products to be correctly uploaded — and papers to acknowledge support — so they can be tracked and associated with the awards. In particular, publications properly entered will appear along with your award abstract in the NSF award database, and that’s useful for us, you, and future folks looking at awards. I often saw issues in the participants section, with PIs failing to list the folks who contributed to the project and describe their contributions (see question 4 in the Division of Environmental Biology’s blog post on the topic for useful thoughts on this); this often caused me to send reports back for revisions.

The impacts section is another place I often saw problems. I think this is in part because it’s hard to articulate concrete impacts, especially early in a project’s lifespan, in part because impact tends to be cumulative in a way annual (or “final”, which is really just “the last annual report” [K]) reports aren’t, and in part because we often don’t spend enough time thinking about the impact of our work beyond the papers. Too many reports default to the same generic, hopeful language proposals often use about potential impacts — in the worst case, cutting and pasting from the proposal. Generalities are not useful, and as a program officer I preferred that a report say “nothing to report” on aspects of the impact section rather than make stuff up, or just repeat findings from the accomplishments section (another common approach).

Instead, compelling impact sections give specific descriptions of, evidence for, and/or concrete plans to increase the impact of the project and the underlying research. Are other people reacting to the work, in the main discipline or others, as shown through citations, awards, invited talks, syllabus use, new collaborations, or other concrete evidence that they are thinking about the work? Are students getting valuable experiences and outcomes from project activities, both as research participants and students in courses? Are educational, dataset, source code, implementation, infrastructure, and other materials released to the public, documented, maintained, evangelized, and used by others? Are there concrete possibilities for tech transfer or actual impacts on society beyond “this might be useful, someday”? And for any or all of these, does it make sense to plan activities to increase the chances of having these kinds of impacts? Going back to the lead for this post, report writing should have some benefits for you — and taking a chance to think about how to increase the impact of your work is one of those [L].

And that’s where I think I’ll leave it. As a reminder, this is my own thinking about reports from both the PI and program officer side, and not official NSF policy or prescription, but hopefully it’s useful in helping you both in the writing of your reports and thinking about your work.

#30#

[A] And, as always when I mention NSF, this is my own thinking and does not represent in any official way the opinions of NSF.

[B] NSF offers lots of good reasons to do reporting from an NSF perspective, e.g., accountability for the funded PIs, as well as tracking research and educational impacts and specific outcomes. These are good things to do. Not completing reports in a timely manner also impacts one’s ability to get future funding. That said, these talk mostly about why annual reports are good for NSF, not for you.

[D] For what it’s worth, “go meta” is my number one piece of generalizable advice about being an academic. Don’t just read the paper or go to the talk or listen to the lecture; think about the genre, and what works and doesn’t work for you, and why, and use that going forward. Don’t just review the paper/proposal or do the program committee/review panel; use it as a chance to think about quality science and how people think and talk about it. Then use these meta-insights to be a better reader, writer, teacher, reviewer, community member.

[E] Not everyone buys in; I remember advocating for this at a faculty meeting and being called “Panglossian”. Perhaps true, but it still helps me both feel better about report writing and write better reports.

[F] These include official guidance on NSF’s take on annual reporting (as of 2016 but still current as I write in early 2019), including special instructions for writing reports for conferences/workshops/doctoral consortia and the like, and more info on the mechanics of process and using research.gov to do the reporting.

[G] There are a couple of documents from NSF itself that also have somewhat more detailed thoughts on good report writing, including one from the Brain and Cognitive Sciences division and another from the Division of Environmental Biology. Some of the things I say in this document are based in part on these, along with conversations with other program officers, largely in the Division of Information and Intelligent Systems.

[H] That said, NSF has pretty good high-level training for program officers and a good community of practice that includes both other program officers and especially deputy division directors, the unsung heroes of NSF management who absorb an enormous amount of both corner cases and institutional memory. But it’s got many of the same apprenticeship model characteristics that doing a PhD (or really, being a faculty member) has.

[I] Program officers cover a lot of territory, not all of which is their specific expertise. Further, program officers (especially permanent ones) sometimes wind up adopting awards pretty far from their own areas, for example, when a rotating program officer in charge of certain topics leaves.

[J] Interesting outcomes are also fun to share with other program officers and NSF’s outreach people.

[K] A related issue is that for NSF, a “final report” is not cumulative; it’s just a final “annual report”, and should only cover the last year of activity. This confuses many PIs, and I found I had to return a number of “final” reports for this reason.

[L] Thinking about providing evidence of impact was also important in my post on writing research statements, so that might be worth a read (and contains pointers to other notions of impact and people who’ve spoken about it as well, including Elizabeth Churchill’s thoughts and Judy Olson’s Athena Award talk).

Personal trip report thoughts on SOUPS 2018

I wrote a trip report on SOUPS 2018 (the Symposium on Usable Privacy and Security) for other folks at NSF since NSF paid for it, and I thought I would go ahead and share a lightly edited version of it more widely because I like to call out other people’s interesting work with the hope that more people see it. As always, the views in this post are mine alone and do not represent those of my NSF overlords.

SOUPS, founded and oft-hosted by Carnegie Mellon, is historically a good conference focused on the human and design side of security and privacy in systems; here’s the 2018 SOUPS program, for reference. I’m a relative newcomer to SOUPS, having only come since 2017 in my role in NSF’s Secure and Trustworthy Cyberspace program. So, this may be a bit of an outsider view — perhaps not so bad to get from time to time. I’ll structure the report in three main bits: first, to highlight a couple of themes I liked that were represented well by particular sessions; second, to note some other papers I saw that triggered pleasant paper-specific reactions; and third, to gripe a bit about a wider CHI problem that I also felt some of at SOUPS this year and last: that many papers are too focused on particular new/novel contexts and not enough on learning from past work and building generalizable, cumulative, fundamental knowledge upon it.

Some cool sessions on risks close to home, inclusiveness, and organizational aspects

Gripe aside, I liked a number of the sessions I saw. The last session of the first day was the highlight for me, with a clear theme around the privacy risks posed by those close to us (friends, family, associates) versus risks imposed by outsiders (strangers, companies, governments). The first paper, by Nithya Sambasivan et al., looked at this in the context of phone sharing among women in South Asia, and how technical novelty and cultural norms combined to shape attitudes about and actions toward privacy risks. The talk had some interesting bits about trying to increase the discoverability of privacy-enhancing behaviors and mechanisms such as deleting web cookies/history or private browsing modes.

The second paper in that session, by Yasmeen Rashidi et al., focused on how college students deal with pervasive, casual photography by those around them (mostly, as Anita Sarma pointed out, focusing on overt rather than covert photography, which I thought was a nice observation). The study used a method I hadn’t bumped into before called an “experience model” that summarized key moments/decisions/possible actions before, during, and after photo sharing; I thought it was an interesting representation of ethnographic data with an eye toward design. The beneficial aspects of surveillance in college fraternities reminded me of Sarah Vieweg and Adam Hodges’ 2016 CSCW paper about Qatari families experiencing social/participatory surveillance as largely positive — surveillance is generally cast as pure negative, but there are contexts where it’s appropriate and meaningful.

The third paper, by Hana Habib et al., compared public and private browsing behavior using data from the CMU Security Behavior Observatory. Perhaps not surprisingly, people do more private/sensitive stuff in private modes, but maybe more surprisingly, self-reports aligned reasonably well with logged data. Here, too, there was evidence that people were at least as concerned about threats from co-located/shared users as about threats from external users. There’s also evidence that people assume private browsing does more privacy-related work than it really does (for instance, some folks believed it automatically enables encryption or IP hiding), possibly to people’s detriment.

The fourth paper in the session, by Reham Ebada Mohamed and Sonia Chiasson, was close to my own heart and research, with connections to Xuan Zhao, Rebecca Gulotta, and Bin Xu’s work on making sense of past digital media. It focused on effective communication of digital aging online through different interface prototypes (shrinking, pixellation, fading), which made me think straightaway of Gulotta et al.’s thinking about digital artifacts as legacy. But unlike that work, which was more about people’s reactions to their own content fading, this paper was more about using indicators of age to make the pastness of a photo more salient in order to evoke norms and empathy around the idea that things in the past are in the past and thus, as Zhao et al. argued, often worth keeping for personal reasons but not necessarily congruent with one’s current public face. The talk also explicitly put this kind of analog, gradual aging in opposition to common ways of talking about information forgetting as digital, binary, absolute deletion, and that was fun as well (and well-aligned with Bin Xu, Pamara Chang, et al.’s Snapchat analysis and design thinking).

Another nice first-day session was a set of lightning talks that clustered, broadly, around inclusion and empowerment in security and privacy issues. These included a general call toward the problem from Yang Wang, a focus on biased effectiveness of authentication systems for people of various demographic categories from Becky Scollan, a discussion of empowering versus restricting youth access online from Mariel Garcia-Montes, and a transtheoretical model-based call to develop personalized, stage-appropriate strategies to encourage self-protective privacy and security behavior from Cori Faklaris. On balance these were interesting, and more generally I like the move toward thinking about inclusive privacy/privacy for particular populations, both for their own sake and as edge/extreme cases that might speak back to more general notions of privacy.

On the second day there were also some fun talks I saw in the last session (detailed notes, alas, lost in a phone crash). These included Julie Haney and Wayne Lutters on how cybersecurity advocates go about their work of evangelizing security in corporations; James Nicholson et al. on developing a “cybersecurity survival” task, paralleling the NASA Moon Survival Task, to get insight into IT department versus general company attitudes toward security, which looked both promising and well-put-together; and a paper by an REU site team, presented by Elissa Redmiles, about co-designing a code of ethics with VR developers around privacy, security, and safety. It was nice to see an example of a successful REU site experience, and it highlighted a framing of people’s desire for “safety” in cyberspace that I think might make for a root goal concept, of which “private”, “secure”, and “trustworthy” each capture some aspects as means.

Some cool papers

There were also a number of individual papers that caught my eye, including one by Sowmya Karunakaran et al. from Google about what people see as acceptable uses of data from data breaches. They had some interesting stories about both cross-cultural and cross-scenario comparisons (being able to survey 10K folks from six countries has its advantages); probably the most surprising tidbit was that people were least happy about the idea of academic researchers using these data — less so than targeted advertising, and much less so than notifications/warnings/threat intelligence sharing. I say surprising because some folks have observed that Amazon Mechanical Turk workers are more comfortable sharing personal data in tasks posted by academics than by others because academics are perceived as both more trustworthy and more legitimate (though Turk is different from breaches, since Turkers have the choice of whether to participate or withhold data, which they don’t in the case of the breaches). The ordering also roughly paralleled the amount of personal benefit the breach victims perceived for each use, which makes sense; it might be interesting to run a comparable parallel study around appropriate uses and users of non-breached, but openly released, datasets of social trace data.

There was a nice null-results paper by Eyal Peer et al. on whether face morphing — blending two or more faces into a composite — can influence decision-making by blending a person’s own face subliminally into the face of a person in an advertisement or communication campaign. This had a lot of theoretical juice behind it based on the prior face morphing literature and more general work around influence and cognitive psychology, so it was surprising that it didn’t work at all when tested. This caused the team to go back and do a mostly-failed replication study of some of the original work on face morphing’s impacts on people’s likability and trust ratings of images that included their faces. I admire the really dogged work by the team to chase down what was going on, and it’s one more data point in the general story of research replicability; it might be a nice read for folks wanting to teach on that topic.

Susan McGregor’s keynote on user-centered privacy and security design had a couple of cool pieces for me. First, there was a bit about how standards for defining “usability” talk in terms of “specified” users and contexts, which raises cool questions about both who gets to do the specifying and how to think about things as they move outside of the specified boundaries. Not a novel observation, but one worth highlighting in this context and related to the inclusive privacy discussion earlier. Second, there was a nice articulation of the distinction between usability and utility, and how scales/questions for measuring usability can accidentally conflate the two. For instance, something that might be rated “easy” to use might really be not that easy, but so worth it that people didn’t mind the cost (or vice versa; I remember a paper by Andrew Turpin and William Hersh in 2001 about batch versus interactive information retrieval system evaluation that suggested that a usable-enough interface can make up for some deficits in functionality). This raises ideas around how to develop scales that account for utility: rather than “usable”/“not usable”, what if we asked about “worth it”/“not worth it”? Some posters in the poster session made moves toward this idea, trying to measure the economic value of paying more attention to security warnings or of space/time/accuracy tradeoffs in a secure, searchable email archive.

I also liked Elham Al Qahtani et al.’s paper about translating a security fear appeal across cultures. There’s been some interesting work in the Information and Communication Technologies for Development (ICTD/HCI4D) communities showing that peers and people one can identify with are seen as much more credible information sources. This implies that you might want to shoot custom videos for each culture or context, and that turned out to be the case here as well — though just dubbing audio over an existing video with other-culture technologies and actors turned out to be surprisingly effective, raising cost-benefit tradeoff questions. Sunny Consolvo noted that Power Rangers appears to be able to use a relatively small amount of video in a wide variety of contexts, and that there might be strategies for optimizing the choice of shooting a small number of videos, the closest-fitting of which for a given culture/context could then be dubbed into local languages. Wayne Lutters had an alternate suggestion, to explore using some of the up-and-coming “DeepFake” audio and video creation technologies to quickly and locally customize videos — presumably, including one about the dangers of simulated actors in online content. 🙂

Norbert Nthala and Ivan Flechais’ paper about informal support networks’ role in home consultations for security reminded me quite a bit of some of Erika Poole’s work around family and friends’ role in general home tech support. The finding that people valued perceived caringness of the support source at least as much as technical prowess was both surprising and maybe not-surprising at the same time, but was good to have called out for its implications around designing support agents and ecosystems around security, privacy, and configuration.

There was also a nice, clean paper by Cheul Young Park et al. about how account sharing tends to increase in relationships over time, a kind of entangling that to some extent accords with theories of media multiplexity (gloss: people tend to use a wider variety of media in stronger relationships, though it’s not clear what the causal direction is). The findings had nice face validity around the practicalities of merging lives, ranging from saving money on redundant subscription service accounts such as Netflix to questions of intimacy around sharing more sensitive accounts. It also raises the question (in parallel with Dan Herron’s talk at Designing Interactive Systems 2017) of how to design account systems that can robustly handle relationships ending and disentangling.

A call for more generalizable, cumulative work

Now, to the gripe. The highest-level thing I liked least, based on my experiences there both last year and this year, is that too much of SOUPS focuses on descriptive/analytic work around specific new security and privacy contexts, without enough consideration of underlying principles about how people think about security and privacy, and how studying the new contexts adds to that. It’s important, for instance, to study topics such as those in Cara Bloom et al.’s 2017 paper on people’s risk perceptions of self-driving cars or Yixin Zou et al.’s paper on consumers’ reactions to the Equifax account breach (which won a Distinguished Paper award). These are relevant contexts to address, and from what I remember the presentations/posters I saw about them were pretty good in and of themselves.

But for my taste, on average I don’t think we do enough work to connect the findings from the specific domains and studies at hand to more general models of how people think about trustworthy cyberspace, and how properties of the contexts and designs they encounter affect that thinking. For example, what do we learn about studying the risks of self-driving cars relative to other autonomous systems, or drones versus social media photo sharing versus (surveillance) cameras, or new IoT setups versus more classic ubiquitous computing contexts, or to point back at myself a bit, how Turkers’ privacy experiences add to our understanding of privacy and labor power dynamics more broadly? To what extent are there underlying principles, phenomena, models that could help us connect these studies and develop broadly-applicable models?

This is related to a more general concern I have in the human-computer interaction (HCI) community about how methods that encourage deep attention to one context or dataset — including but not limited to many instances of user-centered design, ethnography, grounded theory, and machine learning modeling — can lead researchers to ignore relevant theoretical and empirical research that could guide their inquiries, improve their models, and more rapidly advance knowledge. (Anyone who wants an extended version of this rant, which I call “our methods make us dumb”, is free to ask.) I also see a lot of related work sections whose main point appears to be to claim that this exact thing hasn’t been done exactly yet, rather than trying to illustrate how the work is looking to move the conversation forward. This, also, is not SOUPS-specific; you see it in many CHI papers (and, it turns out, CHS proposals).

Okay, gripe over and post over as well [1]. Hopefully there were some useful pointers here to help you with your own specific topics, and may your thinking and findings be broad and useful. 🙂

#30#

[1] For once, no footnotes. [2]

[2] Oops.

Finding NSF programs and program officers for your research

tl/dr: Figuring out where to send proposals at NSF can be confusing. Understanding NSF’s org structure and solicitation mechanisms, using NSF’s award search tool (and colleagues) to look for programs and program officers that manage awards related to your work, and effectively working with program officers to find good fits can help you out.

More detail:

Getting started with applying for funding can be pretty confusing, even if you have good mentors, and as both a mentor and now a three-year rotating program officer at the National Science Foundation I’ve answered versions of this question many times. So, I figured it was time to write down some of the things I often say, though as always, these views represent my personal opinion and experience and not those of my NSF overlords. Further, there are many folks with many opinions on the topic, so ask and search around (though I was surprised not to find too many posts about this when I was putting this together).

I’ll organize the post around three main themes/tasks: (1) understanding NSF’s organizational and solicitation structure, (2) finding places in that structure that might fit your work, and (3) investigating those places through contacts with program officers and panel/review service.

First, structure, because it’s helpful to understand the basic mechanisms through which NSF solicits proposals. The root organizational structure is a hierarchy that broadly aligns with a swath of academia’s own organization of fields, with the top level being Directorates: CISE (Computer and Information Science and Engineering), SBE (Social, Behavioral, and Economic Sciences), ENG (Engineering), EHR (Education and Human Resources) and so on. [1] Directorates contain Divisions; inside of CISE, for instance, are three — CCF (Computing and Communication Foundations), CNS (Computer and Network Systems), and IIS (Information and Intelligent Systems) — along with OAC (the Office of Advanced Cyberinfrastructure). Then inside of Divisions are typically Programs; IIS, for instance, contains RI (Robust Intelligence), III (Information Integration and Informatics), and CHS (Cyber-Human Systems).
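Just to make the nesting concrete, here’s the slice of the hierarchy above written out as a little data structure (a sketch only, using just the names mentioned in this post; NSF reorganizes from time to time, so check nsf.gov for the current picture):

```python
# A sketch of the slice of NSF's hierarchy described above:
# Directorates contain Divisions (and Offices), which contain Programs.
nsf = {
    "CISE": {                         # Computer and Information Science and Engineering
        "CCF": [],                    # Computing and Communication Foundations
        "CNS": [],                    # Computer and Network Systems
        "IIS": ["RI", "III", "CHS"],  # Information and Intelligent Systems, containing
                                      # Robust Intelligence; Information Integration
                                      # and Informatics; Cyber-Human Systems
        "OAC": [],                    # Office of Advanced Cyberinfrastructure
    },
    # SBE, ENG, EHR, and so on are siblings of CISE at the top level.
}
```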

Most of the core programs have some kind of core solicitation attached, to which you can submit proposals. So, for instance, you wouldn’t submit to CISE or to IIS; you might submit instead to one of the core programs inside of it. This isn’t NSF-wide (in EHR, the EHR Core Research solicitation crosses the whole directorate, for instance), but for programs that field solicitations it’s the general structure [2].

There are also cross-cutting solicitations that, as the name implies, cut across the hierarchical structure and that multiple organizational units at NSF fund and administer together. Some are foundation-wide things like CAREER; some are broad cross-cutting ones like SaTC (Secure and Trustworthy Cyberspace) that multiple directorates participate in; some are cross-cutting but within individual directorates, like CRII (CISE Research Initiation Initiative) and CCRI (CISE Community Research Infrastructure) [3]. You’ll also sometimes see a Dear Colleague Letter come out that asks for proposals in a specific topic or area, or that invites supplements to existing awards for a specific purpose [4].

Now that we understand solicitations can come from many places and take several forms (core solicitations, cross-cutting solicitations, and dear colleague letters that contain requests for proposals), the next trick is finding ones that might fit you [5].

To that end, NSF’s award database has a lot of value. Using various keywords that sound like your research [6] will bring back award abstracts that show you what’s being funded (pay attention to the award dates, though — sometimes you will get pretty old awards) as well as the programs and program officers who are managing those awards. Those are places and people that you should be aware of as possible funding targets.
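If you’d rather script than click, the award database is also exposed through a public web API. Here’s a minimal sketch in Python; I’m writing the api.nsf.gov endpoint and field names below from memory, so treat them as assumptions and double-check against NSF’s current API documentation before relying on this:

```python
# Minimal sketch: keyword search against NSF's public awards API.
# The endpoint and printFields names are assumptions to verify against
# NSF's API docs; the point is just that program names (fundProgramName)
# and program officers (poName) come back with each award.
import requests

def search_awards(keyword):
    resp = requests.get(
        "https://api.nsf.gov/services/v1/awards.json",
        params={
            "keyword": keyword,
            "printFields": "id,title,date,fundProgramName,poName",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("response", {}).get("award", [])

if __name__ == "__main__":
    for award in search_awards("social computing"):
        print(award.get("date"), "|", award.get("fundProgramName"),
              "|", award.get("poName"), "|", award.get("title"))
```

Scanning the program and program officer columns across a few keyword searches gives you a quick census of candidate places and people.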

NSF also has tools for searching funding opportunities and finding out about announcements from programs (which often contain information about funding opportunities). For instance, this sample search looking for CISE program announcements will give you a list of communications, including solicitations, FAQs, and Dear Colleague Letters, that someone believed were relevant to the CISE community. The volume can be pretty high, but it’s an easy scan/filter task, and finding a relevant opportunity you didn’t know about can be high value. In particular, new opportunities sometimes crop up, and being aware of ones that might fit you can give you a leg up versus people who are not aware of them [7].

I’ve also seen that it’s useful to be aware of executive branch research priorities, often articulated by the Office of Science and Technology Policy (OSTP), as well as NSF’s own strategic plans, activities, and announcements [8]. It turns out that many cross-cutting solicitations — often the larger ones in terms of dollars — come out subsequent to OSTP and NSF Director-level initiatives, suggesting that it makes sense to keep an eye out for new solicitations related to those topics [9].

Finally, asking colleagues in your intellectual spaces where they submit can also give you a sense of potentially interesting programs and program officers. Said colleagues will often have useful experience with and advice about interacting with them. More generally, junior folks often think they should figure everything out for themselves, but there’s a ton of value in working with more senior mentors on funding. This ranges from collaborating on proposals, to asking for thoughts on finding opportunities and fit of ideas to them, to getting specific feedback on specific proposal ideas and even drafts. People are busy but also often generous, and getting advice from colleagues and mentors is the number one thing I think junior faculty could do to get better faster at proposal writing.

Okay, now that you’ve identified some potential targets using the methods above, it’s time to dig more deeply into whether they really are fits.  Even if you’ve done the homework to look up official NSF program descriptions and awards made by that program in the past, and even if you ask colleagues, it can be hard to tell how well a particular proposal idea is going to fit a particular program because the official text of a solicitation only gives so much information.

One way to learn more about what a solicitation is about in practice is to search for (recent) awards made under it, assuming it’s not brand new. Many solicitations will have a link near the bottom of the page to help with this; there’s also an advanced search tool that can help you (among other things) find all the proposals funded by a specific solicitation, although you’ll need to find the right Program Element Code to narrow to a particular program/solicitation.
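If you went the API route sketched earlier, a rough approximation (client-side, reusing the hypothetical search_awards helper from above; the Program Element Code filter on the advanced search site is the more precise tool) is to filter keyword results down by funding program name:

```python
# Rough client-side filter on the earlier sketch: keep only awards whose
# funding program name mentions the program you care about (e.g., CHS).
# The advanced search site's Program Element Code filter is more precise.
def awards_in_program(keyword, program_substring):
    return [
        a for a in search_awards(keyword)
        if program_substring.lower() in (a.get("fundProgramName") or "").lower()
    ]

for award in awards_in_program("privacy", "Cyber-Human Systems")[:10]:
    print(award.get("date"), "|", award.get("title"))
```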

Your most likely source of information, though, is to email/talk with relevant program officers about whether your project ideas fit the programs they work with. They probably have the clearest sense of what a program’s goals are and how a project idea might fit them, often have a high level sense of how panelists might react to some aspect of a project idea, sometimes have deep expertise of their own they can bring to bear [10], and may also know other parts of NSF that could be interesting homes for a project idea [11]. Most program officers are also genuinely interested in mentoring, especially for junior researchers, so you should feel empowered to reach out to them.

It’s helpful to ground conversations with program officers in specific 1-2 page project writeups. Having a writeup in advance helps focus your own thinking and will also make interactions with program officers more efficient and effective [12]. These writeups might not be too different from an expanded project summary of the kind you might submit with a proposal, but should focus more on the specific questions, contributions, activities, and evaluations you’re considering, and less on generic “why it’s important” text. Thinking about Heilmeier’s Catechism for proposing research can be helpful here [13].

Once you have a passable version of that (it doesn’t have to be perfect), email it to the most relevant program officer you can think of in the most relevant program or two, based on the homework you’ve already done as described above. Note that solicitations often list multiple program officers, and different folks usually handle different subtopics/panels within a given solicitation. So, it’s best if you can identify one who handles awards related to your idea (whether in this solicitation or in general) and mail them. If you can’t tell who is best, the first person listed is often a “lead” for the solicitation and it’s reasonable to mail them and ask who else to ask. Don’t email all of them, especially individually; that’s wasteful and inconsiderate of their time.

You might ask them about their thoughts on fit to their own program(s) and other programs or program officers they might recommend, as well as any thoughts they have on the proposal itself or on framing it for panelists in their program. If you’re new enough to a program or to NSF that you don’t have a good feel for it, it might make sense to ask if you could have a talk where you ask more general questions as well as talk about the writeup.

Program officers will have different levels of responsiveness to these questions. Some are more willing to talk general program or NSF issues than others. Some try hard not to inject their own opinions on proposal content both to increase fairness (relative to other PIs not getting feedback) and in case their opinions are wrong. Some prefer to reduce their contact with PIs during the proposal process in general, with the goal of avoiding biases induced by having such contact, and may want to interact by email versus calls or in-person visits.

But, you should at least get a response about program fit, and my general sense is that NSF program officers are pretty generous in interacting with PIs. If you’ve been waiting more than a week [14], it’s legitimate to re-send the mail, or try a different program officer associated with the program. Don’t take it personally, or give up on the idea of contacting POs [15].

Another way to get a sense of a program, and connect to its program officers and reviewing community, is to serve as a panelist. I’ve written a separate blog post about that so I won’t say much here, except that serving is a great way to learn a lot about proposal writing and evaluation, to serve and represent your intellectual communities, and to meet both folks in those communities and program officers.

And I think that’s my story on this.  Hopefully this was useful for thinking about how to find places and people at NSF that might be good fits for you, and remember to look around for other thoughts on these topics.  A few that I bumped into while I was writing this are included below for your initial bonus amusement.

#30#

[1] There are also various administrative Offices at this level, but these don’t usually field many programs or solicitations, so I ignore them for simplicity.

[2] One of the things I’ve learned coming here as a rotating program officer is that NSF is less monolithic than you’d think. The high level structure of proposals, panels, etc., is mostly the same, and we have high level policy guidance, but practices can be quite different at every level from directorates to individual program officers.

[3] Yes, it’s awkward that the acronyms are close. There are a lot of acronyms here.

[4] DCLs vary widely; here are a couple of (expired) examples I’ve been involved with: one that solicited interdisciplinary SaTC proposals, and one that looked to advance citizen science research.

[5] For what it’s worth, I was not very good at this as a PI; I just submitted to CHS’s predecessor (Human-Centered Computing) a lot, although I had collaborators who were better at this game and wound up with some collaborative submissions to other solicitations. More generally, you should also look beyond NSF to other agencies, foundations, and industry; I wasn’t particularly good at that either so I won’t discuss that here.

[6] Or, names of PIs in your community who do the kind of research you do. Finding out where they get NSF funding could be pretty useful, and PIs are sometimes willing to share proposals, which can be super-helpful for understanding the genre of proposal writing [6′].

[6′] As can reviewing, which is good for both you and the community. See my post on how to become a reviewer for more.

[7] Another interesting aspect about new solicitations is that NSF solicitations in general have a bottom-up component. There’s also definitely a top-down strategic leadership idea behind them that the solicitation descriptions work to capture, but the proposals submitted and the panelists who review them help define them in practice. New solicitations may have a little more wiggle room in this sense because they don’t have this historical “in practice” momentum.

[8] Being involved in visioning workshops funded by NSF, the Computing Community Consortium (CCC), and other places that generate whitepapers, workshop reports, etc., about the state and future of a field or topic can be a way to have your own strategic impact along these lines.

[9] I wouldn’t spend space in your proposal, however, talking about how it aligns with some NSF goal or solicitation, and I especially wouldn’t quote solicitations. Whenever I see this, I think about how that space could be used to instead give compelling details about the project that could help convince panelists that the proposal is strong.

[10] Note that program officers often cover a broad range of topics, so although they will generally have a sense of the areas where they manage proposals, they will often not have personal deep research experience with specific topics. Two corollaries of that are (1) POs will be good at giving feedback about fit, but less well-positioned on average to give feedback about content, and (2) you should ask colleagues in the area for feedback on the content as you’re preparing proposals. Better to find out about something you missed before the panel than after.

[11] But, just as NSF program officers don’t know everything about every topic they manage proposals on, they also won’t know everything about the rest of NSF. It’s not so unlike being asked if you know a particular faculty member at your own institution. If they’re not close to your own department or research interests, probably not, unless you’re fairly senior or fairly outgoing/engaged and interact with other folks outside of the context of your own research.

[12] Sending a writeup in advance trades time explaining an idea on the phone for time discussing/getting feedback on ideas. Program officers aren’t infinitely busy, but they’re busy, and these explanations sometimes sound more like sales pitches, which are not very helpful. If the fit with a particular PO is not good, the writeup can help them suggest more appropriate folks to contact right away without you having to waste time waiting for an ultimately unproductive chat. If the fit is reasonable, seeing the writeup in advance lets them have more considered reactions than hearing it explained and reacting on the spot. Some program officers are also more comfortable and responsive responding in email than on the phone.

[13] At least in CHS and SaTC, two solicitations I’ve done a lot of work with, proposals often focus too much on an applied problem they’re looking to solve, or talk about general hoped-for impacts from the work, rather than the underlying research questions, contributions beyond existing knowledge, and specific impacts the project might achieve. Proposals that don’t make the research contributions clear are both usually dead in the water for panelists and very hard to reason about program fit for.

[14] Like academic life in general, program officer schedules can be bursty and time-bound. In addition to panels, which consume the better part of 5 days to organize and run and of which some program officers organize a couple dozen a year, POs travel to conferences, do internal and external service, and have other deadlines and responsibilities. A corollary of this is that it’s a good idea to make inquiries well in advance of submission deadlines.

[15] I had a pretty bad first couple of attempts to contact folks. What I now think happened in my case is that I mailed a program officer whose NSF rotation was ending, and they didn’t respond before they left and lost access to their mail, so the mail dropped on the floor. People also accidentally delete emails (I estimate my personal rate is about 1 in 300), and mail servers sometimes fail (a program officer once tried to mail me as a PI to make an award recommendation very late in the fiscal year, meaning there was little time to put it together, and Cornell’s email system spam filtered it away; fortunately for me they also called on the phone).

An idiosyncratic trip report from CHI 2017

I wanted to give some shout-outs and observations from stuff I saw [1] during my trip to CHI 2017 as an NSF rotating program officer. [2] I didn’t get to see that many talks because I spent a lot of time in NSF advice mode (including the NSF session that Chia Shen organized and Amy Baylor and I helped out with; slides from that are available), and those I did see tended to be in spaces where I’m not yet expert but where I am managing some proposals. That way of choosing sessions turned out to be productive: Ron Burt argued at CSCW 2013 that one should occasionally go into other communities, and I did get some interesting insights that I wanted to put out there for other people to consider. Stories below are in roughly chronological order.

Barry Brown gave a nice talk about the social and semantic shortcomings of self-driving cars. The high level point is that in driving, people send signals to each other all the time (not just middle fingers) that help coordinate driving behavior. These signals get sent both with the car’s body — we drift, we leave gaps, we close gaps, we turn the wheel just a little at a stop — and with our own — gaze, nods, frowns, waves (and sometimes those middle fingers). Further, we have driving norms that differ by road condition, location, and culture. His claim is that self-driving cars neither read nor send these signals well, and don’t obey these norms, because the way they “see” driving is primarily in terms of finding where to drive and avoiding collisions. This, in turn, will cause coordination problems with other drivers as well as lead the self-driving cars to tend to be taken advantage of because they are relatively cautious compared to human drivers. It made me think about a self-driving car trained in Texas (very accommodating drivers, on average) taking a trip to New York (not so much), about whether self-driving cars could cope with India city traffic, and about just how you’d give a self-driving car a little more semantic signaling and social grace. [3]

Huiyuan Zhou’s talk about their system “Block Party” also made a nice point about how common map interfaces (in particular, Google Maps) emphasize place and route selection at the expense of other use cases. In particular, Block Party aims at use cases like tourism and moving that require exploration, sensemaking, and discovery of places, which in turn benefit from the use of pictorial, situated views and tools for orientation. Google Maps has tools like Street View that support these activities, but the talk claimed they are too tucked away in the interface behind the primary tasks, so people tend not to use them. Evidence for this comes from a comparison between the features people use in Google Maps versus Block Party (which foregrounds these exploration-related features) when completing sensemaking tasks; Block Party users were more likely to explore situated views and remembered more about the neighborhoods they explored. This has some straight-up design implications about map interfaces having multiple modes. They also had an interesting speculation about cases where people are exploring a place together (such as a CHI lunch group trying to figure out where to go) that suggests interactions where multiple phones are yoked to present different views or support different parts of the sensemaking task. [4]

There was another paper in the same session, presented by Nancy Smith, around environment designs that are less centered around human needs and goals (in particular, there was motivation from the apparently-growing Animal-Computer Interaction community). I am less personally attuned to this paper, though it had some plausibly interesting theoretical grounding, but when Nancy claimed that human environments are over-engineered for human safety with respect to animals, it made me think about the Brown and Laurier paper’s claim that autonomous cars’ focus on safety might lead to other negative consequences. The parallel was interesting, and I wonder if it would be useful to think about other places where we’re doing that as well, either specifically around safety or around other values that are consistently over- or under-emphasized in design. [5]

One such value, which I think is over-claimed and under-implemented in general in CHI work, is that of human agency. [6] Thus, it was nice to see agency get center stage in Amanda Lazar‘s double-feature on designing tangible and sharing interfaces for people with cognitive impairments. Using a “critical dementia” theoretical framing that encourages us to think less of loss and impairment [7] and more of experiences and strengths, she’s done a lot of work to develop toolkits aimed at supporting dementia sufferers’ self-expression and connection with both family and formal caregivers. I wish there had been a stronger statement of how agency was reasoned about during the design process, as well as some discussion about possible risks to agency, but it was still cool and moving work. [8] [9]

There were also a couple of other nice little themes in that session. First, both Amanda’s talks and Anthony Hornof‘s work to design for people with Rett Syndrome (who have very severe cognitive and motor impairments) wound up pointing to worlds where flexible tooling might allow therapists, caregivers, and/or family to explore simple systems that could improve experiences and maybe agency for people with very individual needs that mainstream assistive technologies don’t address well. [10] Second, and related, is a theme about designing for caregivers and not just for the cared-for; this came out pretty strongly in Kellie Morrissey et al.’s paper about their attempt to build a mapping system that asked people to contribute information about the suitability of places for people with dementia. [11] It was a really nice session.

I also dropped in on a usable security session that was fun, if slightly wacky. [12] Yomna Abdelrahman and Mohamed Khamis gave a cute little talk about guessing phone pins and lock patterns using thermal imaging. It’s unclear if it’s a practical attack (especially if people immediately use the phone, messing up the thermal signature), but at least for simple pins and patterns it’s pretty effective if you can get a thermal image within 30 seconds or so. [14] Sauvik Das presented an interaction technique that used rhythmic tapping as a shared group password that can identify particular individuals in the group while rejecting attackers. I’m not sure I believe it’s the next big thing in authentication, as it feels like a lot of work for the benefit in a low-security situation. I did, however, like the underlying framing of “socially intelligent” security that calls attention to security requirements and goals in families and small groups. [15][16] Joshua Tan‘s paper also had a fun element, using a unicorn avatar generator to create pictorial rather than textual hashes of public keys with the hope that this would lead to more effective detection of adversarial imposters when using cryptophones. Not so much, it turns out, at least in this implementation and experimental context, but the problem of helping people reliably and easily verify key hashes is a good one. [17]

The Tan paper, along with one by Yun Huang, called out an important point I’ve been thinking about: how the way we frame problems shapes our ability to work on them and the impact we might have. The main problem in the Tan paper, for instance, wasn’t a security problem: whether people can reliably detect differences between a reference picture or text and a communicated one is a perception and cognition problem. They hadn’t really thought about it this way, and it might have been productive to get a cognitive psychologist in on this to help design the representations, the comparison interaction, or both. [18] Yun’s talk was about leveraging diverse abilities in crowds to support video captioning. The emphasis in the talk was on solving the video captioning problem, and it was a reasonable talk and approach: people with different levels of hearing and English fluency tend on average to do different captioning tasks, so divvy them up appropriately. For me, though, the general problem of developing good systems that maximize people’s ability to contribute is the more interesting bit, and a focus on that aspect might have made the talk more memorable. That might have also changed the methods from ones where people were binned fairly coarsely to ones where people’s actual behaviors were observed and used for maximizing outcomes. [20]

The Huang paper was part of a session on crowdsourcing where the first two papers invited plausibly-interesting parallels between crowdwork and other forms of work. Lynn Dombrowski talked about the problem of “wage theft”, i.e., low-income workers being systematically unpaid for work through employer practice or neglect. The paper was not itself about crowdwork, and in the talk there was some reasonable speculation about what technologies might do to support low-wage workers; still, it would have been useful to make explicit a number of implicit parallels to crowdwork platforms and how employer power and platform/legal policy increase these risks. [21] The second talk, by Ali Alkhatib, did look to make some explicit parallels between crowdwork and piecework. I was really happy that this talk did some definitional work (“crowdwork” is often used to mean everything from Wikipedia contribution to Turk to TaskRabbit), and I appreciated the laying out of the history of piecework [22] [23]. The talk was less clear about just how piecework should inform our thinking about crowdwork and other on-demand markets (there were some discussions of complexity that didn’t quite come through), but overall it was nice to see these papers trying to deconstruct work markets — and very relevant to NSF’s push on Work at the Human-Technology Frontier; see also a related Dear Colleague Letter soliciting workshops and research coordination networks on the topic.

Finally, I’d like to think about getting rid of conference keynotes. [24] In general I have pretty tepid responses to them, and the two I saw were no exception — especially frustrating since I thought both had promise but then left me a little empty. The Monday one, by Neri Oxman, started with a great premise: we’ve spent so much time thinking about parts and assembly, but HCI in general and the maker/fabrication/prototyping movement could really benefit from thinking about materials and form instead (including ones that are inspired by natural forms). I was excited to hear some deep thoughts about this, but the talk itself was more a portfolio of a lot of visually appealing projects without enough synthesis or useful takeaways for my taste. [26] The Wednesday one, by Wael Ghonim, had the key point that we need to take seriously the values that algorithms promote and design them to promote the values we care about. That’s a point I can get behind, but the talk was much too much about the problem, which I think this audience already has some sense of, and didn’t offer many concrete thoughts on ways forward: how might Quora or Facebook or Google News restructure algorithms and interactions to be better? [27] Even wrong or incomplete speculations I think would have gotten people’s juices flowing.

And that is most of what I have to say about CHI this year (plus this post is impossibly long), so I’ll stop. It was big fun and I want to thank the organizers, sponsors, authors, and other participants for making it possible, and I imagine I’ll be back next year.

# 30 #

[1] I encourage other folks to write similar reports to call attention to things they liked at the conference. Asking people to pay attention to your own stuff isn’t bad (I wrote a note asking people to read this, mea culpa!), but there’s real personal, relational, and community value in highlighting good stuff from other people.

[2] The views expressed in this post are solely my own and do not represent those of my Foundational overlords.

[3] Barry Brown and Eric Laurier. 2017. The Trouble with Autopilots: Assisted and Autonomous Driving on the Social Road. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 416-429. DOI: https://doi.org/10.1145/3025453.3025462

[4] Huiyuan Zhou, Aisha Edrah, Bonnie MacKay, and Derek Reilly. 2017. Block Party: Synchronized Planning and Navigation Views for Neighbourhood Expeditions. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 1702-1713. DOI: https://doi.org/10.1145/3025453.3026035

[5] Nancy Smith, Shaowen Bardzell, and Jeffrey Bardzell. 2017. Designing for Cohabitation: Naturecultures, Hybrids, and Decentering the Human in Design. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 1714-1725. DOI: https://doi.org/10.1145/3025453.3025948

[6] Systems such as recommender systems and other filtering technologies, or behavioral support and persuasive technologies, or machine learning decision-making and interactive agents, should be thinking a lot more about agency than they are. If I were ejected back into the research world tomorrow, I’m pretty sure that thinking about how to better define and reason about agency in both design processes and algorithms would be my big next research direction.

[7] I’ve always had a soft spot for assistive technology work, though have always been afraid to do it myself because I’m not sure I’d have the emotional chutzpah to work closely with folks who live with these impairments. This critical dementia framing is a useful counter to that.

[8] Amanda Lazar, Caroline Edasis, and Anne Marie Piper. 2017. Supporting People with Dementia in Digital Social Sharing. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 2149-2162. DOI: https://doi.org/10.1145/3025453.3025586

[9] Amanda Lazar, Caroline Edasis, and Anne Marie Piper. 2017. A Critical Lens on Dementia and Design in HCI. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 2175-2188. DOI: https://doi.org/10.1145/3025453.3025522

[10] Anthony Hornof, Haley Whitman, Marah Sutherland, Samuel Gerendasy, and Joanna McGrenere. 2017. Designing for the “Universe of One”: Personalized Interactive Media Systems for People with the Severe Cognitive Impairment Associated with Rett Syndrome. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 2137-2148. DOI: https://doi.org/10.1145/3025453.3025904

[11] Kellie Morrissey, Andrew Garbett, Peter Wright, Patrick Olivier, Edward Ian Jenkins, and Katie Brittain. 2017. Care and Connect: Exploring Dementia-Friendliness Through an Online Community Commissioning Platform. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 2163-2174. DOI: https://doi.org/10.1145/3025453.3025732

[12] Blase Ur‘s talk about designing well-grounded and educational password meters was less wacky but quite solid. There’s a well-justified decision to estimate password strength using a relatively compact neural net that can run on the client, but since those are often hard to interpret, to provide educational explanations and suggestions using a rule-based password parser; this ‘rationalization’ type of explanation can make a lot of sense. The experimental design was solid and the finding that people created stronger but just as memorable passwords was nice, though at a slight cost of user satisfaction because the feedback imposed cognitive load. It was also one of the clearest and best-designed talks I’ve seen in a while. [13]

[13] Blase Ur, Felicia Alfieri, Maung Aung, Lujo Bauer, Nicolas Christin, Jessica Colnago, Lorrie Faith Cranor, Henry Dixon, Pardis Emami Naeini, Hana Habib, Noah Johnson, and William Melicher. 2017. Design and Evaluation of a Data-Driven Password Meter. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 3775-3786. DOI: https://doi.org/10.1145/3025453.3026050

[14] Yomna Abdelrahman, Mohamed Khamis, Stefan Schneegass, and Florian Alt. 2017. Stay Cool! Understanding Thermal Attacks on Mobile-based User Authentication. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 3751-3763. DOI: https://doi.org/10.1145/3025453.3025461

[15] Thinking about usable privacy and security above the level of individuals but below the level of large organizations is one of former NSF/SaTC program officer Heng Xu‘s big pushes, a good one I think.

[16] Sauvik Das, Gierad Laput, Chris Harrison, and Jason I. Hong. 2017. Thumprint: Socially-Inclusive Local Group Authentication Through Shared Secret Knocks. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 3764-3774. DOI: https://doi.org/10.1145/3025453.3025991

[17] Joshua Tan, Lujo Bauer, Joseph Bonneau, Lorrie Faith Cranor, Jeremy Thomas, and Blase Ur. 2017. Can Unicorns Help Users Compare Crypto Key Fingerprints?. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 3787-3798. DOI: https://doi.org/10.1145/3025453.3025733

[18] There’s a pernicious problem in CHI (and, many other disciplines) about not effectively engaging other domains. Joe Marshall had an alt.chi talk (which I did not see) about this [19], and Liz Murnane focused in on it for her dissertation. I’ll add one observation to this, which is that a number of our favorite methods (including grounded theory, user centered design, and machine learning) are often badly applied in ways that encourage us to ignore what is already known in our own and other fields, which in turn limits our ability to advance the conversation. Hopefully there will be a useful blog post about this down the road.

[19] Joe Marshall, Conor Linehan, Jocelyn C. Spence, and Stefan Rennick Egglestone. 2017. A Little Respect: Four Case Studies of HCI’s Disregard for Other Disciplines. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’17). ACM, New York, NY, USA, 848-857. DOI: https://doi.org/10.1145/3027063.3052752

[20] Yun Huang, Yifeng Huang, Na Xue, and Jeffrey P. Bigham. 2017. Leveraging Complementary Contributions of Different Workers for Efficient Crowdsourcing of Video Captions. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 4617-4626. DOI: https://doi.org/10.1145/3025453.3026032

[21] Lynn Dombrowski, Adriana Alvarado Garcia, and Jessica Despard. 2017. Low-Wage Precarious Workers’ Sociotechnical Practices Working Towards Addressing Wage Theft. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 4585-4598. DOI: https://doi.org/10.1145/3025453.3025633

[22] I was, in fact, essentially a pieceworker for about 3 years during and after undergrad, working for a bank typing dollar amounts onto checks just as fast and accurately as I could and getting my hourly rate set by my typing rate. I’ve also seen some amount of wage theft as an hourly employee at a wide variety of jobs (3 years at McDonalds, 6 months as a dishwasher, another 6 as a weekend night auditor at a hotel, 4 months taking phone orders for pizza, credit cards, and most incongruously given how little I knew/know about lingerie, Victoria’s Secret).

[23] Ali Alkhatib, Michael S. Bernstein, and Margaret Levi. 2017. Examining Crowd Work and Gig Work Through The Historical Lens of Piecework. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 4599-4616. DOI: https://doi.org/10.1145/3025453.3025974

[24] Maybe one could start the average conference instead with a welcome and a big poster session where everyone started interacting with and meeting each other right away. [25]

[25] I do realize this means less drinking at the poster sessions.

[26] This talk style of too many projects, not enough synthesis tends to be more common with people who are either (a) from portfolio-oriented disciplines, where I think this is a little more of a norm, or (b) senior folks who have done a lot of work and are more of a mind to show breadth rather than carve out a deep path through it. Both are hard on talk consumers, who could really use work by the speakers to carve out the takeaways. It’ll be interesting to see if I inflict the same pain as I get (even) older.

[27] There were also some inconsistencies here around paternalism, values, and agency: there’s a delicate balancing act between not wanting platforms to arbitrate truth but also wanting them to encourage discussions that are “objective”. This goes back to the need to think about agency and whose values are being supported. I’ll also point out that focusing on “fake news” or “misinformation” risks leading us toward positions that if we just find the true news and mitigate misinformation, everything is going to be All Better. Unlikely. These stories take the form of news, but what they’re really doing is expressing and reinforcing values, claiming and recruiting group membership, and defending friends and attacking opponents. Serious work in this space is going to have to engage with the idea that not all policy discourse or political or personal values are grounded in fact-based deliberation.

Increasing your chance of serving on an NSF panel

tl/dr: How do you get on an NSF panel? Ask:

  • Program officers who often review proposals in your area.
  • With enough info about yourself to help them think about your expertise.
  • At times that they’re looking for panelists so that it’s salient.
  • (And, let your senior colleagues know you’re interested too.)

More details:

One of the questions I get as an NSF program officer [1] is “how do I get invited to be on a panel?” [2] One high level answer is that you ask [4] — easy, right? But there are some aspects about how you ask that might increase your chances, and that’s what this post is about.

First, you should ask the right people and programs at NSF [5]. You can get some feel for this through asking your own colleagues. Another strategy is to use NSF’s Award Search tools to find programs and program officers who tend to administer awards close to your own heart. Search using terms you’d expect to see, and click through to find details about the awarding program and managing program officer. [6]

Once you’ve found a good candidate or two, drop them an email. Tell them you’re interested in paneling, along with a bit about you: who and where you are, how long you’ve been there, your expertise (some keywords, a short bio para about your research interests, and your home conferences/journals/research communities are all useful) [7]. Listing a web page and attaching a CV can also help people think about who you are and how you fit.

The timing of the mail may also help. My own evolving observation is that I’m looking for panelists — and know roughly what other program officers might be looking for — about a week after any given submission deadline [9]. Sending such a mail right after the deadline for a solicitation you have something in common with (so you’re a potentially qualified reviewer) but didn’t submit to this year (so you don’t have a conflict of interest [10]) might make your request especially salient [11].

Finally, it doesn’t hurt to let your intellectually related senior colleagues know that you’re itching to serve on a panel [12]. In practice, we often ask more senior folks to serve first [13], and they often decline [14]. They’ll sometimes volunteer alternate names (or tell me some when I ask if they know anyone who might be a good panelist instead), so that’s one more route to Arlington [15].

That’s what I’ve got about how to increase your chances of serving on a panel. Included below the footnotes are a few links from other program officers and panelists about how to get on a panel, how they work, and why you should; hopefully, they (and this) are useful. And, hope to see you on a panel sometime soon.

#30#

[1] Disclaimer: my thoughts and opinions in this post are entirely my own and in no way are meant to represent official positions of my NSF overlords or NSF itself.

[2] You absolutely should serve on panels [3]: it provides insight into and confidence in the reviewing process that can help your own proposing; it’s good service to the intellectual community and the country; you get to meet other interesting folks and chat with program officers; and like other kinds of reviewing, it’s one of the ways you are part of the conversation about how your field evolves and the gift economy of academia.

[3] Not too many, though. My panel experience is that it’s 3-4 hours per proposal I review (a common load in CISE is ~7-9 reviews), plus another 3-4 hours of logistics, plus 2-3 full days of travel + panel work (which adds up to roughly a week or more of work per panel). One per year is probably good, solid service; more if you’re a frequent submitter, less if you submit less.

[4] Some programs/solicitations will send out broad surveys of availability and expertise to a large set of candidate panelists. Filling those out is another way to be on the radar.

[5] Everything in this paragraph also applies to thinking about where to _submit_ proposals, BTW.

[6] This will also help you get a broad picture of “what gets funded”, as well as discovering programs that might fit you but that you never knew about. I was pretty bad at this as a PI.

[7] Different people at NSF use different kinds of info [8] to help classify people and proposals — some use keywords, some think about main contribution venues, some use self-descriptions. So having a bit of each is not bad.

[8] One of my great surprises when I came here was that different directorates, divisions, programs, and people do things differently. There’s some high level agreement and policy, but lots of local variation.

[9] Other program officers may have different practices — see [8]!

[10] Conflict rules vary depending on the solicitation; in general, the more money or fewer proposals are involved, the more strict the conflict rules. For instance, for the CISE Research Infrastructure (CRI) program, you can’t have panelists from any institution involved with any proposal on a given panel.

[11] Our tools for finding panelists are not great, which induces a temptation to rely on your existing knowledge and network, which often leads to choosing repeat panelists and awardees you are familiar with at the expense of newer folks.

[12] Also worth doing this to encourage them to suggest you for program committees and conference organizing committees, which is good for burrowing into your research community.

[13] Particularly for larger competitions and things like CAREER and CRII proposals, where their relatively broader vision and greater experience are a win. And, [11]. And [8].

[14] My overall hit rate so far is that maybe 33% of folks I ask say yes.

[15] Or, starting in late 2017 assuming all goes as planned, Alexandria — NSF is moving.

A couple of thoughts from NSF itself and from former program officers:

Community members discussing the why (and sometimes the how) of panels — but remember [8]:

NSF is not the only funding agency, so a couple more guides that talk about other agencies:

Thoughts on #recsys2016 people recommendation tutorial

Rather than a tweetstorm, here’s one post with some reactions to the people recommendation tutorial at #recsys2016 by Ido Guy (Yahoo Research, Israel) and Luiz Pizzato (Commonwealth Bank of Australia, Australia), primarily Luiz’s part.

Not sure people recommendation has to be symmetrical; it would be interesting to ponder use cases where that’s not true (Twitter is the one that comes first to mind).
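(To make the symmetric/asymmetric distinction concrete: reciprocal recommenders in the dating and job-matching literature, including Pizzato and colleagues’ earlier work, typically combine the two directional preference scores so that a lopsided pair scores poorly, while a follow-style system can rank on one direction alone. A quick sketch of that difference follows; the prefers() function is a hypothetical stand-in for whatever preference model you have, and the harmonic mean is just one common combination choice.)

    # Sketch: one-directional vs. reciprocal people-recommendation scoring.
    # `prefers(a, b)` is a hypothetical placeholder for any model of how
    # much user a would like user b, returning a value in [0, 1].

    def one_directional_score(prefers, a, b):
        # Follow-style (think Twitter): only a's interest in b matters.
        return prefers(a, b)

    def reciprocal_score(prefers, a, b):
        # Dating/job-style: both directions matter. The harmonic mean
        # drags the score toward zero if either side is uninterested.
        p_ab, p_ba = prefers(a, b), prefers(b, a)
        if p_ab + p_ba == 0:
            return 0.0
        return 2 * p_ab * p_ba / (p_ab + p_ba)

    # Example: a likes b a lot, but b is lukewarm about a.
    prefers = lambda x, y: {("a", "b"): 0.9, ("b", "a"): 0.1}.get((x, y), 0.0)
    print(one_directional_score(prefers, "a", "b"))  # 0.9
    print(reciprocal_score(prefers, "a", "b"))       # 0.18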

The point that success often gets measured outside the system is a good one — the example of a dating site being successful when people stop using it because they matched was cute. Victoria Sosik, Steve Ibara, and Lindsay Reynolds thought about that in persuasive systems, and Suhonen et al.’s Kassi paper considered it in barter and social exchange systems.

Random side thought: wonder about designing not just a dating site but a relationship one, where you might use the site to help continue and develop the relationship long term. Doug Zytko was thinking about this a while ago.

Claim that successful people recommendations are ones that lead to interaction is a little overstated maybe. For instance, one potential benefit of “unsuccessful” recs that don’t lead to interaction is learning more about the space of possible people — what is this social circle or company or population of mates _like_?

Point that unsuccessful people recommendations can have psychological effects was nice, as was the idea that people might be more inclined to be makers or receivers of people recommendations, and that varies by context…

The point that people might become less picky over time if early attempts on dates, jobs, etc. don’t pay off (or more picky if you have much success) was interesting, but is that algorithm-actionable? Or is that more about how far people are willing to go down a list of recommendations?

Fraud concern feels a little overstated, even with the legitimate threats of fudgers, liars, scammers… Reminds me a little of the obligatory “preventing collusion” section in crowdsourcing papers.

Why I’m rotating at NSF

tl/dr: Being a temporary program officer at NSF comes with real job and life tradeoffs; for me, the job tradeoffs around learning, service, and impact felt good, and the life timing turned out to be surprisingly good. So, I took a chance, and I’ll see some of you at NSF as panelists and others at conferences wearing my NSF hat over the next couple of years.

More details:

Last month I started a new academic adventure, as a rotating (temporary) program director (PD) at the National Science Foundation (NSF) [1], in a program called Cyber-Human Systems (CHS) [2]. Some people might wonder why and how this came to be, either out of curiosity or because they, too, might consider it someday [3].

First, some background on being a rotator: NSF regularly brings in outside folks for fresh ideas, energy, and connections to emerging/priority intellectual communities and fields. These assignments typically run about two years [4], occasionally longer, and you apparently do most everything permanent folks do: run panels, make funding recommendations, administer existing grants, collaborate with other programs and other funding agencies, and presumably things I’m still not aware of.

Now, the “why”. One answer is that I’ve thought about doing this for a long time. I found panels and proposal reviewing fun, and former program officers suggested that I might be good at it [5], so it’s been floating around in my head. I put together my own experiences as a panelist with readings of NSF’s own materials [3] and the testimonials of former officers [6], then had a number of conversations with former PDs, folks in the greater CHI/CSCW community, and people at Cornell. This all added up to me seeing real benefits (with some tradeoffs) around learning, service, and impact as a program officer.

I was pretty sure I would learn a lot, both about the broader field and about NSF itself. There’s a lot of territory in CHI and CSCW I don’t see so much, and I figured this would put me on a collision course with new spaces and give me a great bigger-picture view. Further, having more intimate knowledge of how the sausage gets made [7] at NSF was appealing both for its own sake and as a practical tool that, along with the perspective, would benefit both me and Cornell down the road. The downside risk for me is that I’m pretty broad already, and in some worlds the job responsibilities might encourage an awkward breadth-depth tradeoff.

Service. I like helping others — reviewing, commenting, advising, organizing, providing opportunity — and this is clearly a venue for that. Good reviews and process sometimes help PIs push on their ideas [8]; I’ll have plenty of chances to interact with PIs directly [9]; choosing and mentoring junior panelists can help them grow in their careers [10]. All of this has real import for a lot of people. The downside risk here is that the responsibilities trade off with doing your own research, and pretty much everyone said that productivity goes down during (and if you’re not careful, after) the rotation. NSF does give PDs up to 50 days a year of Individual Research and Development (IR/D) time to work on their own stuff, but that’s still much less research time than I was spending over the last several years.

Impact. I think I have the chance to have real impact, both in the small around particular proposal decisions and in the medium around encouraging kinds of work that I think are important. In the small, there are usually more awesome proposals to fund than dollars to fund them; panel input is taken quite seriously, but program directors still make plenty of decisions about which of the good set to recommend for funding [11]. In the medium, you get to interact with folks at NSF and other agencies and try to convince them to allocate money in directions you think are important [12]. This might require more people skills than I have, but we’ll see. On the downside, as described above, your direct research impact is likely to go down for a while. A few people also suggested that there might be value in waiting and taking a more senior temporary position instead of rotating now (perhaps as a division director rather than a program director, or in agencies where individual program directors have more individual power).

In the end, I think the benefits beat the costs for me in the abstract, which brings me to the concrete “how it happened”. I’ve been told that there is an NSF policy that you need to be at least six years research-active past the PhD, so although I had pondered it, it didn’t become seriously plausible until about 2013.

In fall of 2014 Kevin Crowston’s rotation [13] was scheduled to end, and someone asked if I might be interested in trying out for the team. At the time I said no; Lindsay and I had just bought a house, we were getting married in a couple of months, and I was seriously thinking about what to do for a sabbatical after not planning one before reaching tenure in 2013 [14].

There are lots of reasons why someone would be interested-but-not-willing to do the job at any given time. People I’ve discussed this with mention a number of them: kids and schools; geographical preferences and spousal prospects; having a lot of students, collaborators, or projects; timing around promotions or lack of support at the home institution [15]; and general risk aversion or different weighings of the values and risks I discussed earlier. All of these sound like good reasons to try it out later, or never.

But in July 2015 at the CSST summer institute [16], I heard that they hadn’t yet found someone to replace Kevin. When I ran the decision process again, a few new things bubbled up that made it seem much more plausible.

On the personal side, the new home argument didn’t seem as critical after having lived there for a bit. Talking about the sabbatical had gotten Lindsay excited about trying a new place [17] and her job is portable, making that an upcheck. We also wondered if a longish move now would be better than, say, in 5-10-15 years when our hypothetical kids were in school [18].

On the job side, Cornell Information Science has been growing, meaning my leaving for a while would be less of a burden for the department [19]. A couple of students had recently graduated and most of the others were pretty far along, so on balance I didn’t feel like I would be leaving them in the lurch [20]. I had a pretty positive outlook on the cost-benefit tradeoffs described above. Finally, I was just “called” to it [21].

And there you have it. There was an interview and administrative process that is vaguely interesting, and the finances are mysterious-but-possibly-beneficial [22], but I still don’t fully understand either well enough to write about them yet — and this post is quite long enough. So I’m done, except to remind you that if you’re interested in serving on a panel — or talking about rotating — at some point, let me know.

#30#

[1] In this, and in all future posts, the views represented are entirely my own, not those of NSF itself nor my NSF overlords. For instance, when I say “my NSF overlords”, I’m pretty sure that’s not how they’d put it. Pretty… sure.

[2] I’m still getting used to NSF structure myself, so, the full story: NSF has directorates that oversee major scientific and administrative responsibilities at the Foundation; the directorate CHS is in is called Computer & Information Science & Engineering (CISE). Inside of directorates are divisions; the division CHS is in is called Information and Intelligent Systems (IIS). So, CHS -> IIS -> CISE -> NSF.

[3] NSF has a part of the website devoted to info about being a rotator.

[4] I won’t talk much about the logistics of how it works because there are different paths; my own is a program called the Intergovernmental Personnel Act (IPA), in which you’re technically still employed by your home institution and you return to it after you leave.

[5] Some general criteria former rotators have mentioned, if you want to run a self-test: scientifically good and well-connected to one’s research community (i.e., some street cred); open to but also willing to critically evaluate many ideas and methods (i.e., no axes to grind); putting in real effort at the reviewing task and executing it with competence and timeliness (i.e., you’re reliable); a proven record of community service and working reasonably well with others (i.e., you’re not an asshole).

[6] See, for instance, stories from Doug Fisher and Michelle Elekonich.

[7] Mmm, sausage.

[8] It doesn’t always feel that way when the decline letter comes around. I’ll be curious how it feels to be on the other side; former PD Wayne Lutters described real sadness around declines of worthy proposals.

[9] Here, I’ll need to be careful to keep boundaries, both for fairness and for sanity reasons. One boundary is around investing too much time giving advice about proposals; I think I will find this fun, but that in turn makes it a bit dangerous. Another is around managing cases where people are angry at reviews, reviewing, and/or me. I’ve been told this is not so common, but that it does happen.

[10] Hopefully, I’ll be able to have a positive impact here around diversity of demographics, perspectives, institutions, etc. And if you want to serve on a panel sometime, let me know; doing and seeing proposal reviews can be really helpful in your own proposal writing — and is also great service and a chance for impact.

[11] So, going back to what makes for an effective rotator: although you shouldn’t have an axe to grind, I’ve been told that you should have some level of vision, and be open to opportunities to encourage it.

[12] Note “recommend”: program directors make recommendations, usually in consultation with other program directors in their programs, about which proposals to fund. Division directors actually have to approve the recommendations, and the final award is actually made by a different part of the Foundation. So when you get that mail about being recommended for funding, remember that it’s “quite probably” but not “slam dunk”.

[13] Who replaced Sue Fussell, who replaced David McDonald, who replaced Wayne Lutters… it’s somewhat Biblical in that way. I’m told that a common tradition is to be called “the new X” where X = you minus 1. So, I’m “the new Kevin Crowston”, I suppose.

[14] I really wasn’t counting my tenure chickens, and there was a lot of luck along the way.

[15] I am really grateful to Cornell, in particular to Jon Kleinberg and now Thorsten Joachims in the IS chair role and to Greg Morrisett as Computing and Information Science dean, for having my back on this.

[16] Nerd camp for people who think that both the social and technical aspects of systems are important to consider when doing either research or development; see the CSST website.

[17] DC is not California, which was the original plan, but so far she still seems excited and happy.

[18] Much less hypothetical now that Gracie has arrived. The timing was pretty funny: I interviewed September 10, got a tentative expression of interest from NSF on the 17th, Lindsay peed on the stick on the 21st, and the CHI deadline was the 25th. So, that was two eventful weeks.

[19] It still wasn’t a light decision on this front for me. Departmental service is awkward as a program officer both because you’re not at your university (with the logistical and interactional issues that come with that) and because you can’t use your NSF-allotted research time for service (i.e., you do it on evenings and weekends). Many academics are no stranger to either of these, but Jon pointed out that leaves really are leaves, and that there’s value in fully committing to new things.

[20] I hadn’t been bringing in students the last couple of years because of a funding dry spell. Conspiracy theorists might speculate that this was an NSF plot, but that’s wrong: I just didn’t have the right proposal sauce for a bit.

[21] Jon described a similar feeling about the Networks, Crowds, and Markets textbook he co-wrote with David Easley a few years back, which I think is part of what led him to support this adventure.

[22] At a high level, your salary is annualized (i.e., I get 3 months of summer salary) and you get a good amount of financial support for expenses for research travel, including to your home institution while you’re there, as well as for moving to/residing in the DC area. It’s not bad.

Tenure and luck

I recently got official notice that I have tenure from Cornell [1]. With competition fierce for tenure-track jobs, I’m keenly aware that someone else might be writing this blog post right now [2]. And, though skill and hard work played a role, I want to acknowledge and call out the role of luck, circumstance, and coincidence in how I got here [3] — much of which was the result of other people.

I wouldn’t be so happy at Cornell or willing to stay if Lindsay Benoit [4] didn’t like Ithaca so much after moving here (2011).

I wouldn’t be as well known in my research community as a contributing member except for François Guimbretière and Sue Fussell inviting me to serve on PCs they were running shortly after they got hired here [5]. (2009-2010)

I wouldn’t have been hired by Information Science at Cornell except that my postdoc here gave me the chance to work with tons of folks in the Networks Project at the Institute for Social Sciences [6]. (2008)

I might have been hired in Communication instead of IS if Sue Fussell hadn’t applied to Comm the same year I did [7]. (2007)

I wouldn’t have applied for the postdoc, except that labmate Sean McNee from Minnesota met Sadat Shami, PhD student with Geri Gay at Cornell, at a late night CHI party where Sadat told Sean I should apply [8]. (2006)

I wouldn’t have even been able to apply for that postdoc with Geri, except that Louise Barkhuus had to turn it down late in the game to manage a two-body problem [9]. (2006)

I wouldn’t have moved into my niche in the socio-technical gap [10] without John Riedl and Joe Konstan at Minnesota, Paul Resnick and Yan Chen at Michigan, and Bob Kraut and Sara Kiesler at Carnegie Mellon collaborating on a grant while I was a student [11] that brought social science, design, recommender systems, and online communities together. (2003)

I wouldn’t have had a CV that looked postdoc-worthy if I hadn’t been lucky to have a high hit rate of papers as a student [12] and an awesome group of folks to collaborate with at Minnesota [13]. (2000-2006)

I wouldn’t have applied at Minnesota except that I had bumped into recommender systems as part of my masters thesis research [14] and thought they were cool. (1998-1999)

I wouldn’t have applied for a PhD at all except that James Madison University needed a CS instructor right after I graduated from the masters and they trusted me to do it [15]. (1998-2000)

I wouldn’t have thought of James Madison except that Sue Bender [16] had gone there for her undergrad, and wouldn’t have been able to go except that they were willing to fund an untested music ed major as a CS grad student [17]. (1996)

I wouldn’t have gone back to school for a CS degree if I hadn’t gotten a job as the one-man computer band for Progressive Medical Inc.: hardware, helpdesk, and network guy, plus maintaining a custom COBOL database [18]. (1995)

I wouldn’t have gotten that job except that David Bianconi (of Progressive Medical) got a recommendation to ask me from someone at Fifth Third Bank who I tried to help with installing a modem [19], and who remembered that when David was looking for someone to take over the tech side of the business a year later. (1994)

I wouldn’t have been working at Fifth Third except that in student teaching, seventh graders proved to be too dangerous for me to handle when armed with musical instruments [20] — and that Sue had gotten me interested in temp jobs, which is how I got hired there. (1993)

I wouldn’t have met Sue except that a traveling concert band at Ohio State needed two replacements for an overnight trip, who were me and her [21]. (1991)

And, I would never have had the skills to be interested in CS except that my dad somehow knew that he should buy me [22] a TRS-80 Model I [23] when I was 7. (1978)

There are also tons of people [24] and groups to acknowledge: parents for putting me in a position to be able to do this [25]; immediate family, notably Sue and Lindsay, for putting up with all the irregular schedule crap that comes along with having both great flexibility and responsibility in academic jobs; collaborators, co-authors, and mentors around research and teaching [26]; the folks who make the computational and bureaucratic systems that I worked with run well; students who testified that I’m not a total teaching loser; people who’ve trusted me with money along the way (largely NSF); participants who made the studies possible and organizations like Facebook and Wikipedia that have given me interesting contexts to study and tinker with.

I’ve probably left both some people and some luck out, but I think these are the highlights. Not all of these are necessarily for the better. In 2006 if I hadn’t gotten this postdoc I might have wound up at PARC or Drexel [27] and those could have wound up great too; maybe I would have been super-successful in Comm; teaching music might have been an even better life.

But it’s been a good ride, and to go back to my original point, a lucky and contingent one. My list is pretty long but I bet if you asked around, a lot of successful people would have their own stories of coincidence, luck, opportunity, and timing [28]. If you have some of your own to share, I’d be happy to hear them.

It’s probably not much comfort in the moment of a paper rejection, a turn-down from a school, an interview that goes badly [29] — but I’ve found that as I’ve become more mindful of the role circumstances play in life, I’ve mostly been happier about things no matter how they turn out. Hopefully reading this was useful for you, too.

#30#

[1] It says so right in our Workday system, which I checked on the day the letter promised it would be official. Even at the end I figured it might all be a mistake.

[2] My guess is that many successful people in academia have similar non-linear, luck-filled trajectories; we have a tendency to attribute good to ourselves and bad to the world but it’s nice to be honest sometimes.

[3] I am also influenced to do this by stories about the prevalence of adjunct and alt-academic jobs in the world. I don’t know what my orientation should be toward this, but it’s a real issue that many folks who come to grad school picturing an R1 position don’t wind up there.

[4] Current fiancee, to be married in November (in Austin, largely because we liked it as a mini-vacation after CHI 2012. Circumstance, indeed.)

[5] Serving on PCs and as a reviewer, by the way, is a real eye-opener if you haven’t done it already.

[6] There were a ton of good candidates in the IS search that year: Krzysztof Gajos, Tovi Grossman, Richard Davis, and Julie Kientz. All of them looked at least as good as me on paper, and without both the learning from and the collaboration with the folks at ISS it’s unclear I would have even gotten an interview. Plus, I met Ted Welser and Laura Black through that and, among other things, learned about poker from them. Geri hooked me up with that group, another thing to be thankful for.

[7] I still remember Gilly Leshed telling me that she’d heard someone senior was applying for the comm job that year and being pretty sad. And, as with [6], this is a “probably” (I might not have gotten the comm job either way).

[8] I had seen the ad for the postdoc at the conference, but figured I wouldn’t be good enough for Cornell. I still have serious issues with impostor syndrome.

[9] I remember chatting with her about this last year at CHI and thinking that I was pretty lucky, and also about how we all have to make choices around balancing family and career on a regular basis.

[10] $1 to Mark Ackerman.

[11] My only regret from that is that I wish I’d gotten to spend a semester at one of the other places to see a different look at things.

[12] You need to be lucky enough to get some papers accepted and to become well-known in the community as a grad student. I had more of the first than the second, largely because I was pretty bad at meeting people and networking. Students: read Phil Agre’s Networking on the Network. Soon.

[13] The fact that GroupLens was structured around a set of common problems and encouraged collaboration between grad students was a perfect fit for how I do things (though, I suppose it also shaped it).

[14] I still remember getting the comments back on the draft from Christopher Fox, that the work was good but “the tone was inappropriate for a scholarly monograph”. Judge for yourself (section 1.4 is particularly choice). And, some things don’t change: I got essentially the same comment from our grant office about an internal pre-proposal for a National Research Traineeship grant. It’s too bad: the grant would have been in part about the management, method, and ethics around doing social science research with social media datasets. Timely, that.

[15] And then realized that if you want to teach at a university in the long term, you more or less need a PhD except for some smaller places — and even that has become much less common than it was in 2000.

[16] Sue and I were married for 17 years.

[17] The princely sum of $5,500 a year, which was not quite enough to live on in Harrisonburg, Virginia, but pretty close.

[18] I still have a fond place in my heart for both COBOL and maintenance programming.

[19] Failing miserably, it turned out. I did also help them with some custom Access database development that must have gone better, although I didn’t know any more about Access than I did about networking or COBOL when I started.

[20] The high schoolers weren’t that much better for me. Trumpet divas and the “suck band”. I think I’d have a fighting chance now but at 22 I was no match.

[21] I played one of the loudest wrong notes in recorded history in Jackson, Ohio.

[22] To be fair, he might have bought it in part for himself, too; he had some gadget in him.

[23] Four K of memory and a cassette drive. Feel the power of the TRS-80 Model I!

[24] Plus all the people already mentioned, and others whom I have not mentioned, for narrative or memory-failure reasons. To folks I missed: I am sorry for not listing you.

[25] Going bankrupt in the process.

[26] Special academic shouts out to Geri, Jeff Hancock, and Jon Kleinberg at Cornell; John, Loren Terveen, and Joe at Minnesota; and Mark Lattanzi and Chris at James Madison.

[27] Where I’d be The Senior HCI dude now, which is a little scary. They’ve really built some nice momentum there.

[28] I wish I could write a good blog post about how to increase your chances of those things; maybe someday.

[29] I have some stories about that, too. A future post, perhaps.