Personal trip report thoughts on SOUPS 2018

I wrote a trip report on SOUPS 2018 (the Symposium on Usable Security and Privacy) for other folks at NSF since NSF paid for it, and I thought I would go ahead and share a lightly edited version of it more widely because I like to call out other people’s interesting work with the hope that more people see it. As always, the views in this post are mine alone and do not represent those of my NSF overlords.

SOUPS, founded and oft-hosted by Carnegie Mellon, is historically a good conference focused on the human and design side of security and privacy in systems; here’s the 2018 SOUPS program, for reference. I’m a relative newcomer to SOUPS, having only attended since 2017 in my role in NSF’s Secure and Trustworthy Cyberspace program. So, this may be a bit of an outsider view — perhaps not so bad to get from time to time. I’ll structure the report in three main bits: first, to highlight a couple of themes I liked that were represented well by particular sessions; second, to note some other papers I saw that triggered pleasant paper-specific reactions; and third, to gripe a bit about a wider CHI problem that I also felt some of at SOUPS this year and last, that many papers are too focused on particular new/novel contexts and not enough on learning from past work and building generalizable, cumulative, fundamental knowledge upon it.

Some cool sessions on risks close to home, inclusiveness, and organizational aspects

Gripe aside, I liked a number of the sessions I saw. The last session of the first day was the highlight for me, with a clear theme around the privacy risks of those close to us (friends, family, associates) versus risks imposed from outsiders (strangers, companies, governments). The first paper, by Nithya Sambasivan et al., looked at this in the context of phone sharing among women in South Asia, and how technical novelty and cultural norms combined to shape attitudes about and actions toward privacy risks. The talk had some interesting bits about trying to increase the discoverability of privacy-enhancing behaviors and mechanisms such as deleting web cookies/history or private browsing modes.

The second paper in that session, by Yasmeen Rashidi et al., focused on how college students deal with pervasive, casual photography by those around them (mostly, as Anita Sarma pointed out, focusing on overt rather than covert photography, which I thought was a nice observation). The study used a method I hadn’t bumped into before called an “experience model” that summarized key moments/decisions/possible actions before, during, and after photo sharing; I thought it was an interesting representation of ethnographic data with an eye toward design. The beneficial aspects of surveillance in college fraternities reminded me of Sarah Vieweg and Adam Hodges’ 2016 CSCW paper about Qatari families experiencing social/participatory surveillance as largely positive — surveillance is generally cast as pure negative, but there are contexts where it’s appropriate and meaningful.

The third paper, by Hana Habib et al., compared public and private browsing behavior using data from the CMU Security Behavior Observatory. Perhaps not surprisingly, people do more private/sensitive stuff in private modes, but maybe more surprisingly, self-reports aligned reasonably well with logged data. Here, too, there was evidence that people were at least as concerned about threats from co-located/shared users as about threats from external users. There’s also evidence that people assume private browsing does more privacy-related work than it really does (for instance, some folks believed it automatically enables encryption or IP hiding), possibly to people’s detriment.

The fourth paper in the session, by Reham Ebada Mohamed and Sonia Chiasson, was close to my own heart and research, with connections to Xuan Zhao, Rebecca Gulotta, and Bin Xu’s work on making sense of past digital media. It focused on effective communication of digital aging online through different interface prototypes (shrinking, pixellation, fading), which made me think straightaway of Gulotta et al.’s thinking about digital artifacts as legacy. But unlike that work, which was more about people’s reaction to their own content fading, this paper was more about using indicators of age to make the pastness of a photo more salient in order to evoke norms and empathy about the idea that things in the past are in the past and thus, as Zhao et al. argued, often worth keeping for personal reasons but not necessarily congruent with one’s current public face. The talk also explicitly put this kind of analog, gradual aging in opposition to common ways of talking about information forgetting as digital, binary, absolute deletion, and that was fun as well (and well-aligned with Bin Xu, Pamara Chang, et al.’s Snapchat analysis and design thinking).

Another nice first-day session was a set of lightning talks that clustered, broadly, around inclusion and empowerment in security and privacy issues. These included a general call toward the problem from Yang Wang, a focus on biased effectiveness of authentication systems for people of various demographic categories from Becky Scollan, a discussion of empowering versus restricting youth access online from Mariel Garcia-Montes, and a transtheoretical model-based call to develop personalized, stage-appropriate strategies to encourage self-protective privacy and security behavior from Cori Faklaris. On balance these were interesting, and more generally I like the move toward thinking about inclusive privacy/privacy for particular populations, both for their own sake and as edge/extreme cases that might speak back to more general notions of privacy.

On the second day there were also some fun talks I saw in the last session (detailed notes, alas, lost in a phone crash). These included Julie Haney and Wayne Lutters on how cybersecurity advocates go about their work of evangelizing security in corporations; James Nicholson et al. on developing a “cybersecurity survival” task, paralleling the NASA Moon Survival Task, to get insight into IT department versus general company attitudes toward security that looked both promising and well-put-together; and a paper by an REU site team, presented by Elissa Redmiles, about co-designing a code of ethics with VR developers around privacy, security, and safety. It was nice to see an example of a successful REU site experience, and it highlighted a framing of people’s desire for “safety” in cyberspace that I think might make for a root goal concept that “private”, “secure”, and “trustworthy” each capture some aspects of as a means.

Some cool papers

There were also a number of individual papers that caught my eye, including one by Sowmya Karunakaran et al. from Google about what people see as acceptable uses of data from data breaches. They had some interesting stories about both cross-cultural and cross-scenario comparisons (being able to survey 10K folks from six countries has its advantages); probably the most surprising tidbit was that people were least happy about the idea of academic researchers using these data, less so than targeted advertising, and much less so than notifications/warnings/threat intelligence sharing. I say surprising because some folks have observed that Amazon Mechanical Turk workers are more comfortable sharing personal data in tasks posted by academics than by others because academics are perceived as both more trustworthy and more legitimate (though Turk is different than breaches since Turkers have the choice of whether to participate or withhold data, which they don’t in the case of the breaches). The ordering also roughly paralleled the amount of personal benefit the breach victims perceived for each use, which makes sense; it might be interesting to run a comparable parallel study around appropriate uses and users of non-breached, but openly released, datasets of social trace data.

There was a nice null-results paper by Eyal Peer et al. on whether face morphing — blending two or more faces into a composite — can influence decision-making by blending a person’s own face subliminally into the face of a person in an advertisement or communication campaign. This had a lot of theoretical juice behind it based on the prior face morphing literature and more general work around influence and cognitive psychology, so it was surprising that it didn’t work at all when tested. This caused the team to go back and do a mostly-failed replication study of some of the original work on face morphing’s impacts on people’s likability and trust ratings of images that included their faces. I admire the really dogged work by the team to chase down what was going on, and one more data point in the general story of research replicability; it might be a nice read for folks wanting to teach on that topic.

Susan McGregor’s keynote on user-centered privacy and security design had a couple of cool pieces for me. First, there was a bit about how standards for defining “usability” talk in terms of “specified” users and contexts, which raises cool questions about both who gets to do the specifying, and how to think about things as they move outside of the specified boundaries. Not a novel observation, but one worth highlighting in this context and related to the inclusive privacy discussion earlier. Second, there was a nice articulation of the distinction between usability and utility, and how scales/questions for measuring usability can accidentally conflate the two. For instance, something that might be rated “easy” to use might really be not that easy, but so worth it that people didn’t mind the cost (or vice versa; I remember a paper by Andrew Turpin and William Hersh in 2001 about batch versus interactive information retrieval system evaluation that suggested that a usable-enough interface can make up for some deficits in functionality). This raises ideas around how to develop scales that account for utility: rather than “usable”/“not usable”, what if we asked about “worth it”/“not worth it”? Some posters in the poster session had moves toward this idea, trying to measure the economic value of paying more attention to security warnings or of space/time/accuracy tradeoffs in a secure, searchable email archive.

I also liked Elham Al Qahtani et al.’s paper about translating a security fear appeal across cultures. There’s been some interesting work in the Information and Communication Technologies for Development (ICTD/HCI4D) communities showing that peers and people one can identify with are seen as much more credible information sources. This implies that you might want to shoot custom videos for each culture or context, and that turned out to be the case here as well — though just dubbing audio over an existing video with other-culture technologies and actors turned out to be surprisingly effective, raising cost-benefit tradeoff questions. Sunny Consolvo noted that Power Rangers appears to be able to use a relatively small amount of video in a wide variety of contexts, and that there might be strategies for optimizing the choice of shooting a small number of videos, the closest-fitting of which for a given culture/context could then be dubbed into local languages. Wayne Lutters had an alternate suggestion, to explore using some of the up-and-coming “DeepFake” audio and video creation technologies to quickly and locally customize videos — presumably, including one about the dangers of simulated actors in online content. 🙂

Norbert Nthala and Ivan Flechais’ paper about informal support networks’ role in home consultations for security reminded me quite a bit of some of Erika Poole’s work around family and friends’ role in general home tech support. The finding that people valued perceived caringness of the support source at least as much as technical prowess was both surprising and maybe not-surprising at the same time, but was good to have called out for its implications around designing support agents and ecosystems around security, privacy, and configuration.

There was also a nice, clean paper by Cheul Young Park et al. about how account sharing tends to increase in relationships over time, a kind of entangling that to some extent accords with theories of media multiplexity (gloss: people tend to use a wider variety of media in stronger relationships, though it’s not clear what the causal direction is). The findings had nice face validity around the practicalities of merging lives, ranging from saving money on redundant subscription service accounts such as Netflix to questions of intimacy around sharing more sensitive accounts. It also raises the question (in parallel with Dan Herron’s talk at Designing Interactive Systems 2017) of how to design account systems that can robustly handle relationships ending and disentangling.

A call for more generalizable, cumulative work

Now, to the gripe. The highest-level thing I liked least, based on my experiences there both last year and this year, is that too much of SOUPS focuses on descriptive/analytic work around specific new security and privacy contexts, without enough consideration of underlying principles about how people think about security and privacy, and how studying the new contexts adds to that. It’s important, for instance, to study topics such as Cara Bloom et al.’s 2017 paper on people’s risk perceptions of self-driving cars or Yixin Zou et al.’s paper on consumers’ reactions to the Equifax account breach (which won a Distinguished Paper award). These are relevant contexts to address, and from what I remember the presentations/posters I saw about them were pretty good in and of themselves.

But for my taste, on average I don’t think we do enough work to connect the findings from the specific domains and studies at hand to more general models of how people think about trustworthy cyberspace, and how properties of the contexts and designs they encounter affect that thinking. For example, what do we learn about studying the risks of self-driving cars relative to other autonomous systems, or drones versus social media photo sharing versus (surveillance) cameras, or new IoT setups versus more classic ubiquitous computing contexts, or to point back at myself a bit, how Turkers’ privacy experiences add to our understanding of privacy and labor power dynamics more broadly? To what extent are there underlying principles, phenomena, models that could help us connect these studies and develop broadly-applicable models?

This is related to a more general concern I have in the human-computer interaction (HCI) community about how methods that encourage deep attention to one context or dataset — including but not limited to many instances of user-centered design, ethnography, grounded theory, and machine learning modeling — can lead researchers to ignore relevant theoretical and empirical research that could guide their inquiries, improve their models, and more rapidly advance knowledge. (Anyone who wants an extended version of this rant, which I call “our methods make us dumb”, is free to ask.) I also see a lot of related work sections whose main point appears to be to claim that this exact thing hasn’t been done exactly yet, rather than trying to illustrate how the work is looking to move the conversation forward. This, also, is not SOUPS-specific; you see it in many CHI papers (and, it turns out, CHS proposals).

Okay, gripe over and post over as well [1].  Hopefully there were some useful pointers here that help you with your own specific topics, and that your thinking and findings are broad and useful. 🙂

#30#

[1] For once, no footnotes. [2]

[2] Oops.

Finding NSF programs and program officers for your research

tl/dr: Figuring out where to send proposals at NSF can be confusing. Understanding NSF’s org structure and solicitation mechanisms, using NSF’s award search tool (and colleagues) to look for programs and program officers that manage awards related to your work, and effectively working with program officers to find good fits can help you out.

More detail:

Getting started with applying for funding can be pretty confusing, even if you have good mentors, and as both a mentor and now a three-year rotating program officer at the National Science Foundation I’ve answered versions of this question many times. So, I figured it was time to write down some of the things I often say, though as always, these views represent my personal opinion and experience and not those of my NSF overlords. Further, there are many folks with many opinions on the topic, so ask and search around (though I was surprised not to find too many posts about this when I was putting this together).

I’ll organize the post around three main themes/tasks: (1) understanding NSF’s organizational and solicitation structure, (2) finding places in that structure that might fit your work, and (3) investigating those places through contacts with program officers and panel/review service.

First, structure, because it’s helpful to understand the basic mechanisms through which NSF solicits proposals. The root organizational structure is a hierarchy that broadly aligns with a swath of academia’s own organization of fields, with the top level being Directorates: CISE (Computer and Information Science and Engineering), SBE (Social, Behavioral, and Economic Sciences), ENG (Engineering), EHR (Education and Human Resources) and so on. [1] Directorates contain Divisions; inside of CISE, for instance, are three — CCF (Computing and Communication Foundations), CNS (Computer and Network Systems), and IIS (Intelligent Information Systems) — along with OAC (the Office of Advanced Cyberinfrastructure). Then inside of Divisions are typically Programs; IIS, for instance, contains RI (Robust Intelligence), III (Information Integration and Informatics), and CHS (Cyber-Human Systems).

Most of the core programs have some kind of core solicitation attached to which you can submit proposals. So, for instance, you wouldn’t submit directly to CISE or to IIS; you’d submit instead to one of the core programs inside them. This isn’t NSF-wide (in EHR, the EHR Core Research solicitation crosses the whole directorate, for instance), but for programs that field solicitations it’s the general structure [2].

There are also cross-cutting solicitations that, as the name implies, cut across the hierarchical structure and that multiple organizational units at NSF fund and administer together. Some are foundation-wide things like CAREER; some are broad cross-cutting ones like SaTC (Secure and Trustworthy Cyberspace) that multiple directorates participate in; some are cross-cutting but within individual directorates like CRII (CISE Research Initiation Initiative) and CCRI (CISE Community Research Infrastructure) [3]. You’ll also sometimes see a Dear Colleague Letter come out that asks for proposals in a specific topic or area, or that invites supplements to existing awards for a specific purpose [4].

Now that we understand solicitations can come from many places and take several forms (core solicitations, cross-cutting solicitations, and dear colleague letters that contain requests for proposals), the next trick is finding ones that might fit you [5].

To that end, NSF’s award database has a lot of value. Using various keywords that sound like your research [6] will bring back award abstracts that show you what’s being funded (pay attention to the award dates, though — sometimes you will get pretty old awards) as well as the programs and program officers who are managing those awards. Those are places and people that you should be aware of as possible funding targets.
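If you’d rather script this kind of lookup than click through the web interface, NSF also exposes the award database through a public JSON API at api.nsf.gov. Here’s a minimal Python sketch of the keyword search described above; the endpoint is real, but the specific response field names I request (`fundProgramName`, `poName`) are assumptions you should double-check against the API documentation:

```python
import json
import urllib.parse
import urllib.request

# Public NSF award-search endpoint (JSON flavor).
BASE_URL = "https://api.nsf.gov/services/v1/awards.json"


def award_query_url(keyword, fields=("id", "title", "fundProgramName", "poName")):
    """Build a query URL for NSF's public awards API.

    `printFields` limits the response to the fields of interest here:
    the program and program-officer names attached to each award.
    (Field names are assumptions; see the API docs.)
    """
    params = {"keyword": keyword, "printFields": ",".join(fields)}
    return BASE_URL + "?" + urllib.parse.urlencode(params)


def search_awards(keyword):
    """Fetch awards matching `keyword`; returns a list of award dicts."""
    with urllib.request.urlopen(award_query_url(keyword)) as resp:
        data = json.load(resp)
    return data.get("response", {}).get("award", [])


# Example usage (requires network access):
#   for award in search_awards("usable privacy")[:5]:
#       print(award.get("title"), "|", award.get("fundProgramName"))
```

Skimming the program and program-officer names that come back, much like skimming the abstracts on the website, is a quick way to build your list of possible funding targets.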

NSF also has tools for searching funding opportunities and finding out about announcements from programs (which often contain information about funding opportunities). For instance, this sample search looking for CISE program announcements will give you a list of communications, including solicitations, FAQs, and Dear Colleague Letters that someone believed were relevant to the CISE community. The volume can be pretty high, but it’s an easy scan/filter task, and finding a relevant opportunity you didn’t know about can be high value. In particular, new opportunities sometimes crop up. Being aware of ones that might fit you can give you a leg up versus people who are not aware of them [7].

I’ve also seen that it’s useful to be aware of executive branch research priorities, often articulated by the Office of Science and Technology Policy (OSTP), as well as NSF’s own strategic plans, activities, and announcements [8]. It turns out that many cross-cutting solicitations — often the larger ones in terms of dollars — come out subsequent to OSTP and NSF Director-level initiatives, suggesting that it makes sense to keep an eye out for new solicitations related to those topics [9].

Finally, asking colleagues in your intellectual spaces where they submit can also give you a sense of potentially interesting programs and program officers. Said colleagues will often have useful experience with and advice about interacting with them. More generally, junior folks often think they should figure everything out for themselves, but there’s a ton of value in working with more senior mentors on funding. This ranges from collaborating on proposals, to asking for thoughts on finding opportunities and fit of ideas to them, to getting specific feedback on specific proposal ideas and even drafts. People are busy but also often generous, and getting advice from colleagues and mentors is the number one thing I think junior faculty could do to get better faster at proposal writing.

Okay, now that you’ve identified some potential targets using the methods above, it’s time to dig more deeply into whether they really are fits.  Even if you’ve done the homework to look up official NSF program descriptions and awards made by that program in the past, and even if you ask colleagues, it can be hard to tell how well a particular proposal idea is going to fit a particular program because the official text of a solicitation only gives so much information.

One way to learn more about what a solicitation is about in practice is to search for (recent) awards made under it, assuming it’s not brand new. Many solicitations will have a link near the bottom of the page to help with this; there’s also an advanced search tool that can help you (among other things) find all the proposals funded by a specific solicitation, although you’ll need to find the right Program Element Code to narrow to a particular program/solicitation.

Your most likely source of information, though, is to email/talk with relevant program officers about whether your project ideas fit the programs they work with. They probably have the clearest sense of what a program’s goals are and how a project idea might fit them, often have a high level sense of how panelists might react to some aspect of a project idea, sometimes have deep expertise of their own they can bring to bear [10], and may also know other parts of NSF that could be interesting homes for a project idea [11]. Most program officers are also genuinely interested in mentoring, especially for junior researchers, so you should feel empowered to reach out to them.

It’s helpful to ground conversations with program officers in specific 1-2 page project writeups. Having a writeup in advance helps focus your own thinking and will also make interactions with program officers more efficient and effective [12]. These writeups might not be too different from an expanded project summary of the kind you might submit with a proposal, but focusing more on the specific questions, contributions, activities, and evaluations you’re considering, and less on generic “why it’s important” text. Thinking about Heilmeier’s Catechism for proposing research can be helpful here [13].

Once you have a passable version of that (it doesn’t have to be perfect), email it to the most relevant program officer you can think of in the most relevant program or two, based on the homework you’ve already done as described above. Note that solicitations often list multiple program officers, and different folks usually handle different subtopics/panels within a given solicitation. So, it’s best if you can identify one who handles awards related to your idea (whether in this solicitation or in general) and mail them. If you can’t tell who is best, the first person listed is often a “lead” for the solicitation and it’s reasonable to mail them and ask them who to ask. Don’t email all of them, especially individually; that’s wasteful and inconsiderate of time.

You might ask them about their thoughts on fit to their own program(s) and other programs or program officers they might recommend, as well as any thoughts they have on the proposal itself or on framing it for panelists in their program. If you’re new enough to a program or to NSF that you don’t have a good feel for it, it might make sense to ask if you could have a talk where you ask more general questions as well as talk about the writeup.

Program officers will have different levels of responsiveness to these questions. Some are more willing to talk general program or NSF issues than others. Some try hard not to inject their own opinions on proposal content both to increase fairness (relative to other PIs not getting feedback) and in case their opinions are wrong. Some prefer to reduce their contact with PIs during the proposal process in general, with the goal of avoiding biases induced by having such contact, and may want to interact by email versus calls or in-person visits.

But, you should at least get a response about program fit, and my general sense is that NSF program officers are pretty generous in interacting with PIs. If you’ve been waiting more than a week [14], it’s legitimate to re-send the mail, or try a different program officer associated with the program. Don’t take it personally, or give up on the idea of contacting POs [15].

Another way to get a sense of a program, and connect to its program officers and reviewing community, is to serve as a panelist. I’ve written a separate blog post about that so I won’t say much here, except that serving is a great way to learn a lot about proposal writing and evaluation while representing your intellectual communities and meeting both folks in those communities and program officers.

And I think that’s my story on this.  Hopefully this was useful for thinking about how to find places and people at NSF that might be good fits for you, and remember to look around for other thoughts on these topics.  A few that I bumped into while I was writing this are included below for your initial bonus amusement.

#30#

[1] There are also various administrative Offices at this level, but these don’t usually field many programs or solicitations, so I ignore them for simplicity.

[2] One of the things I’ve learned coming here as a rotating program officer is that NSF is less monolithic than you’d think. The high level structure of proposals, panels, etc., is mostly the same, and we have high level policy guidance, but practices can be quite different at every level from directorates to individual program officers.

[3] Yes, it’s awkward that the acronyms are close. There are a lot of acronyms here.

[4] DCLs vary widely; here are a couple of (expired) examples I’ve been involved with, one that solicited interdisciplinary SaTC proposals, and one that looked to advance citizen science research.

[5] For what it’s worth, I was not very good at this as a PI; I just submitted to CHS’s predecessor (Human-Centered Computing) a lot, although I had collaborators who were better at this game and wound up with some collaborative submissions to other solicitations. More generally, you should also look beyond NSF to other agencies, foundations, and industry; I wasn’t particularly good at that either so I won’t discuss that here.

[6] Or, names of PIs in your community who do the kind of research you do. Finding out where they get NSF funding could be pretty useful, and PIs are sometimes willing to share proposals, which can be super-helpful for understanding the genre of proposal writing [6′].

[6′] As can reviewing, which is good for both you and the community. See my post on how to become a reviewer for more.

[7] Another interesting aspect about new solicitations is that NSF solicitations in general have a bottom-up component. There’s also definitely a top-down strategic leadership idea behind them that the solicitation descriptions work to capture, but the proposals submitted and the panelists who review them help define them in practice. New solicitations may have a little more wiggle room in this sense because they don’t have this historical “in practice” momentum.

[8] Being involved in visioning workshops funded by NSF, the Computing Community Consortium (CCC), and other places that generate whitepapers, workshop reports, etc., about the state and future of a field or topic can be a way to have your own strategic impact along these lines.

[9] I wouldn’t spend space in your proposal, however, talking about how it aligns with some NSF goal or solicitation, and I especially wouldn’t quote solicitations. Whenever I see this, I think about how that space could be used to instead give compelling details about the project that could help convince panelists that the proposal is strong.

[10] Note that program officers often cover a broad range of topics, so although they will generally have a sense of the areas where they manage proposals, they will often not have personal deep research experience with specific topics. Two corollaries of that are (1) POs will be good at giving feedback about fit, but less well-positioned on average to give feedback about content, and (2) you should ask colleagues in the area for feedback on the content as you’re preparing proposals. Better to find out about something you missed before the panel than after.

[11] But, just as NSF program officers don’t know everything about every topic they manage proposals on, they also won’t know everything about the rest of NSF. It’s not so unlike being asked if you know a particular faculty member at your own institution. If they’re not close to your own department or research interests, probably not, unless you’re fairly senior or fairly outgoing/engaged and interact with other folks outside of the context of your own research.

[12] Sending a writeup in advance trades time explaining an idea on the phone for time discussing/getting feedback on ideas. Program officers aren’t infinitely busy, but they’re busy, and these explanations sometimes sound more like sales pitches, which are not very helpful. If the fit with a particular PO is not good, the writeup can help them suggest more appropriate folks to contact right away without you having to waste time waiting for an ultimately unproductive chat. If the fit is reasonable, seeing the writeup in advance lets them have more considered reactions than hearing it explained and reacting on the spot. Some program officers are also more comfortable and responsive in email than on the phone.

[13] At least in CHS and SaTC, two solicitations I’ve done a lot of work with, proposals often focus too much on an applied problem they’re looking to solve, or talk about general hoped-for impacts from the work, rather than the underlying research questions, contributions beyond existing knowledge, and specific impacts the project might achieve. Proposals that don’t make the research contributions clear are both usually dead in the water for panelists and very hard to reason about program fit for.

[14] Like academic life in general, program officer schedules can be bursty and time-bound. In addition to panels, which consume the better part of 5 days to organize and run and which some program officers organize a couple dozen of a year, POs travel to conferences, do internal and external service, and have other deadlines and responsibilities. A corollary of this is that it’s a good idea to make inquiries well in advance of submission deadlines.

[15] I had a pretty bad first couple of attempts to contact folks. What I now think happened in my case is that I mailed a program officer whose NSF rotation was ending, and they didn’t respond before they left and lost access to their mail, and the mail dropped on the floor. People also accidentally delete emails (I estimate my personal rate is about 1 of 300), and mail servers sometimes fail (a program officer tried to mail me once as a PI to make an award recommendation very late in the fiscal year, meaning there was little time to put it together, and Cornell’s email system spam filtered it away. Fortunately for me they also called on the phone.)