Getting and giving more out of NSF reporting

tl/dr: Treat NSF reports as a required structural opportunity to celebrate, reflect, and plan. Give program officers (just) enough info to understand, share, and think about the cool outcomes and real impacts of the projects.

More detail: My goal with this post is to help you get more personal value out of writing annual NSF reports and also to make them more useful to NSF. I am writing this in my personal role as a faculty member who happens to have experience as an NSF program officer, not in any official capacity; that said, I’ll talk about my perspective on this both as a PI and based on my experience as a program officer. [A]

Let’s start with the PI side of this, about developing a positive attitude toward report-writing and what you can get out of it. Reports have real value to NSF [B], but that value often isn’t apparent to PIs themselves. My own experience with my first few reports was a little negative. Poking around on the web for report-writing advice turned up phrases like “grit your teeth” and “those darn annual reports”, and the advice wasn’t very helpful. I apparently wrote ok reports and haven’t had one returned, but early on I wasn’t sure why I was doing it except that it had to be done.

Then, around year three, I started thinking about the reports as a chance to celebrate, reflect, and plan. It felt good to talk about people I worked with, mentored, and taught, and the knowledge they gained and discovered. It was cool to see how my thinking evolved over the course of the project given circumstances and people, to step outside of the activity of research and do a little bit of meta-level thinking about it [D], and to consider where the work was going and what it meant for the field.

A number of folks have come to a similar framing about proposal writing as a chance to step back and think about what’s important; I encourage you to do that for reporting too [E] — treat reports as a required structural opportunity to celebrate, reflect, and plan. These — especially the first two — are activities that I don’t spend enough time on as a faculty member.

I’m not going to talk much about the requirements or individual sections of the report, as plenty of documents do this [F]. In particular, the Community for Advancing Discovery Research in Education has a nice description of the requirements and most of what I would say would be redundant.

Instead, I’m going to switch gears and talk about my personal experience reading the reports as a program officer, and what made reports more useful and satisfying to consume [G]. Frankly, my early experiences reading annual reports were similar to writing them as a PI: guidance and rationale were minimal [H]. Then an experienced program officer and deputy division director pointed out that beyond the general NSF rationales, reports are the main structural opportunity for program officers and PIs to communicate about awards in progress.

As with the celebrate-reflect-plan framing on the PI side, communicate-engage on the program officer side made reviewing reports a lot more rewarding. It was nice to be able to drop people positive comments on their projects, including occasionally sharing ideas the reports sparked, and it definitely helped me understand areas that weren’t in my wheelhouse [I].

This works best when the report does a good job in the Accomplishments section of reminding me about the key goals of the project and reporting period. Then a thoughtful-but-brief summary of activities helps me understand how you’re attacking the problem; rambling descriptions are less useful. The most useful reports say a little more about the interesting outcomes and how they contribute to the field. Emphasizing findings is valuable because these reports are a main way program officers stay up on a broad range of projects and fields; we can’t attend/read all the conferences and journals our PIs engage with [J]. Good reports also give useful highlights about the education, outreach, and broader impacts aspects of the project.

I also checked the products and participants sections. NSF wants the products to be correctly uploaded — and papers to acknowledge support — so they can be tracked and associated with the awards. In particular, publications properly entered will appear along with your award abstract in the NSF award database, and that’s useful for us, you, and future folks looking at awards. I often saw issues in the participants section, with PIs failing to list the folks who contributed to the project and describe their contributions (see question 4 in the Division of Environmental Biology’s blog post on the topic for useful thoughts on this); this often caused me to send reports back for revisions.

The impacts section is another place I often saw problems. I think this is in part because it’s hard to articulate concrete impacts, especially early in a project’s lifespan, in part because impact tends to be cumulative in a way annual (or “final”, which is really just “the last annual report” [K]) reports aren’t, and in part because we often don’t spend enough time thinking about the impact of our work beyond the papers. Too many reports default to the same generic, hopeful language proposals often use about potential impacts — in the worst case, cutting and pasting from the proposal. Generalities are not useful, and as a program officer I preferred that a report say “nothing to report” on aspects of the impact section rather than make stuff up or just repeat findings from the accomplishments section (another common approach).

Instead, compelling impact sections give specific descriptions of, evidence for, and/or concrete plans to increase the impact of the project and the underlying research. Are other people reacting to the work, in the main discipline or others, as shown through citations, awards, invited talks, syllabus use, new collaborations, or other concrete evidence that they are thinking about the work? Are students getting valuable experiences and outcomes from project activities, both as research participants and students in courses? Are educational, dataset, source code, implementation, infrastructure, and other materials released to the public, documented, maintained, evangelized, and used by others? Are there concrete possibilities for tech transfer or actual impacts on society beyond “this might be useful, someday”? And for any or all of these, does it make sense to plan activities to increase the chances of having these kinds of impacts? Going back to the lead for this post, report writing should have some benefits for you — and taking a chance to think about how to increase the impact of your work is one of those [L].

And that’s where I think I’ll leave it. As a reminder, this is my own thinking about reports from both the PI and program officer side, and not official NSF policy or prescription, but hopefully it’s useful in helping you both in the writing of your reports and thinking about your work.

#30#

[A] And, as always when I mention NSF, this is my own thinking and does not represent in any official way the opinions of NSF.

[B] NSF offers lots of good reasons to do reporting from an NSF perspective, e.g., accountability for the funded PIs, as well as tracking research and educational impacts and specific outcomes. These are good things to do. Not completing reports in a timely manner also impacts one’s ability to get future funding. That said, these talk mostly about why annual reports are good for NSF, not for you.

[D] For what it’s worth, “go meta” is my number one piece of generalizable advice about being an academic. Don’t just read the paper or go to the talk or listen to the lecture; think about the genre, and what works and doesn’t work for you, and why, and use that going forward. Don’t just review the paper/proposal or do the program committee/review panel; use it as a chance to think about quality science and how people think and talk about it. Then use these meta-insights to be a better reader, writer, teacher, reviewer, community member.

[E] Not everyone buys in; I remember advocating for this at a faculty meeting and being called “Panglossian”. Perhaps true, but it still helps me both feel better about report writing and write better reports.

[F] These include official guidance on NSF’s take on annual reporting (as of 2016 but still current as I write in early 2019), including special instructions for writing reports for conferences/workshops/doctoral consortia and the like, and more info on the mechanics of the process and on using research.gov to do the reporting.

[G] There are a couple of documents from NSF itself that also have somewhat more detailed thoughts on good report writing, including one from the Brain and Cognitive Sciences division and another from the Division of Environmental Biology. Some of the things I say in this document are based in part on these, along with conversations with other program officers, largely in the Division of Information and Intelligent Systems.

[H] That said, NSF has pretty good high-level training for program officers and a good community of practice that includes both other program officers and especially deputy division directors, the unsung heroes of NSF management who absorb an enormous amount of both corner cases and institutional memory. But it’s got many of the same apprenticeship model characteristics that doing a PhD (or really, being a faculty member) has.

[I] Program officers cover a lot of territory, not all of which is their specific expertise. Further, program officers (especially permanent ones) sometimes wind up adopting awards pretty far from their own areas, for example, when a rotating program officer in charge of certain topics leaves.

[J] Interesting outcomes are also fun to share with other program officers and NSF’s outreach people.

[K] A related issue is that for NSF, a “final report” is not cumulative; it’s just a final “annual report”, and should only cover the last year of activity. This confuses many PIs, and I found I had to return some number of “final” reports for this.

[L] Thinking about providing evidence of impact was also important in my post on writing research statements, so that might be worth a read (and contains pointers to other notions of impact and people who’ve spoken about it as well, including Elizabeth Churchill’s thoughts and Judy Olson’s Athena Award talk).

Personal trip report thoughts on SOUPS 2018

I wrote a trip report on SOUPS 2018 (the Symposium On Usable Privacy and Security) for other folks at NSF since NSF paid for it, and I thought I would go ahead and share a lightly edited version of it more widely because I like to call out other people’s interesting work with the hope that more people see it. As always, the views in this post are mine alone and do not represent those of my NSF overlords.

SOUPS, founded and oft-hosted by Carnegie Mellon, is historically a good conference focused on the human and design side of security and privacy in systems; here’s the 2018 SOUPS program, for reference. I’m a relative newcomer to SOUPS, having only attended since 2017 in my role in NSF’s Secure and Trustworthy Cyberspace program. So, this may be a bit of an outsider view — perhaps not so bad to get from time to time. I’ll structure the report in three main bits: first, to highlight a couple of themes I liked that were represented well by particular sessions; second, to note some other papers I saw that triggered pleasant paper-specific reactions; and third, to gripe a bit about a wider CHI problem that I also felt some of at SOUPS this year and last: that many papers are too focused on particular new/novel contexts and not enough on learning from past work and building generalizable, cumulative, fundamental knowledge upon it.

Some cool sessions on risks close to home, inclusiveness, and organizational aspects

Gripe aside, I liked a number of the sessions I saw. The last session of the first day was the highlight for me, with a clear theme around the privacy risks posed by those close to us (friends, family, associates) versus risks imposed by outsiders (strangers, companies, governments). The first paper, by Nithya Sambasivan et al., looked at this in the context of phone sharing among women in South Asia, and how technical novelty and cultural norms combined to shape attitudes about and actions toward privacy risks. The talk had some interesting bits about trying to increase the discoverability of privacy-enhancing behaviors and mechanisms such as deleting web cookies/history or private browsing modes.

The second paper in that session, by Yasmeen Rashidi et al., focused on how college students deal with pervasive, casual photography by those around them (mostly, as Anita Sarma pointed out, focusing on overt rather than covert photography, which I thought was a nice observation). The study used a method I hadn’t bumped into before called an “experience model” that summarized key moments/decisions/possible actions before, during, and after photo sharing; I thought it was an interesting representation of ethnographic data with an eye toward design. The beneficial aspects of surveillance in college fraternities reminded me of Sarah Vieweg and Adam Hodges’ 2016 CSCW paper about Qatari families experiencing social/participatory surveillance as largely positive — surveillance is generally cast as purely negative, but there are contexts where it’s appropriate and meaningful.

The third paper, by Hana Habib et al., compared public and private browsing behavior using data from the CMU Security Behavior Observatory. Perhaps not surprisingly, people do more private/sensitive stuff in private modes, but maybe more surprisingly, self-reports aligned reasonably well with logged data. Here, too, there was evidence that people were at least as concerned about threats from co-located/shared users as from external ones. There’s also evidence that people assume private browsing does more privacy-related work than it really does (for instance, some folks believed it automatically enables encryption or IP hiding), possibly to people’s detriment.

The fourth paper in the session, by Reham Ebada Mohamed and Sonia Chiasson, was close to my own heart and research, with connections to Xuan Zhao, Rebecca Gulotta, and Bin Xu’s work on making sense of past digital media. It focused on effective communication of digital aging online through different interface prototypes (shrinking, pixellation, fading), which made me think straightaway of Gulotta et al.’s thinking about digital artifacts as legacy. But unlike that work, which was more about people’s reactions to their own content fading, this paper was more about using indicators of age to make the pastness of a photo more salient in order to evoke norms and empathy about the idea that things in the past are in the past and thus, as Zhao et al. argued, often worth keeping for personal reasons but not necessarily congruent with one’s current public face. The talk also explicitly put this kind of analog, gradual aging in opposition to common ways of talking about information forgetting as digital, binary, absolute deletion, and that was fun as well (and well-aligned with Bin Xu, Pamara Chang, et al.’s Snapchat analysis and design thinking).

Another nice first-day session was a set of lightning talks that clustered, broadly, around inclusion and empowerment in security and privacy issues. These included a general call toward the problem from Yang Wang, a focus on biased effectiveness of authentication systems for people of various demographic categories from Becky Scollan, a discussion of empowering versus restricting youth access online from Mariel Garcia-Montes, and a transtheoretical model-based call to develop personalized, stage-appropriate strategies to encourage self-protective privacy and security behavior from Cori Faklaris. On balance these were interesting, and more generally I like the move toward thinking about inclusive privacy/privacy for particular populations, both for their own sake and as edge/extreme cases that might speak back to more general notions of privacy.

On the second day there were also some fun talks I saw in the last session (detailed notes, alas, lost in a phone crash). These included Julie Haney and Wayne Lutters on how cybersecurity advocates go about their work of evangelizing security in corporations; James Nicholson et al. on developing a “cybersecurity survival” task, paralleling the NASA Moon Survival Task, to get insight into IT department versus general company attitudes toward security, which looked both promising and well-put-together; and a paper by an REU site team, presented by Elissa Redmiles, about co-designing a code of ethics with VR developers around privacy, security, and safety. It was nice to see an example of a successful REU site experience, and it highlighted a framing of people’s desire for “safety” in cyberspace that I think might make for a root goal concept, with “private”, “secure”, and “trustworthy” each capturing some aspect of it as a means.

Some cool papers

There were also a number of individual papers that caught my eye, including one by Sowmya Karunakaran et al. from Google about what people see as acceptable uses of data from data breaches. They had some interesting stories about both cross-cultural and cross-scenario comparisons (being able to survey 10K folks from six countries has its advantages); probably the most surprising tidbit was that people were least happy about the idea of academic researchers using these data, less so than targeted advertising, and much less so than notifications/warnings/threat intelligence sharing. I say surprising because some folks have observed that Amazon Mechanical Turk workers are more comfortable sharing personal data in tasks posted by academics than by others because academics are perceived as both more trustworthy and more legitimate (though Turk is different from breaches since Turkers have the choice of whether to participate or withhold data, which they don’t in the case of the breaches). The ordering also roughly paralleled the amount of personal benefit the breach victims perceived for each use, which makes sense; it might be interesting to run a comparable parallel study around appropriate uses and users of non-breached, but openly released, datasets of social trace data.

There was a nice null-results paper by Eyal Peer et al. on whether face morphing — blending two or more faces into a composite — can influence decision-making by blending a person’s own face subliminally into the face of a person in an advertisement or communication campaign. This had a lot of theoretical juice behind it based on the prior face morphing literature and more general work around influence and cognitive psychology, so it was surprising that it didn’t work at all when tested. This caused the team to go back and do a mostly-failed replication study of some of the original work on face morphing’s impacts on people’s likability and trust ratings of images that included their faces. I admire the really dogged work by the team to chase down what was going on, and it adds one more data point to the general story of research replicability; it might be a nice read for folks wanting to teach on that topic.

Susan McGregor’s keynote on user-centered privacy and security design had a couple of cool pieces for me. First, there was a bit about how standards for defining “usability” talk in terms of “specified” users and contexts, which raises cool questions about both who gets to do the specifying and how to think about things as they move outside of the specified boundaries. Not a novel observation, but one worth highlighting in this context and related to the inclusive privacy discussion earlier. Second, there was a nice articulation of the distinction between usability and utility, and how scales/questions for measuring usability can accidentally conflate the two. For instance, something that might be rated “easy” to use might really be not that easy, but so worth it that people didn’t mind the cost (or vice versa; I remember a paper by Andrew Turpin and William Hersh in 2001 about batch versus interactive information retrieval system evaluation that suggested that a usable-enough interface can make up for some deficits in functionality). This raises ideas around how to develop scales that account for utility: rather than “usable”/“not usable”, what if we asked about “worth it”/“not worth it”? Some posters in the poster session had moves toward this idea, trying to measure the economic value of paying more attention to security warnings or of space/time/accuracy tradeoffs in a secure, searchable email archive.

I also liked Elham Al Qahtani et al.’s paper about translating a security fear appeal across cultures. There’s been some interesting work in the Information and Communication Technologies for Development (ICTD/HCI4D) communities showing that peers and people one can identify with are seen as much more credible information sources. This implies that you might want to shoot custom videos for each culture or context, and that turned out to be the case here as well — though just dubbing audio over an existing video with other-culture technologies and actors turned out to be surprisingly effective, raising cost-benefit tradeoff questions. Sunny Consolvo noted that Power Rangers appears to be able to use a relatively small amount of video in a wide variety of contexts, and that there might be strategies for optimizing the choice of shooting a small number of videos, the closest-fitting of which for a given culture/context could then be dubbed into local languages. Wayne Lutters had an alternate suggestion, to explore using some of the up-and-coming “DeepFake” audio and video creation technologies to quickly and locally customize videos — presumably, including one about the dangers of simulated actors in online content. 🙂

Norbert Nthala and Ivan Flechais’ paper about informal support networks’ role in home consultations for security reminded me quite a bit of some of Erika Poole’s work around family and friends’ role in general home tech support. The finding that people valued perceived caringness of the support source at least as much as technical prowess was both surprising and maybe not-surprising at the same time, but was good to have called out for its implications around designing support agents and ecosystems around security, privacy, and configuration.

There was also a nice, clean paper by Cheul Young Park et al. about how account sharing tends to increase in relationships over time, a kind of entangling that to some extent accords with theories of media multiplexity (gloss: people tend to use a wider variety of media in stronger relationships, though it’s not clear what the causal direction is). The findings had nice face validity around the practicalities of merging lives, ranging from saving money on redundant subscription service accounts such as Netflix to questions of intimacy around sharing more sensitive accounts. It also raises the question (in parallel with Dan Herron’s talk at Designing Interactive Systems 2017) of how to design account systems that can robustly handle relationships ending and disentangling.

A call for more generalizable, cumulative work

Now, to the gripe. The highest-level thing I liked least, based on my experiences there both last year and this year, is that too much of SOUPS focuses on descriptive/analytic work around specific new security and privacy contexts, without enough consideration of underlying principles about how people think about security and privacy, and how studying the new contexts adds to that. It’s important, for instance, to study topics such as those in Cara Bloom et al.’s 2017 paper on people’s risk perceptions of self-driving cars or Yixin Zou et al.’s paper on consumers’ reactions to the Equifax data breach (which won a Distinguished Paper award). These are relevant contexts to address, and from what I remember the presentations/posters I saw about them were pretty good in and of themselves.

But for my taste, on average I don’t think we do enough work to connect the findings from the specific domains and studies at hand to more general models of how people think about trustworthy cyberspace, and how properties of the contexts and designs they encounter affect that thinking. For example, what do we learn about studying the risks of self-driving cars relative to other autonomous systems, or drones versus social media photo sharing versus (surveillance) cameras, or new IoT setups versus more classic ubiquitous computing contexts, or to point back at myself a bit, how Turkers’ privacy experiences add to our understanding of privacy and labor power dynamics more broadly? To what extent are there underlying principles, phenomena, models that could help us connect these studies and develop broadly-applicable models?

This is related to a more general concern I have in the human-computer interaction (HCI) community about how methods that encourage deep attention to one context or dataset — including but not limited to many instances of user-centered design, ethnography, grounded theory, and machine learning modeling — can lead researchers to ignore relevant theoretical and empirical research that could guide their inquiries, improve their models, and more rapidly advance knowledge. (Anyone who wants an extended version of this rant, which I call “our methods make us dumb”, is free to ask.) I also see a lot of related work sections whose main point appears to be to claim that this exact thing hasn’t been done exactly yet, rather than trying to illustrate how the work is looking to move the conversation forward. This, also, is not SOUPS-specific; you see it in many CHI papers (and, it turns out, CHS proposals).

Okay, gripe over and post over as well [1]. Hopefully there were some useful pointers here to help you with your own specific topics, and here’s hoping your thinking and findings are broad and useful. 🙂

#30#

[1] For once, no footnotes. [2]

[2] Oops.

Finding NSF programs and program officers for your research

tl/dr: Figuring out where to send proposals at NSF can be confusing. Understanding NSF’s org structure and solicitation mechanisms, using NSF’s award search tool (and colleagues) to look for programs and program officers that manage awards related to your work, and effectively working with program officers to find good fits can help you out.

More detail:

Getting started with applying for funding can be pretty confusing, even if you have good mentors, and as both a mentor and now a three-year rotating program officer at the National Science Foundation I’ve answered versions of the “where should I send this?” question many times. So, I figured it was time to write down some of the things I often say, though as always, these views represent my personal opinion and experience and not those of my NSF overlords. Further, there are many folks with many opinions on the topic, so ask and search around (though I was surprised not to find too many posts about this when I was putting this together).

I’ll organize the post around three main themes/tasks: (1) understanding NSF’s organizational and solicitation structure, (2) finding places in that structure that might fit your work, and (3) investigating those places through contacts with program officers and panel/review service.

First, structure, because it’s helpful to understand the basic mechanisms through which NSF solicits proposals. The root organizational structure is a hierarchy that broadly aligns with a swath of academia’s own organization of fields, with the top level being Directorates: CISE (Computer and Information Science and Engineering), SBE (Social, Behavioral, and Economic Sciences), ENG (Engineering), EHR (Education and Human Resources), and so on. [1] Directorates contain Divisions; inside of CISE, for instance, are three — CCF (Computing and Communication Foundations), CNS (Computer and Network Systems), and IIS (Information and Intelligent Systems) — along with OAC (the Office of Advanced Cyberinfrastructure). Then inside of Divisions are typically Programs; IIS, for instance, contains RI (Robust Intelligence), III (Information Integration and Informatics), and CHS (Cyber-Human Systems).

Most of the core programs have some kind of core solicitation attached, to which you can submit proposals. So, for instance, you wouldn’t submit to CISE or to IIS; you might instead submit to one of the core programs inside them. This isn’t NSF-wide (in EHR, the EHR Core Research solicitation crosses the whole directorate, for instance), but for programs that field solicitations it’s the general structure [2].

There are also cross-cutting solicitations that, as the name implies, cut across the hierarchical structure and are funded and administered together by multiple organizational units at NSF. Some are foundation-wide things like CAREER; some are broad cross-cutting ones like SaTC (Secure and Trustworthy Cyberspace) that multiple directorates participate in; some are cross-cutting but within individual directorates, like CRII (CISE Research Initiation Initiative) and CCRI (CISE Community Research Infrastructure) [3]. You’ll also sometimes see a Dear Colleague Letter come out that asks for proposals in a specific topic or area, or that invites supplements to existing awards for a specific purpose [4].

Now that we understand solicitations can come from many places and take several forms (core solicitations, cross-cutting solicitations, and dear colleague letters that contain requests for proposals), the next trick is finding ones that might fit you [5].

To that end, NSF’s award database has a lot of value. Using various keywords that sound like your research [6] will bring back award abstracts that show you what’s being funded (pay attention to the award dates, though — sometimes you will get pretty old awards) as well as the programs and program officers who are managing those awards. Those are places and people that you should be aware of as possible funding targets.
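If you like scripting more than clicking through the web interface, you can do a version of this keyword search programmatically. Here’s a minimal sketch in Python against NSF’s public award search API; the exact endpoint, parameter names, and returned fields are my assumptions based on the publicly documented API rather than anything this post vouches for, so check the current documentation before relying on them.

```python
# Minimal sketch (endpoint and field names are assumptions; see lead-in):
# query NSF's public award search API by keyword and print the managing
# program and program officer -- the "places and people" worth noting as
# possible funding targets.
import requests

params = {
    "keyword": "usable privacy",  # words that sound like your research
    "printFields": "id,title,date,fundProgramName,poName,awardeeName",
    "rpp": 25,  # results per page
}
resp = requests.get("https://api.nsf.gov/services/v1/awards.json",
                    params=params, timeout=30)
resp.raise_for_status()

for award in resp.json().get("response", {}).get("award", []):
    print(award.get("date"), "|", award.get("fundProgramName"),
          "|", award.get("poName"), "|", award.get("title"))
```

(As with the web search, pay attention to the dates so you’re not steering by decade-old awards.)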

NSF also has tools for searching funding opportunities and finding out about announcements from programs (which often contain information about funding opportunities). For instance, this sample search looking for CISE program announcements will give you a list of communications, including solicitations, FAQs, and Dear Colleague Letters, that someone believed were relevant to the CISE community. The volume can be pretty high, but it’s an easy scan/filter task, and finding a relevant opportunity you didn’t know about can be high value. In particular, new opportunities sometimes crop up. Being aware of ones that might fit you can give you a leg up versus people who are not aware of them [7].

I’ve also seen that it’s useful to be aware of executive branch research priorities, often articulated by the Office of Science and Technology Policy (OSTP), as well as NSF’s own strategic plans, activities, and announcements [8]. It turns out that many cross-cutting solicitations — often the larger ones in terms of dollars — come out subsequent to OSTP and NSF Director-level initiatives, suggesting that it makes sense to keep an eye out for new solicitations related to those topics [9].

Finally, asking colleagues in your intellectual spaces where they submit can also give you a sense of potentially interesting programs and program officers. Said colleagues will often have useful experience with and advice about interacting with them. More generally, junior folks often think they should figure everything out for themselves, but there’s a ton of value in working with more senior mentors on funding. This ranges from collaborating on proposals, to asking for thoughts on finding opportunities and fit of ideas to them, to getting specific feedback on specific proposal ideas and even drafts. People are busy but also often generous, and getting advice from colleagues and mentors is the number one thing I think junior faculty could do to get better faster at proposal writing.

Okay, now that you’ve identified some potential targets using the methods above, it’s time to dig more deeply into whether they really are fits.  Even if you’ve done the homework to look up official NSF program descriptions and awards made by that program in the past, and even if you ask colleagues, it can be hard to tell how well a particular proposal idea is going to fit a particular program because the official text of a solicitation only gives so much information.

One way to learn more about what a solicitation is about in practice is to search for (recent) awards made under it, assuming it’s not brand new. Many solicitations will have a link near the bottom of the page to help with this; there’s also an advanced search tool that can help you (among other things) find all the proposals funded by a specific solicitation, although you’ll need to find the right Program Element Code to narrow to a particular program/solicitation.

Your most likely source of information, though, is to email/talk with relevant program officers about whether your project ideas fit the programs they work with. They probably have the clearest sense of what a program’s goals are and how a project idea might fit them, often have a high level sense of how panelists might react to some aspect of a project idea, sometimes have deep expertise of their own they can bring to bear [10], and may also know other parts of NSF that could be interesting homes for a project idea [11]. Most program officers are also genuinely interested in mentoring, especially for junior researchers, so you should feel empowered to reach out to them.

It’s helpful to ground conversations with program officers in specific 1-2 page project writeups. Having a writeup in advance helps focus your own thinking and will also make interactions with program officers more efficient and effective [12]. These writeups might not be too different from an expanded project summary of the kind you might submit with a proposal, but they should focus more on the specific questions, contributions, activities, and evaluations you’re considering, and less on generic “why it’s important” text. Thinking about Heilmeier’s Catechism for proposing research can be helpful here [13].

Once you have a passable version of that (it doesn’t have to be perfect), email it to the most relevant program officer you can think of in the most relevant program or two, based on the homework you’ve already done as described above. Note that solicitations often list multiple program officers, and different folks usually handle different subtopics/panels within a given solicitation.  So, best if you can identify one who handles awards related to your idea (whether in this solicitation or in general) and mail them. If you can’t tell who is best, the first person listed is often a “lead” for the solicitation and it’s reasonable to mail them and ask them who to ask. Don’t email all of them, especially individually; that’s wasteful and inconsiderate of time.

You might ask them about their thoughts on fit to their own program(s) and other programs or program officers they might recommend, as well as any thoughts they have on the proposal itself or on framing it for panelists in their program. If you’re new enough to a program or to NSF that you don’t have a good feel for it, it might make sense to ask if you could have a talk where you ask more general questions as well as talk about the writeup.

Program officers will have different levels of responsiveness to these questions. Some are more willing to talk general program or NSF issues than others. Some try hard not to inject their own opinions on proposal content both to increase fairness (relative to other PIs not getting feedback) and in case their opinions are wrong. Some prefer to reduce their contact with PIs during the proposal process in general, with the goal of avoiding biases induced by having such contact, and may want to interact by email versus calls or in-person visits.

But, you should at least get a response about program fit, and my general sense is that NSF program officers are pretty generous in interacting with PIs. If you’ve been waiting more than a week [14], it’s legitimate to re-send the mail, or try a different program officer associated with the program. Don’t take it personally, or give up on the idea of contacting POs [15].

Another way to get a sense of a program, and connect to its program officers and reviewing community, is to serve as a panelist. I’ve written a separate blog post about that so I won’t say much here, except that serving is a great way to learn a lot about proposal writing and evaluation while representing your intellectual communities and meeting both folks in those communities and program officers.

And I think that’s my story on this.  Hopefully this was useful for thinking about how to find places and people at NSF that might be good fits for you, and remember to look around for other thoughts on these topics.  A few that I bumped into while I was writing this are included below for your initial bonus amusement.

#30#

[1] There are also various administrative Offices at this level, but these don’t usually field many programs or solicitations, so I ignore them for simplicity.

[2] One of the things I’ve learned coming here as a rotating program officer is that NSF is less monolithic than you’d think. The high level structure of proposals, panels, etc., is mostly the same, and we have high level policy guidance, but practices can be quite different at every level from directorates to individual program officers.

[3] Yes, it’s awkward that the acronyms are close. There are a lot of acronyms here.

[4] DCLs vary widely; here are a couple of (expired) examples I’ve been involved with, one that solicited interdisciplinary SaTC proposals, and one that looked to advance citizen science research.

[5] For what it’s worth, I was not very good at this as a PI; I just submitted to CHS’s predecessor (Human-Centered Computing) a lot, although I had collaborators who were better at this game and wound up with some collaborative submissions to other solicitations. More generally, you should also look beyond NSF to other agencies, foundations, and industry; I wasn’t particularly good at that either so I won’t discuss that here.

[6] Or, names of PIs in your community who do the kind of research you do. Finding out where they get NSF funding could be pretty useful, and PIs are sometimes willing to share proposals, which can be super-helpful for understanding the genre of proposal writing [6′].

[6′] As can reviewing, which is good for both you and the community. See my post on how to become a reviewer for more.

[7] Another interesting aspect about new solicitations is that NSF solicitations in general have a bottom-up component. There’s also definitely a top-down strategic leadership idea behind them that the solicitation descriptions work to capture, but the proposals submitted and the panelists who review them help define them in practice. New solicitations may have a little more wiggle room in this sense because they don’t have this historical “in practice” momentum.

[8] Being involved in visioning workshops funded by NSF, the Computing Community Consortium (CCC), and other places that generate whitepapers, workshop reports, etc., about the state and future of a field or topic can be a way to have your own strategic impact along these lines.

[9] I wouldn’t spend space in your proposal, however, talking about how it aligns with some NSF goal or solicitation, and I especially wouldn’t quote solicitations. Whenever I see this, I think about how that space could be used to instead give compelling details about the project that could help convince panelists that the proposal is strong.

[10] Note that program officers often cover a broad range of topics, so although they will generally have a sense of the areas where they manage proposals, they will often not have personal deep research experience with specific topics. Two corollaries of that are (1) POs will be good at giving feedback about fit, but less well-positioned on average to give feedback about content, and (2) you should ask colleagues in the area for feedback on the content as you’re preparing proposals. Better to find out about something you missed before the panel than after.

[11] But, just as NSF program officers don’t know everything about every topic they manage proposals on, they also won’t know everything about the rest of NSF. It’s not so unlike being asked if you know a particular faculty member at your own institution. If they’re not close to your own department or research interests, probably not, unless you’re fairly senior or fairly outgoing/engaged and interact with other folks outside of the context of your own research.

[12] Sending a writeup in advance trades time explaining an idea on the phone for time discussing/getting feedback on ideas. Program officers aren’t infinitely busy, but they’re busy, and these explanations sometimes sound more like sales pitches, which are not very helpful. If the fit with a particular PO is not good, the writeup can help them suggest more appropriate folks to contact right away without you having to waste time waiting for an ultimately unproductive chat. If the fit is reasonable, seeing the writeup in advance lets them have more considered reactions than hearing it explained and reacting on the spot. Some program officers are also more comfortable and responsive responding in email than on the phone.

[13] At least in CHS and SaTC, two solicitations I’ve done a lot of work with, proposals often focus too much on an applied problem they’re looking to solve, or talk about general hoped-for impacts from the work, rather than the underlying research questions, contributions beyond existing knowledge, and specific impacts the project might achieve. Proposals that don’t make the research contributions clear are both usually dead in the water for panelists and very hard to reason about program fit for.

[14] Like academic life in general, program officer schedules can be bursty and time-bound. In addition to panels, which consume the better part of 5 days to organize and run and which some program officers organize a couple dozen of a year, POs travel to conferences, do internal and external service, and have other deadlines and responsibilities. A corollary of this is that it’s a good idea to make inquiries well in advance of submission deadlines.

[15] I had a pretty bad first couple of attempts to contact folks. What I now think happened in my case is that I mailed a program officer whose NSF rotation was ending, they didn’t respond before they left and lost access to their mail, and the mail dropped on the floor. People also accidentally delete emails (I estimate my personal rate is about 1 in 300), and mail servers sometimes fail (a program officer once tried to mail me as a PI to make an award recommendation very late in the fiscal year, meaning there was little time to put it together, and Cornell’s email system spam-filtered it away. Fortunately for me they also called on the phone.)

An idiosyncratic trip report from CHI 2017

I wanted to give some shout-outs and observations from stuff I saw [1] during my trip to CHI 2017 as an NSF rotating program officer. [2] I didn’t get to see that many talks because I spent a lot of time in NSF advice mode (including the NSF session that Chia Shen organized and that Amy Baylor and I helped out with; slides from that are available), and those I did see tended to be in spaces where I’m not yet expert but where I am managing some proposals. That way of choosing sessions turned out to be productive: Ron Burt argued at CSCW 2013 that one should occasionally go into other communities, and I did get some interesting insights that I wanted to put out there for other people to consider. Stories below are in roughly chronological order.

Barry Brown gave a nice talk about the social and semantic shortcomings of self-driving cars. The high level point is that in driving, people send signals to each other all the time (not just middle fingers) that help coordinate driving behavior. These signals get sent both with the car’s body — we drift, we leave gaps, we close gaps, we turn the wheel just a little at a stop — and with our own — gaze, nods, frowns, waves (and sometimes those middle fingers). Further, we have driving norms that differ by road condition, location, and culture. His claim is that self-driving cars neither read nor send these signals well, and don’t obey these norms, because the way they “see” driving is primarily in terms of finding where to drive and avoiding collisions. This, in turn, will cause coordination problems with other drivers as well as lead self-driving cars to be taken advantage of because they are relatively cautious compared to human drivers. It made me think about a self-driving car trained in Texas (very accommodating drivers, on average) taking a trip to New York (not so much), about whether self-driving cars could cope with Indian city traffic, and about just how you’d give a self-driving car a little more semantic signaling and social grace. [3]

Huiyuan Zhou’s talk about their system “Block Party” also made a nice point about how common map interfaces (in particular, Google Maps) emphasize place and route selection at the expense of other use cases. In particular, Block Party aims at use cases like tourism and moving that require exploration, sensemaking, and discovery of places, which in turn benefit from the use of pictorial, situated views and tools for orientation. Google Maps has tools like Street View that support these activities, but the talk claimed they are too tucked away in the interface behind the primary tasks, so people tend not to use them. Evidence for this comes from a comparison between the features people use in Google Maps versus Block Party (which foregrounds these exploration-related features) when completing sensemaking tasks; Block Party users were more likely to explore situated views and remembered more about the neighborhoods they explored. This has some straight-up design implications about map interfaces having multiple modes. They also had an interesting speculation about cases where people are exploring a place together (such as a CHI lunch group trying to figure out where to go) that suggests interactions where multiple phones are yoked to present different views or support different parts of the sensemaking task. [4]

There was another paper in the same session, presented by Nancy Smith, around environment designs that are less centered around human needs and goals (in particular, there was motivation from the apparently-growing Animal-Computer Interaction community). I am less personally attuned to this paper, though it had some plausibly interesting theoretical grounding, but when Nancy claimed that human environments are over-engineered for human safety with respect to animals, it made me think about the Brown and Laurier paper’s claim that autonomous cars’ focus on safety might lead to other negative consequences. The parallel was interesting, and I wonder if it would be useful to think about other places where we’re doing that as well, either specifically around safety or around other values that are consistently over- or under-emphasized in design. [5]

One such value, which I think is over-claimed and under-implemented in general in CHI work, is that of human agency. [6] Thus, it was nice to see agency get center stage in Amanda Lazar‘s double-feature on designing tangible and sharing interfaces for people with cognitive impairments. Using a “critical dementia” theoretical framing that encourages us to think less of loss and impairment [7] and more of experiences and strengths, she’s done a lot of work to develop toolkits aimed at supporting dementia sufferers’ self-expression and connection with both family and formal caregivers. I wish there had been a stronger statement of how agency was reasoned about during the design process, as well as some discussion about possible risks to agency, but it was still cool and moving work. [8] [9]

There were also a couple of other nice little themes in that session. First, both Amanda’s talks and Anthony Hornof’s work to design for people with Rett Syndrome (who have very severe cognitive and motor impairments) wound up pointing to worlds where flexible tooling might allow therapists, caregivers, and/or family to explore simple systems that could improve experiences and maybe agency for people with very individual needs that mainstream assistive technologies don’t address well. [10] Second, and related, is a theme about designing for caregivers and not just for the cared-for; this came out pretty strongly in Kellie Morrissey et al.’s paper about their attempt to build a mapping system that asked people to contribute information about the suitability of places for people with dementia. [11] It was a really nice session.

I also dropped in on a usable security session that was fun, if slightly wacky. [12] Yomna Abdelrahman and Mohamed Khamis gave a cute little talk about guessing phone PINs and lock patterns using thermal imaging. It’s unclear if it’s a practical attack (especially if people immediately use the phone, messing up the thermal signature), but at least for simple PINs and patterns it’s pretty effective if you can get a thermal image within 30 seconds or so. [14] Sauvik Das presented an interaction technique that used rhythmic tapping as a shared group password that identifies particular individuals in the group while rejecting attackers. I’m not sure I believe it’s the next big thing in authentication, as it feels like a lot of work for the benefit in a low-security situation. I did, however, like the underlying framing of “socially intelligent” security that calls attention to security requirements and goals in families and small groups. [15][16] Joshua Tan’s paper also had a fun element, using a unicorn avatar generator to create pictorial rather than textual hashes of public keys with the hope that this would lead to more effective detection of adversarial imposters when using cryptophones. Not so much, it turns out, at least in this implementation and experimental context, but the problem of helping people reliably and easily verify key hashes is a good one. [17]

The Tan paper, along with one by Yun Huang, called out an important point I’ve been thinking about: how the way we frame problems shapes our ability to work on them and the impact we might have. The main problem in the Tan paper, for instance, wasn’t a security problem: whether people can reliably detect differences between a reference picture or text and a communicated one is a perception and cognition problem. They hadn’t really thought about it this way, and it might have been productive to get a cognitive psychologist in on this to help design the representations, the comparison interaction, or both. [18] Yun’s talk was about leveraging diverse abilities in crowds to support video captioning. The emphasis in the talk was on solving the video captioning problem, and it was a reasonable talk and approach: people with different levels of hearing and English fluency tend on average to do different captioning tasks well, so divvy them up appropriately. For me, though, the general problem of developing good systems that maximize people’s ability to contribute is the more interesting bit, and a focus on that aspect might have made the talk more memorable. That might have also changed the methods from ones where people were binned fairly coarsely to ones where people’s actual behaviors were observed and used for maximizing outcomes. [20]

The Huang paper was part of a session on crowdsourcing where the first two papers invited plausibly-interesting parallels between crowdwork and other forms of work. Lynn Dombrowski talked about the problem of “wage theft”, i.e., low-income workers being systematically unpaid for work through employer practice or neglect. The paper was not itself about crowdwork, and in the talk there was some reasonable speculation about what technologies might do to support low-wage workers; still, it would be useful to make explicit a number of implicit parallels to crowdwork platforms and how employer power and platform/legal policy increase these risks. [21] The second talk by Ali Alkhatib did look to make some explicit parallels between crowdwork and piecework. I was really happy that this talk did some definitional work (“crowdwork” is often used to mean everything from Wikipedia contribution to Turk to TaskRabbit), and I appreciated the laying out of the history of piecework [22] [23]. The talk was less clear about just how piecework should inform our thinking about crowdwork and other on-demand markets (there were some discussions of complexity that didn’t quite come through), but overall it was nice to see these papers trying to deconstruct work markets — and very relevant to NSF’s push on Work at the Human-Technology Frontier; see also a related Dear Colleague Letter soliciting workshops and research coordination networks on the topic.

Finally, I’d like to think about getting rid of conference keynotes. [24] In general I have pretty tepid responses to them, and the two I saw were no exception — especially frustrating since I thought both had promise but then left me a little empty. The Monday one, by Neri Oxman, started with a great premise: we’ve spent so much time thinking about parts and assembly, but HCI in general and the maker/fabrication/prototyping movement could really benefit from thinking about materials and form instead (including ones that are inspired by natural forms). I was excited to hear some deep thoughts about this, but the talk itself was more a portfolio of a lot of visually appealing projects without enough synthesis or useful takeaways for my taste. [26] The Wednesday one, by Wael Ghonim, had the key point that we need to take seriously the values that algorithms promote and design them to promote the values we care about. That’s a point I can get behind, but the talk was much too much about the problem, which I think this audience has some sense of already, and didn’t have many concrete thoughts on ways forward: how might Quora or Facebook or Google News restructure algorithms and interactions to be better? [27] Even wrong or incomplete speculations I think would have gotten people’s juices flowing.

And that is most of what I have to say about CHI this year (plus this post is impossibly long), so I’ll stop. It was big fun and I want to thank the organizers, sponsors, authors, and other participants for making it possible, and I imagine I’ll be back next year.

#30#

[1] I encourage other folks to write similar reports to call attention to things they liked at the conference. Asking people to pay attention to your own stuff isn’t bad (I wrote a note asking people to read this, mea culpa!), but there’s real personal, relational, and community value in highlighting good stuff from other people.

[2] The views expressed in this post are solely my own and do not represent those of my Foundational overlords.

[3] Barry Brown and Eric Laurier. 2017. The Trouble with Autopilots: Assisted and Autonomous Driving on the Social Road. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 416-429. DOI: https://doi.org/10.1145/3025453.3025462

[4] Huiyuan Zhou, Aisha Edrah, Bonnie MacKay, and Derek Reilly. 2017. Block Party: Synchronized Planning and Navigation Views for Neighbourhood Expeditions. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 1702-1713. DOI: https://doi.org/10.1145/3025453.3026035

[5] Nancy Smith, Shaowen Bardzell, and Jeffrey Bardzell. 2017. Designing for Cohabitation: Naturecultures, Hybrids, and Decentering the Human in Design. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 1714-1725. DOI: https://doi.org/10.1145/3025453.3025948

[6] Systems such as recommender systems and other filtering technologies, or behavioral support and persuasive technologies, or machine learning decision-making and interactive agents, should be thinking a lot more about agency than they are. If I were ejected back into the research world tomorrow, I’m pretty sure that thinking about how to better define and reason about agency in both design processes and algorithms would be my big next research direction.

[7] I’ve always had a soft spot for assistive technology work, though have always been afraid to do it myself because I’m not sure I’d have the emotional chutzpah to work closely with folks who live with these impairments. This critical dementia framing is a useful counter to that.

[8] Amanda Lazar, Caroline Edasis, and Anne Marie Piper. 2017. Supporting People with Dementia in Digital Social Sharing. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 2149-2162. DOI: https://doi.org/10.1145/3025453.3025586

[9] Amanda Lazar, Caroline Edasis, and Anne Marie Piper. 2017. A Critical Lens on Dementia and Design in HCI. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 2175-2188. DOI: https://doi.org/10.1145/3025453.3025522

[10] Anthony Hornof, Haley Whitman, Marah Sutherland, Samuel Gerendasy, and Joanna McGrenere. 2017. Designing for the “Universe of One”: Personalized Interactive Media Systems for People with the Severe Cognitive Impairment Associated with Rett Syndrome. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 2137-2148. DOI: https://doi.org/10.1145/3025453.3025904

[11] Kellie Morrissey, Andrew Garbett, Peter Wright, Patrick Olivier, Edward Ian Jenkins, and Katie Brittain. 2017. Care and Connect: Exploring Dementia-Friendliness Through an Online Community Commissioning Platform. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 2163-2174. DOI: https://doi.org/10.1145/3025453.3025732

[12] Blase Ur’s talk about designing well-grounded and educational password meters was less wacky but quite solid. There’s a well-justified decision to estimate password strength using a relatively compact neural net that can run on the client; since those models are often hard to interpret, the educational explanations and suggestions come from a separate rule-based password parser, and this ‘rationalization’ type of explanation can make a lot of sense. The experimental design was solid and the finding that people created stronger but just as memorable passwords was nice, though at a slight cost in user satisfaction because the feedback imposed cognitive load. It was also one of the clearest and best-designed talks I’ve seen in a while. [13]
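
To make that division of labor concrete, here’s a toy sketch of the ‘rationalization’ pattern — one opaque scorer, plus a separate set of human-readable rules that generate the feedback. The placeholder scorer and the specific rules below are invented for illustration; they are not the meter from the paper.

```python
# Toy illustration of "score with an opaque model, explain with simple rules".
# The scorer is a crude stand-in (the paper uses a compact client-side neural
# net), and the rules are made up for illustration, not taken from the paper.
import re

def opaque_strength_score(password):
    # Stand-in for a learned, hard-to-interpret strength estimator.
    return min(1.0, len(set(password)) / 16.0)

def rule_based_feedback(password):
    # Separate, interpretable rules that generate educational suggestions.
    tips = []
    if len(password) < 12:
        tips.append("Longer passwords are harder to guess; try 12+ characters.")
    if password.islower() or password.isupper():
        tips.append("Mixing upper- and lower-case letters adds strength.")
    if not re.search(r"\d", password):
        tips.append("Adding a digit (somewhere other than the end) helps.")
    if re.search(r"(.)\1\1", password):
        tips.append("Avoid repeating the same character several times in a row.")
    return tips

pw = "correcthorse"
print("strength estimate:", round(opaque_strength_score(pw), 2))
for tip in rule_based_feedback(pw):
    print("-", tip)
```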

[13] Blase Ur, Felicia Alfieri, Maung Aung, Lujo Bauer, Nicolas Christin, Jessica Colnago, Lorrie Faith Cranor, Henry Dixon, Pardis Emami Naeini, Hana Habib, Noah Johnson, and William Melicher. 2017. Design and Evaluation of a Data-Driven Password Meter. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 3775-3786. DOI: https://doi.org/10.1145/3025453.3026050

[14] Yomna Abdelrahman, Mohamed Khamis, Stefan Schneegass, and Florian Alt. 2017. Stay Cool! Understanding Thermal Attacks on Mobile-based User Authentication. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 3751-3763. DOI: https://doi.org/10.1145/3025453.3025461

[15] Thinking about usable privacy and security above the level of individuals but below the level of large organizations is one of former NSF/SaTC program officer Heng Xu’s big pushes, a good one I think.

[16] Sauvik Das, Gierad Laput, Chris Harrison, and Jason I. Hong. 2017. Thumprint: Socially-Inclusive Local Group Authentication Through Shared Secret Knocks. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 3764-3774. DOI: https://doi.org/10.1145/3025453.3025991

[17] Joshua Tan, Lujo Bauer, Joseph Bonneau, Lorrie Faith Cranor, Jeremy Thomas, and Blase Ur. 2017. Can Unicorns Help Users Compare Crypto Key Fingerprints?. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 3787-3798. DOI: https://doi.org/10.1145/3025453.3025733

[18] There’s a pernicious problem in CHI (and many other disciplines) of not effectively engaging other domains. Joe Marshall had an alt.chi talk (which I did not see) about this [19], and Liz Murnane focused in on it for her dissertation. I’ll add one observation to this, which is that a number of our favorite methods (including grounded theory, user centered design, and machine learning) are often badly applied in ways that encourage us to ignore what is already known in our own and other fields, which in turn limits our ability to advance the conversation. Hopefully there will be a useful blog post about this down the road.

[19] Joe Marshall, Conor Linehan, Jocelyn C. Spence, and Stefan Rennick Egglestone. 2017. A Little Respect: Four Case Studies of HCI’s Disregard for Other Disciplines. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’17). ACM, New York, NY, USA, 848-857. DOI: https://doi.org/10.1145/3027063.3052752

[20] Yun Huang, Yifeng Huang, Na Xue, and Jeffrey P. Bigham. 2017. Leveraging Complementary Contributions of Different Workers for Efficient Crowdsourcing of Video Captions. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 4617-4626. DOI: https://doi.org/10.1145/3025453.3026032

[21] Lynn Dombrowski, Adriana Alvarado Garcia, and Jessica Despard. 2017. Low-Wage Precarious Workers’ Sociotechnical Practices Working Towards Addressing Wage Theft. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 4585-4598. DOI: https://doi.org/10.1145/3025453.3025633

[22] I was, in fact, essentially a pieceworker for about 3 years during and after undergrad, working for a bank typing dollar amounts onto checks just as fast and accurately as I could and getting my hourly rate set by my typing rate. I’ve also seen some amount of wage theft as an hourly employee at a wide variety of jobs (3 years at McDonalds, 6 months as a dishwasher, another 6 as a weekend night auditor at a hotel, 4 months taking phone orders for pizza, credit cards, and most incongruously given how little I knew/know about lingerie, Victoria’s Secret).

[23] Ali Alkhatib, Michael S. Bernstein, and Margaret Levi. 2017. Examining Crowd Work and Gig Work Through The Historical Lens of Piecework. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 4599-4616. DOI: https://doi.org/10.1145/3025453.3025974

[24] Maybe one could start the average conference instead with a welcome and a big poster session where everyone started interacting with and meeting each other right away. [25]

[25] I do realize this means less drinking at the poster sessions.

[26] This “too many projects, not enough synthesis” talk style tends to be more common with people who are either (a) from portfolio-oriented disciplines, where I think this is a little more of a norm, or (b) senior folks who have done a lot of work and are more of a mind to show breadth rather than carve out a deep path through it. Both are hard on talk consumers, who could really use work by the speakers to carve out the takeaways. It’ll be interesting to see if I inflict the same pain as I get (even) older.

[27] There were also some inconsistencies here around paternalism, values, and agency: there’s a delicate balancing act between not wanting platforms to arbitrate truth but also wanting them to encourage discussions that are “objective”. This goes back to the need to think about agency and whose values are being supported. I’ll also point out that focusing on “fake news” or “misinformation” risks leading us toward positions that if we just find the true news and mitigate misinformation, everything is going to be All Better. Unlikely. These stories take the form of news, but what they’re really doing is expressing and reinforcing values, claiming and recruiting group membership, and defending friends and attacking opponents. Serious work in this space is going to have to engage with the idea that not all policy discourse or political or personal values are grounded in fact-based deliberation.

Increasing your chance of serving on an NSF panel

tl/dr: How do you get on an NSF panel? Ask:

  • Program officers who often review proposals in your area.
  • With enough info about yourself to help them think about your expertise.
  • At times that they’re looking for panelists so that it’s salient.
  • (And, let your senior colleagues know you’re interested too.)

More details:

One of the questions I get as an NSF program officer [1] is “how do I get invited to be on a panel?” [2] One high-level answer is that you ask [4] — easy, right? But there are some aspects of how you ask that might increase your chances, and that’s what this post is about.

First, you should ask the right people and programs at NSF [5]. You can get some feel for this by asking your own colleagues. Another strategy is to use NSF’s Award Search tools to find programs and program officers who tend to administer awards close to your own heart. Search using terms you’d expect to see, and click through to find details about the awarding program and managing program officer. [6]
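
If you prefer scripting to clicking, the same kind of search can be done programmatically. Here’s a minimal sketch against NSF’s public Awards API — this assumes the api.nsf.gov endpoint and its keyword parameter are still current, and the field names are guesses to check against the API documentation rather than anything official.

```python
# Minimal sketch: search NSF awards for a keyword and list basic info per hit.
# Assumes NSF's public Awards API (api.nsf.gov) and its `keyword` parameter;
# field names like "id" and "title" are guesses -- verify against the docs.
import requests

def find_related_awards(query, limit=10):
    resp = requests.get(
        "https://api.nsf.gov/services/v1/awards.json",
        params={"keyword": query},
        timeout=30,
    )
    resp.raise_for_status()
    awards = resp.json().get("response", {}).get("award", [])
    for award in awards[:limit]:
        # Print whatever identifying fields come back; click through on the
        # regular Award Search site for program and program officer details.
        print(award.get("id"), "-", award.get("title"))

if __name__ == "__main__":
    find_related_awards("recommender systems")
```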

Once you’ve found a good candidate or two, drop them an email. Tell them you’re interested in paneling, along with a bit about you: who and where you are, how long you’ve been there, your expertise (some keywords, a short bio para about your research interests, and your home conferences/journals/research communities are all useful) [7]. Listing a web page and attaching a CV can also help people think about who you are and how you fit.

The timing of the mail may also help. My own evolving observation is that I’m looking for panelists — and know roughly what other program officers might be looking for — about a week after any given submission deadline [9]. Sending such a mail right after the deadline for a solicitation you have something in common with (so you’re a potentially qualified reviewer) but didn’t submit to this year (so you don’t have a conflict of interest [10]) might make your request especially salient [11].

Finally, it doesn’t hurt to let your intellectually related senior colleagues know that you’re itching to serve on a panel [12]. In practice, we often ask more senior folks to serve first [13], and they often decline [14]. They’ll sometimes volunteer alternate names (or tell me some when I ask if they know anyone who might be a good panelist instead), so that’s one more route to Arlington [15].

That’s what I’ve got about how to increase your chances of serving on a panel. Included below the footnotes are a few links from other program officers and panelists about how to get on a panel, how they work, and why you should; hopefully, they (and this) are useful. And, hope to see you on a panel sometime soon.

# 30 #

[1] Disclaimer: my thoughts and opinions in this post are entirely my own and in no way are meant to represent official positions of my NSF overlords or NSF itself.

[2] You absolutely should serve on panels [3]: it provides insight into and confidence in the reviewing process that can help your own proposing; it’s good service to the intellectual community and the country; you get to meet other interesting folks and chat with program officers; and like other kinds of reviewing, it’s one of the ways you are part of the conversation about how your field evolves and the gift economy of academia.

[3] Not too many, though. My panel experience is that it’s 3-4 hours per proposal I review (a common load in CISE is ~7-9 reviews) plus another 3-4 hours of logistics, plus 2-3 full days of travel + panel work — call it somewhere around a full work week per panel, all told. One per year is probably good, solid service; more if you’re a frequent submitter, less if less.

[4] Some programs/solicitations will send out broad surveys of availability and expertise to a large set of candidate panelists. Filling those out is another way to be on the radar.

[5] Everything in this paragraph also applies to thinking about where to _submit_ proposals, BTW.

[6] This will also help you get a broad picture of “what gets funded”, as well as discovering programs that might fit you but that you never knew about. I was pretty bad at this as a PI.

[7] Different people at NSF use different kinds of info [8] to help classify people and proposals — some use keywords, some think about main contribution venues, some use self-descriptions. So having a bit of each is not bad.

[8] One of my great surprises when I came here was that different directorates, divisions, programs, and people do things differently. There’s some high level agreement and policy, but lots of local variation.

[9] Other program officers may have different practices — see [8]!

[10] Conflict rules vary depending on the solicitation; in general, the more money or the fewer proposals involved, the stricter the conflict rules. For instance, for the CISE Research Infrastructure (CRI) program, you can’t have panelists from any institution involved with any proposal on a given panel.

[11] Our tools for finding panelists are not great, which induces a temptation to rely on your existing knowledge and network, which often leads to choosing repeat panelists and awardees you are familiar with at the expense of newer folks.

[12] Also worth doing this to encourage them to suggest you for program committees and conference organizing committees, which is good for burrowing into your research community.

[13] Particularly for larger competitions and things like CAREER and CRII proposals, where their relatively broader vision and greater experience are a win. And, [11]. And [8].

[14] My overall hit rate so far is that maybe 33% of folks I ask say yes.

[15] Or, starting in late 2017 assuming all goes as planned, Alexandria — NSF is moving.

A couple of thoughts from NSF itself and from former program officers:

Community members discussing the why (and sometimes the how) of panels — but remember [8]:

NSF is not the only funding agency, so a couple more guides that talk about other agencies:

Thoughts on #recsys2016 people recommendation tutorial

Rather than a tweetstorm, here’s one post with some reactions to the people recommendation tutorial at #recsys2016 by Ido Guy (Yahoo Research, Israel) and Luiz Pizzato (Commonwealth Bank of Australia, Australia), primarily Luiz’s part.

Not sure people recommendation has to be symmetrical; it would be interesting to ponder use cases where that’s not true (Twitter is the one that comes first to mind).

The point that success often gets measured outside the system is a good one — the example of a dating site being successful when people stop using it because they matched was cute. Victoria Sosik, Steve Ibara, and Lindsay Reynolds thought about that in persuasive systems, as did the Suhonen et al. Kassi work for barter and social exchange systems.

Random side thought: wonder about designing not just a dating site but a relationship one, where you might use the site to help continue and develop the relationship long term. Doug Zytko was thinking about this a while ago.

Claim that successful people recommendations are ones that lead to interaction is a little overstated maybe. For instance, one potential benefit of “unsuccessful” recs that don’t lead to interaction is learning more about the space of possible people — what is this social circle or company or population of mates _like_?

Point that unsuccessful people recommendations can have psychological effects was nice, as was the idea that people might be more inclined to be makers or receivers of people recommendations, and that varies by context…

The point that people might become less picky over time if early attempts on dates, jobs, etc. don’t pay off (or more picky if you have much success) was interesting, but is that algorithm-actionable? Or is that more about how far people are willing to go down a list of recommendations?

Fraud concern feels a little overstated, even with the legitimate threats of fudgers, liars, scammers… Reminds me a little of the obligatory preventing collusion section in crowdsourcing papers.

Why I’m rotating at NSF

tl/dr: Being a temporary program officer at NSF comes with real job and life tradeoffs; for me, the job tradeoffs around learning, service, and impact felt good, and the life timing turned out to be surprisingly good. So, I took a chance, and I’ll see some of you at NSF as panelists and others at conferences wearing my NSF hat over the next couple of years.

More details:

Last month I started a new academic adventure, as a rotating (temporary) program director (PD) at the National Science Foundation (NSF) [1], in a program called Cyber-Human Systems (CHS) [2]. Some people might wonder why and how this came to be, either out of curiosity or because they, too, might consider it someday [3].

First, some background on being a rotator: NSF regularly brings in outside folks for fresh ideas, energy, and connections to emerging/priority intellectual communities and fields. These assignments typically run about two years [4], occasionally longer, and you apparently do most everything permanent folks do: run panels, make funding recommendations, administer existing grants, collaborate with other programs and other funding agencies, and presumably things I’m still not aware of.

Now, the “why”. One answer is that I’ve thought about doing this for a long time. I found panels and proposal reviewing fun, and former program officers suggested that I might be good at it [5], so it’s been floating around in my head. I put together my own experiences as a panelist with readings of NSF’s own materials [3] and the testimonials of former officers [6], then had a number of conversations with former PDs, folks in the greater CHI/CSCW community, and people at Cornell. This all added up to me seeing real benefits (with some tradeoffs) around learning, service, and impact as a program officer.

I was pretty sure I would learn a lot, both about the broader field and about NSF itself. There’s a lot of territory in CHI and CSCW I don’t see so much, and I figured this would put me on a collision course with new spaces and give me a great bigger-picture view. Further, having more intimate knowledge of how the sausage gets made [7] at NSF was appealing both for its own sake and as a practical tool that, along with the perspective, would benefit both me and Cornell down the road. The downside risk for me is that I’m pretty broad already, and in some worlds the job responsibilities might encourage an awkward breadth-depth tradeoff.

Service. I like helping others — reviewing, commenting, advising, organizing, providing opportunity — and this is clearly a venue for that. Good reviews and process sometimes help PIs push on their ideas [8]; I’ll have plenty of chances to interact with PIs directly [9]; choosing and mentoring junior panelists can help them grow in their careers [10]. All of this has real import for a lot of people. The downside risk here is that the responsibilities trade off with doing your own research, and pretty much everyone said that productivity goes down during (and if you’re not careful, after) the rotation. NSF does give PDs up to 50 days a year of Individual Research and Development (IR/D) time to work on your own stuff, but that’s still much less than I was spending over the last several years.

Impact. I think I have the chance to have real impact, both in the small around particular proposal decisions and in the medium about encouraging kinds of work that I think are important. In the small, there are usually more awesome proposals to fund than dollars to fund them; panel input is taken quite seriously but program directors still make plenty of decisions about which of the good set to recommend for funding [11]. In the medium, you get to interact with folks at NSF and other agencies and try to convince them to allocate money in directions you think are important [12]. This might require more people skills than I have, but we’ll see. On the downside, as described above your direct research impact is likely to go down for a while. A few people also suggested that there might be value instead of rotating in waiting and taking a more senior temporary position (perhaps as a division director rather than a program director, or in agencies where individual program directors have more individual power).

In the end, I think the benefits beat the costs for me in the abstract, which brings me to the concrete “how it happened”. I’ve been told that there is an NSF policy that you need to be at least six years research-active past the PhD, so although I had pondered it, it didn’t become seriously plausible until about 2013.

In fall of 2014 Kevin Crowston’s rotation [13] was scheduled to end, and someone asked if I might be interested in trying out for the team. At the time I said no; Lindsay and I had just bought a house, we were getting married in a couple of months, and I was seriously thinking about what to do for a sabbatical after not planning one before reaching tenure in 2013 [14].

There are lots of other reasons why someone would be interested-but-not-willing to do the job at any given time. People I’ve discussed this with mention a number of them: kids and schools; geographical preferences and spousal prospects; having a lot of students, collaborators, or projects; timing around promotions or lack of support at the home institution [15]; and general risk aversion or different weighings of the values and risks I discussed earlier. All of these sound like good reasons to try it out later, or never.

But in July 2015 at the CSST summer institute [16], I heard that they hadn’t yet found someone to replace Kevin. When I ran the decision process again, a few new things bubbled up that made it seem much more plausible.

On the personal side, the new home argument didn’t seem as critical after having lived there for a bit. Talking about the sabbatical had gotten Lindsay excited about trying a new place [17] and her job is portable, making that an upcheck. We also wondered if a longish move now would be better than, say, in 5-10-15 years when our hypothetical kids were in school [18].

On the job side, Cornell Information Science has been growing, meaning my leaving for a while would be less of a burden for the department [19]. A couple of students had recently graduated and most of the others were pretty far along, so on balance I didn’t feel like I would be leaving them in the lurch [20]. I had a pretty positive outlook on the cost-benefit tradeoffs described above. Finally, I was just “called” to it [21].

And there you have it. The interview and administrative process were vaguely interesting, and the finances are mysterious-but-possibly-beneficial [22], but I still don’t fully understand them, so I’m going to wait a while to write about them — and this post is quite long enough. So I’m done, except to remind you that if you’re interested in serving on a panel — or talking about rotating — at some point, let me know.

#30#

[1] In this, and in all future posts, the views represented are entirely my own, not those of NSF itself nor my NSF overlords. For instance, when I say “my NSF overlords”, I’m pretty sure that’s not how they’d put it. Pretty… sure.

[2] I’m still getting used to NSF structure myself, so, the full story: NSF has directorates that oversee major scientific and administrative responsibilities at the Foundation; the directorate CHS is in is called Computer & Information Science & Engineering (CISE). Inside of directorates are divisions; the division CHS is in is called Information and Intelligent Systems. So, CHS -> IIS -> CISE -> NSF.

[3] NSF has a part of the website devoted to info about being a rotator.

[4] I won’t talk much about the logistics of how it works because there are different paths; my own is a program called the Intergovernmental Personnel Act (IPA), in which you’re technically still employed by your home institution and you return to it after you leave.

[5] Some general criteria former rotators have mentioned, if you want to run a self-test: scientifically good and well-connected to one’s research community (i.e., some street cred); open to but also willing to critically evaluate many ideas and methods (i.e., no axes to grind); putting in real effort at the reviewing task and executing it with competence and timeliness (i.e., you’re reliable); a proven record of community service and working reasonably well with others (i.e., you’re not an asshole).

[6] See, for instance, stories from Doug Fisher and Michelle Elekonich.

[7] Mmm, sausage.

[8] It doesn’t always feel that way when the decline letter comes around. I’ll be curious how it feels to be on the other side; former PD Wayne Lutters described real sadness around declines of worthy proposals.

[9] Here, I’ll need to be careful to keep boundaries, both for fairness and for sanity reasons. One boundary is around investing too much time giving advice about proposals; I think I will find this fun, but that makes it in turn a bit dangerous. Another is around managing cases where people are angry at reviews, reviewing, and/or me. I’ve been told this is not so common, but that it does happen.

[10] Hopefully, I’ll be able to have a positive impact here around diversity of demographics, perspectives, institutions, etc. And if you want to serve on a panel sometime, let me know; doing and seeing proposal reviews can be really helpful in your own proposal writing — and is also great service and a chance for impact.

[11] So, going back to what makes for an effective rotator: although you shouldn’t have an axe to grind, I’ve been told that you should have some level of vision, and be open to opportunities to encourage it.

[12] Note “recommend”: program directors make recommendations, usually in consultation with other program directors in their programs, about which proposals to fund. Division directors actually have to approve the recommendations, and the final award is actually made by a different part of the Foundation. So when you get that mail about being recommended for funding, remember that it’s “quite probably” but not “slam dunk”.

[13] Who replaced Sue Fussell, who replaced David McDonald, who replaced Wayne Lutters… it’s somewhat Biblical in that way. I’m told that a common tradition is to be called “the new X” where X = you minus 1. So, I’m “the new Kevin Crowston”, I suppose.

[14] I really wasn’t counting my tenure chickens, and there was a lot of luck along the way.

[15] I am really grateful to Cornell, in particular to Jon Kleinberg and now Thorsten Joachims in IS chair and Greg Morrisett in Computing and Information Science Dean roles, for having my back on this.

[16] Nerd camp for people who think that both the social and technical aspects of systems are important to consider when doing either research or development; see the CSST website.

[17] DC is not California, which was the original plan, but so far she still seems excited and happy.

[18] Much less hypothetical now that Gracie has arrived. The timing was pretty funny: I interviewed September 10, got a tentative expression of interest from NSF on the 17th, Lindsay peed on the stick on the 21st, and the CHI deadline was the 25th. So, that was two eventful weeks.

[19] It still wasn’t a light decision on this front for me. Departmental service is awkward as a program officer both because you’re not at your university (with the logistical and interactional issues that come with that) and because you can’t use your NSF-allotted research time for service (i.e., you do it on evenings and weekends). Many academics are no stranger to either of these, but Jon pointed out that leaves really are leaves, and that there’s value in fully committing to new things.

[20] I hadn’t been bringing in students the last couple of years because of a funding dry spell. Conspiracy theorists might speculate that this was an NSF plot, but that’s wrong: I just didn’t have the right proposal sauce for a bit.

[21] Jon described a similar feeling about the Networks, Crowds, and Markets textbook he co-wrote with David Easley a few years back, which I think is part of what led him to support this adventure.

[22] At a high level, your salary is annualized (i.e., I get 3 months of summer salary) and you get a good amount of financial support for expenses for research travel, including to your home institution while you’re there, as well as for moving to/residing in the DC area. It’s not bad.

Tenure and luck

I recently got official notice that I have tenure from Cornell [1]. With competition fierce for tenure-track jobs, I’m keenly aware that someone else might be writing this blog post right now [2]. And, though skill and hard work played a role, I want to acknowledge and call out the role of luck, circumstance, and coincidence in how I got here [3] — much of which was the result of other people.

I wouldn’t be so happy at Cornell or willing to stay if Lindsay Benoit [4] didn’t like Ithaca so much after moving here (2011).

I wouldn’t be as well known in my research community as a contributing member except for François Guimbretière and Sue Fussell inviting me to serve on PCs they were running shortly after they got hired here [5]. (2009-2010)

I wouldn’t have been hired by Information Science at Cornell except that my postdoc here gave me the chance to work with tons of folks in the Networks Project at the Institute for Social Sciences [6]. (2008)

I might have been hired in Communication instead of IS if Sue Fussell hadn’t applied to Comm the same year I did [7]. (2007)

I wouldn’t have applied for the postdoc, except that labmate Sean McNee from Minnesota met Sadat Shami, PhD student with Geri Gay at Cornell, at a late night CHI party where Sadat told Sean I should apply [8]. (2006)

I wouldn’t have even been able to apply for that postdoc with Geri, except that Louise Barkhuus had to turn it down late in the game to manage a two-body problem [9]. (2006)

I wouldn’t have moved into my niche in the socio-technical gap [10] without John Riedl and Joe Konstan at Minnesota, Paul Resnick and Yan Chen at Michigan, and Bob Kraut and Sara Kiesler at Carnegie Mellon collaborating on a grant while I was a student [11] that brought social science, design, recommender systems, and online communities together. (2003)

I wouldn’t have had a CV that looked postdoc-worthy if I hadn’t been lucky to have a high hit rate of papers as a student [12] and an awesome group of folks to collaborate with at Minnesota [13]. (2000-2006)

I wouldn’t have applied at Minnesota except that I had bumped into recommender systems as part of my masters thesis research [14] and thought they were cool. (1998-1999)

I wouldn’t have applied for a PhD at all except that James Madison University needed a CS instructor right after I graduated from the masters and they trusted me to do it [15]. (1998-2000)

I wouldn’t have thought of James Madison except that Sue Bender [16] had gone there for her undergrad, and wouldn’t have been able to go except that they were willing to fund an untested music ed major as a CS grad student [17]. (1996)

I wouldn’t have gone back to school for a CS degree if I hadn’t gotten a job as the one-man computer band for Progressive Medical Inc.: hardware, helpdesk, and network guy, plus maintaining a custom COBOL database [18]. (1995)

I wouldn’t have gotten that job except that David Bianconi (of Progressive Medical) got a recommendation to ask me from someone at Fifth Third Bank who I tried to help with installing a modem [19], and who remembered that when David was looking for someone to take over the tech side of the business a year later. (1994)

I wouldn’t have been working at Fifth Third except that in student teaching, seventh graders proved to be too dangerous for me to handle when armed with musical instruments [20] — and that Sue had gotten me interested in temp jobs, which is how I got hired there. (1993)

I wouldn’t have met Sue except that a traveling concert band at Ohio State needed two replacements for an overnight trip, who were me and her [21]. (1991)

And, I would never have had the skills to be interested in CS except that my dad somehow knew that he should buy me [22] a TRS-80 Model 1 [23] when I was 7. (1978)

There are also tons of people [24] and groups to acknowledge: parents for putting me in a position to be able to do this [25]; immediate family, notably Sue and Lindsay, for putting up with all the irregular schedule crap that comes along with having both great flexibility and responsibility in academic jobs; collaborators, co-authors, and mentors around research and teaching [26]; the folks who make the computational and bureaucratic systems that I worked with run well; students who testified that I’m not a total teaching loser; people who’ve trusted me with money along the way (largely NSF); participants who made the studies possible and organizations like Facebook and Wikipedia that have given me interesting contexts to study and tinker with.

I’ve probably left both some people and some luck out, but I think these are the highlights. Not all of these are necessarily for the better. In 2006 if I hadn’t gotten this postdoc I might have wound up at PARC or Drexel [27] and those could have wound up great too; maybe I would have been super-successful in Comm; teaching music might have been an even better life.

But it’s been a good ride, and to go back to my original point, a lucky and contingent one. My list is pretty long but I bet if you asked around, a lot of successful people would have their own stories of coincidence, luck, opportunity, and timing [28]. If you have some of your own to share, I’d be happy to hear them.

It’s probably not much comfort in the moment of a paper rejection, a turn-down from a school, an interview that goes badly [29] — but I’ve found that as I’ve become more mindful of the role circumstances play in life, I’ve mostly been happier about things no matter how they turn out. Hopefully reading this was useful for you, too.

#30#

[1] It says so right in our Workday system, which I checked on the day the letter promised it would be official. Even at the end I figured it might all be a mistake.

[2] My guess is that many successful people in academia have similar non-linear, luck-filled trajectories; we have a tendency to attribute good to ourselves and bad to the world but it’s nice to be honest sometimes.

[3] I am also influenced to do this by stories about the prevalence of adjunct and alt-academic jobs in the world. I don’t know what my orientation should be toward this, but it’s a real issue that many folks who come to grad school picturing an R1 position don’t wind up there.

[4] Current fiancee, to be married in November (in Austin, largely because we liked it as a mini-vacation after CHI 2012. Circumstance, indeed.)

[5] Serving on PCs and as a reviewer, by the way, is a real eye-opener if you haven’t done this already.

[6] There were a ton of good candidates in the IS search that year: Krzysztof Gajos, Tovi Grossman, Richard Davis, and Julie Kientz. All of them looked at least as good as me on paper, and without both the learning from and the collaboration with the folks at ISS it’s unclear I would have even gotten an interview. Plus, I met Ted Welser and Laura Black through that and, among other things, learned about poker from them. Geri hooked me up with that group, another thing to be thankful for.

[7] I still remember Gilly Leshed telling me that she’d heard someone senior was applying for the comm job that year and being pretty sad. And, as with [6], this is a “probably” (I might not have gotten the comm job either way).

[8] I had seen the ad for the postdoc at the conference, but figured I wouldn’t be good enough for Cornell. I still have serious issues with impostor syndrome.

[9] I remember chatting with her about this last year at CHI and thinking that I was pretty lucky, and also about how we all have to make choices around balancing family and career on a regular basis.

[10] $1 to Mark Ackerman.

[11] My only regret from that is that I wish I’d gotten to spend a semester at one of the other places to see a different look at things.

[12] You need to be lucky enough to get some papers accepted and to become well-known in the community as a grad student. I had more of the first than the second, largely because I was pretty bad at meeting people and networking. Students: read Phil Agre’s Networking on the Network. Soon.

[13] The fact that GroupLens was structured around a set of common problems and encouraged collaboration between grad students was a perfect fit for how I do things (though, I suppose it also shaped it).

[14] I still remember getting the comments back on the draft from Christopher Fox, that the work was good but “the tone was inappropriate for a scholarly monograph”. Judge for yourself (section 1.4 is particularly choice). And, some things don’t change: I got essentially the same comment from our grant office about an internal pre-proposal for a National Research Traineeship grant. It’s too bad: the grant would have been in part about the management, method, and ethics around doing social science research with social media datasets. Timely, that.

[15] And then realized that if you want to teach at a university in the long term, you more or less need a PhD except for some smaller places — and even that has become much less common than it was in 2000.

[16] Sue and I were married for 17 years.

[17] The princely sum of $5,500 a year, which was not quite enough to live on in Harrisonburg, Virginia, but pretty close.

[18] I still have a fond place in my heart for both COBOL and maintenance programming.

[19] Failing miserably, it turned out. I did also help them with some custom Access database development that must have gone better, although I didn’t know any more about Access than I did about networking or COBOL when I started.

[20] The high schoolers weren’t that much better for me. Trumpet divas and the “suck band”. I think I’d have a fighting chance now but at 22 I was no match.

[21] I played one of the loudest wrong notes in recorded history in Jackson, Ohio.

[22] To be fair, he might have bought it in part for himself, too; he had some gadget in him.

[23] Four K of memory and a cassette drive. Feel the power of the TRS-80 Model I!

[24] Plus all the people already mentioned, and others whom I have not listed for narrative or memory-failure reasons. To folks I missed: I am sorry for not listing you.

[25] Going bankrupt in the process.

[26] Special academic shouts out to Geri, Jeff Hancock, and Jon Kleinberg at Cornell; John, Loren Terveen, and Joe at Minnesota; and Mark Lattanzi and Chris at James Madison.

[27] Where I’d be The Senior HCI dude now, which is a little scary. They’ve really built some nice momentum there.

[28] I wish I could write a good blog post about how to increase your chances of those things; maybe someday.

[29] I have some stories about that, too. A future post, perhaps.

CHI 2014 highlights, 3rd and final

And, finally, a wrap-up of my favorites [0] from CHI paper talks I attended, following up on Part 1 and Part 2. We probably don’t do enough to call attention to other good things and people in our community, so this is a modest attempt at that [1].

I’ll start with a quick nod to former co-conspirator Xuan Zhao and her paper with Siân Lindley about Curation through use: understanding the personal value of social media. At a high level, the talk put the paper at the intersection of the Many Faces of Facebook paper and some of Will Odom’s stuff on digital possessions, but with a focus on the suitability of social media for personal archiving. I liked the exercise of building a “digital keepsake” from social media content as a way to prime the pump, and some of the suggestions around identifying meaningfulness through use (a la Edit Wear and Read Wear [2]) felt fun. I also liked the design implication to use social media content to help people build narratives for self and others [3]: instead of “see friendship”, you might “show friendship”.

Next up was a pair of papers that approached asking for help from friends and neighbors from very different value positions.

The first was Estimating the social costs of friendsourcing by Jeff Rzeszotarski and Merrie Morris. They note that asking for help can impose a burden on receivers and perhaps, via privacy concerns, on askers, then study how people balance those costs with the potential gains in social capital from asking and answering questions. The experimental design was plausible and the work related to parts of Munmun De Choudhury’s presentation around seeking stigmatized health information online (with Merrie and Ryen White).

The second was my favorite talk at CHI, by Victoria Bellotti [4] on behalf of the authors of Towards community-centered support for peer-to-peer service exchange: rethinking the timebanking metaphor. She took a critical look at the idea that favors might be converted into time-based currencies to trade for later favors, suggesting that the metaphor misses the social meaning associated with doing favors [5] while highlighting largely-negative constructs such as debt. She then proposed a number of design vignettes for emphasizing social values of exchange in the most energetic, fun way I’ve seen in a CHI talk in a couple of years [6].

I found the contrast fascinating, and both papers were thoughtful and worked out. They were also in different sessions, so hopefully bringing them together here will encourage people in this space to read them on a long, lazy summer afternoon and think about how they come together.

I also enjoyed the talk on Alexandra Eveleigh and colleagues’ paper about Designing for Dabblers and Deterring Drop-outs in Citizen Science [7]. The high-level story is that since participation in citizen science (and other peer production systems) follows a power law, much activity is in the tail, the “dabblers”. Thus, you might design to target them, rather than power users. To do this, they went out and asked both high and low contributors about their motivations for participating and came up with a fine set of design ideas that target infrequent contributors. I resonate with this goal — SuggestBot [8] was originally designed to help Wikipedia newbies do more useful work more easily. It was hard to actually get it in front of new editors (who often never return to log in or edit, and if they do, may not have known enough about Wikipedia software to see SuggestBot’s posts). The paper suggests that requests in the moment — to “tempt them to complete ‘just another page’” — may be more effective as a general strategy for engaging the infrequent [9].

Finally, Amy Voida’s talk about Shared values/conflicting logics: working around e-government systems, a paper she did with several Irvine colleagues, gave me a couple of thoughts. First, the talk made clear that even when high-level values are shared between managers, designers, and workers around systems, the interpretations and instantiations of those values by the parties (“logics”) can lead to problems in practice. Not a totally new story [10] but it highlights the utility of design Processes [11] where communication might reduce the chance of this value drift. It also called out that designing for end user independence is not always appropriate. Even a perfectly capable user of the electronic application system might not be able to effectively get help from the government aid System. Instead of designing to reduce applicants’ reliance on workers, you could imagine a design that helps applicants and workers cooperate to complete applications, providing support for situations when applicants get stuck and really do need help from people who know how the System works.

That is pretty much it for the story of favorites, so let’s be done. But think about doing trip reports yourself and sharing them with the world. It’s good to recognize interesting work, useful for learning more about the community, smart for connecting to the people and work that you call out, and hopefully a service to other people who benefit from your experiences.

#30#

[0] This is a personal view based on my tastes and the pragmatics of session attendance; I’m sure there were lots of other cool things, while other people will have different papers that take them to their own happy places. Another reason for you to do your own trip reports.

[1] Which has the nice side effect of me learning about the community as I put it together.

[2] Still one of the most inspiring papers I’ve ever read.

[3] It’s somewhere between scrapbooking and a “social media mix tape”.

[4] Who, at the time I searched for her on Google Scholar, had exactly 9,000 citations. Soon she will be “over 9000”, as it were.

[5] As a borderline Aspergers kind of guy, when people come to me with problems, I also tend to focus on the problem, rather than the person and their needs around the problem. As you can imagine, this goes over great with my fiancee when she’s seeking support rather than solutions.

[6] Sadly, the paper didn’t have as many vignettes, and very few were visual. I wonder whether, if the paper had included napkin-sketch interfaces of the kind that were in the talk, it would have triggered the “and so does it work?” reactions that system papers often get at CHI.

[7] It’s very cool that they tapped into this “dark matter” of infrequent contributors; we often only study the large, the successful, the vocal, the frequent.

[8] Google search results say “You’ve visited this page many times”. Indeed I have.

[9] Related to this, one of our goals at the CeRI project is to give people feedback about the comments they submit to public civic discussions while they write them, in order to improve quality and engagement.

[10] It reminded me of the idea of “work to rule” as a deliberate way to cause conflict.

[11] In the same way that I am about to use “system” to mean a technological artifact and “System” to refer to a set of concerns, people, and interactions around that artifact, here I am thinking something a little higher-level than the process of just designing the artifact. Maybe participatory design is more like it.

How I review papers

Pernille Bjørn is spearheading a mentoring program for new reviewers as part of CSCW 2015, which I think is awesome. I am mentoring a couple of students, and I figured as long as I was talking to them about how I approach reviews I would share it with others as well [0].

The first question is how close to the deadline to do the review [1]. A lot of people do them near the deadline, partly because academics are somewhat deadline-driven. Also, in processes where there is some kind of discussion among reviewers or a response/rebuttal/revision from the authors, the less time that’s passed between your review and the subsequent action, the more context you will have.

However, I tend to do them as early as practicable given my schedule. I don’t like having outstanding tasks, and although PC members know that many reviews are last minute, it is still nervous-making [2]. I also don’t mind taking a second look at the paper weeks or even months later, in case my take on the paper has changed in the context of new things I’ve learned. And, for folks who are getting review assistance from advisors or mentors, getting the reviews done earlier is better so those people have time to give feedback [3].

I still print papers and make handwritten notes in the margins whenever I can, because I find I give a little more attention in printed versus screen form [4]. If I’m reading on a screen I’ll take notes in a text editor and save them. I usually read in some comfy place like a coffeeshop (pick your own “this is nice” place: your porch, the beach, a park, whatever) so that I start with good vibes about the paper and also reward myself a little bit for doing reviews [5]. Try to do your reading and reviewing when you’re in a neutral or better mood; it’s not so fair to the authors if you’re trying to just squeeze it in, or you’re miffed about something else.

What I typically do these days is read the paper and take lots of notes on it, wherever I see something that’s smart or interesting or confusing or questionable. Cool ideas, confusing definitions, (un)clear explanations of methods, strong justifications for both design and experimental choices, notation that’s useful or not, good-bad-missing related work, figures that are readable or not, helpful or not, clever turns of phrase, typos, strong and weak arguments, etc. Anything I notice, I note.

The notes are helpful for several reasons. First, actively writing notes helps me engage with the paper more deeply [6]. Second, those notes will be handy later on, when papers are discussed and authors submit responses, rebuttals, or revisions. Third, they can themselves be of benefit to authors (see below).

Fourth, taking notes allows me to let it sit for a couple of days before writing the review. Not too long, or else even with the notes I’ll start forgetting some of what was going on [7]. But taking a day or two lets initial reactions and impressions fade away — sometimes you have an immediate visceral reaction either good or bad, and that’s not so fair to the authors either.

Letting it sit also lets me start to sort out what the main contributions and problems are. I make _lots_ of notes and a review that’s just a brain dump of them is not very helpful for the program committee or other reviewers. So, after a couple of days, I look back over my notes and the paper, and write the review. People have a lot of different styles; my own style usually looks something like this [8]:

----

Summary:

2 sentences or so about the key points I’m thinking about when I’m making my recommendation. This helps the program committee, other reviewers, and authors get a feel for where things are going right away.

Main review:

1 paragraph description of paper’s goals and intended contributions. Here, I’m summarizing and not reviewing, so that other reviewers and authors feel comfortable that I’ve gotten the main points [9]. Sometimes you really will just not get it, and in those cases your review should be weighed appropriately.

1-2 paragraphs laying out the good things. This is important [10]. In a paper that’s rough, it’s still useful to talk about what’s done well: authors can use that info to know where they’re on the right track, plus it is good for morale to not just get a steady dose of criticism. In a medium or good paper, it’s important to say just what the good things are so that PC members can talk about them at the meeting and weigh them in decision-making. Sometimes you see reviews that have a high score but mostly list problems; these are confusing to both PC members and authors.

1 short paragraph listing out important problems. Smaller problems go in the “Other thoughts” section below; the ones that weigh most heavily in my evaluation are the ones that go here.

Then, one paragraph for each problem to talk about it: what and where the issue is, why I think it’s a problem. If I have suggestions on how to address it, I’ll also give those [12]. I try to be pretty sensitive about how I critique; I refer to “the paper” rather than “the authors”, and I look for things that feel mean-spirited or could be taken the wrong way.

A concluding paragraph that expands on the summary: how I weighed the good and bad and what my recommendation is for the program committee. Sometimes I’ll suggest other venues that it might fit and/or audiences I think would appreciate it, if I don’t think it’ll get in [13]. I usually wish people luck going forward and try to be as positive as I can for both good and less good papers.

Other thoughts:

Here I go through my notes, page by page, and list anything that I think the authors would benefit from knowing about how a reader processed their paper. I don’t transcribe every note but I do a lot of them; I went to the effort and so I’d rather squeeze all the benefit out of it that I can.

Scores:

Different venues ask for different kinds of ratings; for CSCW, there are multiple scales. The expertise scale runs from 4 (expert) to 1 (no knowledge). I try to be honest about expertise; if I am strong with both domain and methods, I’m “expert”; if I’m okay-to-strong with both, I’m “knowledgeable”; I try not to review papers where I feel weak in either domain or methods, but I will put a “passing knowledge” if I have to, and I try hard to turn down reviews where I’d have to say “no knowledge” unless the editor/PC member/program officer is explicitly asking me to review as an outsider.

The evaluation scales change a bit from year to year. This year, the first round scale is a five-pointer about acceptability to move on to the revise and resubmit [14]: definitely, probably, maybe, probably not, not. The way I would think about it is: given that authors will have 3 weeks or so to revise the paper and respond to review comments, will that revision have a good chance of getting an “accept” rating from me in a month? And, I’d go from there.

----

Again, not everyone writes reviews this way, but I find that it works pretty well for me and for the most part these kinds of reviews appear to be helpful to PC members and authors based on the feedback I’ve gotten. Hopefully it’s useful to you and I (and other new reviewers) would be happy to hear your own stories and opinions about the process.

Just for fun, below the footnotes are the notes I took on three papers for class last semester. These are on published final versions of papers, so there are fewer negative things than would probably show in an average review. Further, I was noting for class discussions, not reviews, so the level of detail is lower than I’d do if I were reviewing (this is more what would show up in the “other thoughts” section). I don’t want to share actual reviews of actual papers in a review state since that feels a little less clean, but hopefully these will give a good taste.

#30#

[0] Note that many other people have also written and thought about reviewing in general. Jennifer Raff has a nice set of thoughts and links.

[1] Well, the first question is whether to do the review at all. Will you have time (I guesstimate 4 hrs/review on average for all the bits)? If no, say no. It’s okay. Are you comfortable reviewing this paper in terms of topic, methods, expertise? If no, say no.

[2] I was papers chair for WikiSym 2012 and although almost everything came in on time, the emphasis was on “on”.

[3] Doing your read early will also help you think about whether this is really a paper you know enough about the related work to review; when I was a student, I was pretty scared to review stuff outside my wheelhouse, and rightly so.

[4] Yes, I’m old. Plus, there’s some evidence that handwritten notes are better than typed.

[5] There’s a fair bit of literature about the value of positive affect. For example, Environmentally Induced Positive Affect: Its Impact on Self‐Efficacy, Task Performance, Negotiation, and Conflict.

[6] See the second half of [4].

[7] See the first half of [4].

[8] Yes, I realize this means that some people will learn that there’s a higher-than-normal chance that a given review is from me despite the shield of anonymity. I’m fine with that.

[9] Save things like “the authors wanted, but failed, to show X” for the critiquey bits (and probably, say it nicer than that even there).

[10] Especially in CS/HCI, we’re known to “eat our own” in reviewing contexts [11]; program officers at NSF have told me that the average panel review in CISE is about a full grade lower than the average in other places like physics. My physicist friends would say that’s because they’re smarter, but…

[11] For instance, at CHI 2012, I was a PC member on a subcommittee. 800 reviews, total. 8 reviews gave a score of 5. That is, only 1 percent of reviewers would strongly argue that _any_ paper they read should be in the conference.

[12] Done heavy-handedly, this could come off as “I wish you’d written a different paper on a topic I like more or using a method I like more”. So I try to give suggestions that are in the context of the paper’s own goals and methods, unless I have strong reasons to believe the goals and methods are broken.

[13] There’s a version of this that’s “this isn’t really an [insert conference X] paper” that’s sometimes used to recommend rejecting a paper. I tend to be broader rather than narrower in what I’m willing to accept, but there are cases where the right audience won’t see the paper if it’s published in conference X. In those cases it’s not clear whether accepting the paper is actually good for the authors.

[14] I love revise and resubmit because it gives papers in the “flawed but interesting” category a chance to fix themselves; in a process without an R&R these are pretty hard to deal with.

Sharma, A., & Cosley, D. (2013, May). Do social explanations work?: studying and modeling the effects of social explanations in recommender systems. In Proceedings of the 22nd international conference on World Wide Web (pp. 1133-1144). International World Wide Web Conferences Steering Committee.
http://www.cs.cornell.edu/~danco/research/papers/sharma-explanations-www2013.pdf

I don’t know that we ever really pressed on the general framework, unfortunately.

It would have been nice to give explicit examples of social proof and interpersonal influence right up front; the “friends liked a restaurant” example is somewhere in between.

p. 2

This whole discussion of informative, likelihood, and consumption makes assumptions about the goals being served; in particular, it’s pretty user-focused. A retailer, especially for one-off customers (as in a tourism context), might be happy enough to make one sale and move on.

Should probably have made explicit parallels between likelihood/consumption and Bilgic and Mooney’s promotion and satisfaction.

A reasonable job of setting up the question of measuring persuasiveness from the general work (though I wish we’d explicitly compared that to Bilgic and Mooney’s setup). Also unclear that laying out all the dimensions from Tintarev really helped the argument here.

Models based on _which_ theories?

p. 3

Okay, I like the attempt to generalize across different explanation structures/info sources and to connect them to theories related to influence and decision-making.

Wish it had said “and so systems might show friends with similar tastes as well as with high tie strength” as two separate categories (though, in the CHI 06/HP tech report study, ‘friends’ beat ‘similar’ from what I remember).

Okay, mentioning that there might be different goals here.

“Reduce”, not “minimize”. You could imagine a version where you chose completely random artists and lied about which friends liked them… though that has other side effects as an experimental design (suppose, for instance, you chose an artist that a friend actually hated).

p. 4

Yeah, they kind of goofed by missing “similar friend”.

_Very_ loosely inspired. The Gilbert and Karahalios paper is fun.

Seeing all those little empty bins for ‘5’ ratings that start showing up in later figures was a little sad — I wish we’d have caught that people would want to move the slider, and done something else.

We never actually use the surety ratings, I think.

Overall this felt like a pretty clean, competent description of what happened. I wish we’d had a better strategy for collecting more data from the good friend conditions, but…

The idea of identifying with the source of the explanation was interesting to see (and ties back in some ways to Herlocker; one of the most liked explanations there was a generic “MovieLens accurately predicts for you %N percent of the time”, in some ways getting people to identify with the system itself).

p. 5

We kind of got away with not explaining how we did the coding here… probably an artifact of submitting to WWW where the HCI track is relatively new and there aren’t as many qualitative/social science researchers in the reviewing pool compared to CHI.

It’s a little inconsistent that we say that a person might be differently influenced by different explanations, but then go on to cluster people across all explanation types.

p. 6

Should have added a reminder in the caption, something like “the anomalous 5 (where the slider started)”.

Is 5 really a “neutral” rating on the scale we used? Did we have explicit labels for ratings?

I keep seeing a typo every page or so, and it makes me sad. “continous”

Constraining parameters in theoretically meaningful ways is a good thing to do. For instance, if a parameter shouldn’t change between conditions, the models should probably be constrained so it can’t change (it’s kind of cheating to let the models fit better by changing those kinds of params).
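To make that concrete, here’s a minimal sketch of the shared-versus-free parameter idea (my own toy example in Python, not the paper’s actual model): fit two conditions with one shared slope, then again letting the slope vary per condition, and compare fits.

```python
# Toy illustration of constraining a parameter to be shared across conditions.
# Letting it vary per condition will always fit at least as well, which is the
# "kind of cheating" being cautioned against.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y_a = 2.0 * x + 1.0 + rng.normal(0, 1, x.size)   # condition A
y_b = 2.0 * x + 3.0 + rng.normal(0, 1, x.size)   # condition B: same slope, different intercept

def sse_shared(params):
    slope, int_a, int_b = params                 # one slope forced across both conditions
    return (np.sum((y_a - (slope * x + int_a)) ** 2) +
            np.sum((y_b - (slope * x + int_b)) ** 2))

def sse_free(params):
    slope_a, int_a, slope_b, int_b = params      # slope free to differ by condition
    return (np.sum((y_a - (slope_a * x + int_a)) ** 2) +
            np.sum((y_b - (slope_b * x + int_b)) ** 2))

print(minimize(sse_shared, [1.0, 0.0, 0.0]).fun)       # constrained fit
print(minimize(sse_free, [1.0, 0.0, 1.0, 0.0]).fun)    # unconstrained fit (never worse)
```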

p. 7

We talk about parameters for “the user”, but then go on to study these in aggregate. Probably “okay” but a little sloppy.

We really should have removed 5 entirely and scaled down ratings above 5. It probably wouldn’t change things drastically, but it would be mathematically cleaner as well as closer to average behavior.

So, for instance, maybe we should have constrained the discernment parameter to be the same across all models.

Not sure I believe the bit about the receptiveness and variability scores together.

p. 8

There’s an alternate explanation for the clustering, which is that some people are just “ratings tightwads” who are uninterested in giving high ratings to something that they haven’t seen.

I’m only lukewarm about the idea of personalizing explanation type, mostly because I think it’ll take scads of data, more than most systems will get about most users.

The point that likelihood and consumption are different I do like (and that we ack Bilgic and Mooney in finding this as well); and I like the idea of trying to model them separately to support different goals even better (though that too has the “you need data” problem) — we come back to this in the discussion pretty effectively (I think).

p. 9

The discussion starts with a very pithy but useful high-level recap of the findings, which is usually a good thing; you’ve been going through details for a while and so it’s good to zoom back up to the bigger picture.

The flow isn’t quite right; the first and third section headers in the discussion are actually quite similar, and those sections would probably be better merged.

p. 10

Jamming all the stuff about privacy into the “acceptability of social explanation” part is up and down for me. It’s better than the gratuitous nod to privacy that a lot of papers have, but it’s not as good as having it woven throughout the discussion to give context (and, it’s not connected to theories around impression management, identity work, etc., in a way it probably should be). Some parallels to this year’s 6010 class, where we did a full week on implications of applying computation to personal data last week (and talked about it sometimes as we went along).

I really like that we clearly lay out limitations.

=====

Walter S. Lasecki, Jaime Teevan, and Ece Kamar. 2014. Information extraction and manipulation threats in crowd-powered systems. In Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing (CSCW ’14). ACM, New York, NY, USA, 248-256. DOI=10.1145/2531602.2531733 http://doi.acm.org/10.1145/2531602.2531733

Unclear that you’d want Turkers doing surveillance camera monitoring, but okay.

I like that they test it in multiple contexts.

The intro kind of begs the question here, that the problem is labeling quickly.

The idea of algorithms that use confidence to make decisions (e.g., about classification, recommendation, when to get extra info) is a good general idea, assuming your algos generate reasonable confidence estimates. There was an AI paper a while ago about a crossword puzzle solving system that had a bunch of independent learners who reported confidence; the system that combined them used those confidences as weights and adjusted them once it saw which learners tended to over- or under-estimate. Proverb: The Probabilistic Cruciaverbalist. It was a fun paper.
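Something in the spirit of that combining step (a toy sketch of my own, not Proverb’s actual machinery; the solvers, answers, and calibration numbers are made up) might look like:

```python
# Combine candidate answers from independent solvers, weighting each solver's
# reported confidence by a calibration factor learned from how over- or
# under-confident that solver has been in the past.
from collections import defaultdict

# (answer, reported confidence) from three hypothetical solvers for one clue
candidates = {
    "solver_a": ("ORCA", 0.9),
    "solver_b": ("ORCA", 0.4),
    "solver_c": ("OPAL", 0.7),
}
# calibration factor per solver: < 1 means the solver tends to be overconfident
calibration = {"solver_a": 0.6, "solver_b": 1.0, "solver_c": 0.8}

scores = defaultdict(float)
for solver, (answer, confidence) in candidates.items():
    scores[answer] += confidence * calibration[solver]

print(max(scores, key=scores.get))   # "ORCA" wins the weighted vote
```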

Okay, some concrete privacy steps, which is good.

I’m less convinced by the categorical argument that fully automated activity recognition systems are “more private” than semi-automated ones. Scaling up surveillance and having labels attached to you without human judgment are potential risks on the automated side.

p. 2
Blurring faces and silhouettes is better than nothing, but there’s a lot of “side leakage” similar to the “what about other people in your pics” kind that Jean pointed out last week: details of the room, stuff lying about, etc., might all give away clues.

I hope this paper is careful about the kinds of activities it claims the system works for, and the accuracy level. I’m happy if they talk about a set of known activities in a restricted domain based on what I’ve seen so far, but the intro is not being very careful about these things.

I usually like clear contribution statements but it feels redundant with the earlier discussion this time.

p. 3

Overall a fairly readable story of the related work and the paper’s doing a better-than-CHI-average job of talking about how it fits in.

p. 4

It’s less clear to me what a system might do with a completely novel activity label — I guess forward it, along with video, to someone who’s in charge of the care home, etc. (?)

p. 5

I wonder if a version tuned to recognize group activities that didn’t carve the streams up into individuals might be useful/interesting.

One thing that this paper exemplifies is the idea of computers and humans working together to solve tasks in a way that neither can do alone. This isn’t novel to the paper — David McDonald led out on an NSF grant program called SoCS (Social-Computational Systems) where that was the general goal — but this is a reasonable example of that kind of system.

Oh, okay, I was wondering just how the on-demand force was recruited, and apparently it’s by paying a little bit and giving a little fun while you wait. (Worth, maybe, relating to the Quinn and Bederson motivations bit.)

You could imagine using a system like this to get activities at varying levels of detail and having people label relationships between the parts to build kinds of “task models” for different tasks (cf. Iqbal and Bailey’s mid-2000s work on interruption) — I guess they talk a little bit about this on p. 6.

I was confused by the description of the combining inputs part of the algorithm.

p. 6

The references to public squares, etc., add just a touch of Big Brother.

For some reason I thought the system would also use some of the video features in its learning, but at least according to the “Training the learning model” section it’s about activity labels and sensed tags. I guess thinking about ‘sequences of objects’ as a way to identify tags is reasonable, and maybe you won’t need the RFID tags as computer vision improves, but it felt like useful info was being left on the table.

Okay, the paper’s explicitly aware of the side information leak problem and has concrete thoughts about it. This is better than a nominal nod to privacy that often shows up in papers.

p. 7

I’m not sure the evaluation of crowd versus single is that compelling. I guess it shows that redundancy is useful here, and that the crowd can compare with an expert, but it felt a little hollow.

p. 8

I’m not sure what it means to get 85% correct on average. Not really enough detail about some of these mini-experiments here.

Heh, the whole “5 is a magic number” for user testing thing shows up again here.

I’m guessing if the expert were allowed to watch the video multiple times they too could have more detailed labels. The expert thing feels kind of weak to me. And, on p. 9 they say the expert generated labels offline — maybe they did get to review it. Really not explained enough for confidence in interpreting this.

p. 9

The idea that showing suggestions helped teach people what the desirable kinds of answers were is interesting (parallels Sukumaran et al. 2011 CHI paper on doing this in discussion forums and Solomon and Wash 2012 CSCW paper on templates in wikis). In some ways the ESP game does this as well, but more implicitly.

The intent recognition thing is kind of mysterious.

p. 10

This paper needs a limitations section. No subtlety about broadening the results beyond these domains. Cool idea, but.

===========

Hecht, B., Hong, L., Suh, B., & Chi, E. H. (2011, May). Tweets from Justin Bieber’s heart: the dynamics of the location field in user profiles. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 237-246). ACM. http://extweb-prod.parc.com/content/attachments/tweets-from-justin.pdf

Okay, a paper that looks at actual practice around what we might assume is a pretty boring field and finds that it’s not that boring after all. It’s a little sad that traditional tools get fooled by sarcasm, though.

It’s always fun to read about anachronistic services (Friendster, Buzz, etc.).

I wonder if Facebook behavior is considerably different than Twitter behavior either because of social norms or because the location field lets you (maybe) connect to others in the same location.

This does a nice job of motivating the problem and the assumption here that location data is frooty and tooty.

On the social norms front, it would be cool to see to what extent appropriation of the location field for non-locations follows social networks (i.e., if my friends are from JB’s heart, will I come from Arnold’s bicep?)

p. 2

(And, on the location question, I haven’t read Priedhorsky et al.’s 2014 paper on location inference, but I think they use signal from a user’s friends’ behavior to infer that user’s location — which I guess the Backstrom et al. paper cited here does as well.)

Okay, they also do a reasonable job of explicitly motivating the “can we predict location” question, connecting it to the usefulness/privacy question that often comes up around trace data.

Nice explicit contribution statement. They don’t really talk about point 3 in the intro (I guess this is the “fooling” part of the abstract), but I’m guessing we’ll get there.

Another mostly-empty advance organizer, but then an explicit discussion of what the paper _doesn’t_ do, which is kind of cool (though maybe a little out of place in the intro — feels more like a place to launch future work from).

The RW is not as exciting here, so far reading more like an annotated bibliography again. For instance, I wonder if the Barkhuus et al. paper would give fertile ground for speculating about the “whys” explicitly set aside earlier; saying that “our context is the Twittersphere” is not helpful.

At the end they try to talk about how it relates to the RW, but not very deeply.

p. 3

The fact that half the data can be classified as English is interesting — though I’m not sure I’m surprised it’s that little, or that much. (Which is part of why it feels interesting to me.)

Not sure I buy the sampling biases rationale for not studying geolocated info (after all, there are biases in who fills out profile info, too). This feels like one of those “reviewer chow” things where a reviewer was like “what about X” and they had to jam it in somewhere. (The “not studying reasons” para at the end of the intro had a little bit of that feel as well.)

10,000 entries… whoa.

p. 4

Hating on pie charts, although the info is interesting. It does make me want to say more like “of people who enter something, about 80% of it is good” — it’s a little more nuanced.

The “insert clever phrase here” bit suggests that social practice and social proof really do affect these appropriation behaviors — and it is also cool that some themes emerge in them.

It’s tempting to connect the idea of “self report” from the other paper to the idea of “disclosing location” here. The other paper had self-disclosure as part of an experiment, which probably increases compliance — so really, part of the answer about whether data is biased is thinking about how it was collected. Not a surprising thing to say, I guess, but a really nice, clear example coming out here.

So, no info about coding agreement and resolution for the table 1 story. I’m also assuming the percents are of the 16% non-geographic population. Most of the mass isn’t here: I wonder what the “long tail” of this looks like, or if there are other categories (in-jokes, gibberish), etc. that would come out with more analysis.

The identity story is a cool one to come out, and intuitively believable. It would be cool, as with the other paper, to connect to literature that talks about the ways people think about place.

p. 5

So, the description of lat long tags as profiles is a little weird — it means we’re not really looking at 10K users. We’re looking at 9K. That’s still a lot of users, but this is something I’d probably have divulged earlier in the game.

I wonder if one of the big implications on geocoding is that geocoders should report some confidence info along the way so that programs (or people) can decide whether to believe them — oh, okay, they go on to talk about geoparsers as a way of filtering out the junk early.

p. 6

I’m not sure how to think about the implication to replace lat/long with a semantically meaningful location for profile location fields. My main reaction was like “why is Blackberry automatically adding precise location info to a profile field in the first place?”

The idea of machine-readable location + a custom vernacular label is interesting, but it’s more machinery — and it’s not clear that most people who had nonstandard info actually wanted to be locatable. The reverse implication, to force folks to select from a pre-set list of places if you want to reduce appropriation, seems fine if that’s your goal. There’s something in between that combines both approaches, where the user picks a point, a geocoder suggests possible place names, and they choose from those.

All of these are a “if your goal is X” set of suggestions, though, rather than a “and this is The Right Thing To Do” kind of suggestion, and it’s good that the paper explicitly ties recommendations to design goals. Sometimes papers make their design implications sound like the voice of God, prescribing appropriate behavior in a context-free manner.

Unclear how many locations people really need; multi-location use seemed pretty rare, and it’s unclear that you want to design for the rare case (especially if it might affect the common one).

It’s often good to look for secondary analysis you can do on data you collect, and study 2 is connected to study 1 at a high level. Further, I tend to like rich papers that weave stories together well. But here they feel a little distant, and unless there’s some useful discussion connecting them together later, I wonder if two separate papers aimed at the appropriate audiences would have increased the impact here.

p. 7

I’m not sure why CALGARI is an appropriate algorithm for picking distinguishing terms. It feels related to just choosing maximally distinguishing terms from our playing with Naive Bayes earlier, and I also wonder if measures of probability inequality (Gini, entropy, etc.) would have more info than just looking at the max probability. (Though, these are likely to be correlated.)
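Here’s a toy illustration (my own, not CALGARI) of why an inequality measure can carry information that the max probability alone misses: two terms with the same max but very different spread.

```python
# For each term: its distribution across four hypothetical regions, the max
# probability, and the entropy of that distribution (lower entropy = more
# concentrated, so plausibly more "distinguishing").
import numpy as np

counts = {
    "yinz":    np.array([40,  2,  1,  1]),   # concentrated in one region
    "hotdish": np.array([50, 50,  0,  0]),   # split across two regions
    "soda":    np.array([50, 25, 15, 10]),   # same max as "hotdish", more spread out
}

for term, c in counts.items():
    p = c / c.sum()
    nonzero = p[p > 0]
    entropy = -np.sum(nonzero * np.log2(nonzero))
    print(f"{term}: max p = {p.max():.2f}, entropy = {entropy:.2f} bits")
```

“hotdish” and “soda” tie on max probability (0.50) but differ in entropy (1.00 vs. about 1.74 bits), which is the kind of extra signal I was wondering about.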

Again, a pretty clear description of data cleaning and ML procedure, which was nice to see.

I’d probably call “RANDOM” either “PRIOR” or “PROPORTIONAL”, which feel more accurate (I’m assuming that UNIFORM also did random selection as it was picking its N users for each category).

p. 8

Also nice that they’re explaining more-or-less arbitrary-looking parameters (such as the selection of # of validation instances).

Note that they’re using validation a little differently than we did in class: their “validation” set is our “test” set, and they’re presumably building their Bayesian classifiers by splitting the training set into training and validation data.
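In sklearn terms (a quick sketch with toy data, assuming this reading of their setup, not their actual pipeline), that split looks like:

```python
# Hold out a final evaluation set (what the paper calls "validation", what we
# called "test" in class), then split the remainder into training data and a
# tuning/"validation" set for building and selecting the classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)
```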

So, what these classifiers are doing is looking at regional differences in language and referents (recall Paul talking about identifying native language by looking at linguistic style features). It looks, based on p. 9’s table, like referring to local things is more the story here than differences in language style.

p. 9

Not sure about the claims about being able to deceive systems by using a wider portfolio of language… it really is analogous to how spammers work to defeat Naive Bayes spam filters.

It doesn’t really go back and do the work to tie study 1 and study 2 together for me.