
Personal trip report thoughts on SOUPS 2018

I wrote a trip report on SOUPS 2018 (the Symposium on Usable Privacy and Security) for other folks at NSF since NSF paid for it, and I thought I would go ahead and share a lightly edited version of it more widely because I like to call out other people’s interesting work with the hope that more people see it. As always, the views in this post are mine alone and do not represent those of my NSF overlords.

SOUPS, founded and oft-hosted by Carnegie Mellon, is historically a good conference focused on the human and design side of security and privacy in systems; here’s the 2018 SOUPS program, for reference. I’m a relative newcomer to SOUPS, having only attended since 2017 in my role in NSF’s Secure and Trustworthy Cyberspace program. So, this may be a bit of an outsider view — perhaps not so bad to get from time to time. I’ll structure the report in three main bits: first, to highlight a couple of themes I liked that were represented well by particular sessions; second, to note some other papers I saw that triggered pleasant paper-specific reactions; and third, to gripe a bit about a wider CHI problem that I also felt some of at SOUPS this year and last: that many papers are too focused on particular new/novel contexts and not enough on learning from past work and building generalizable, cumulative, fundamental knowledge upon it.

Some cool sessions on risks close to home, inclusiveness, and organizational aspects

Gripe aside, I liked a number of the sessions I saw. The last session of the first day was the highlight for me, with a clear theme around the privacy risks posed by those close to us (friends, family, associates) versus risks imposed by outsiders (strangers, companies, governments). The first paper, by Nithya Sambasivan et al., looked at this in the context of phone sharing among women in South Asia, and how technical novelty and cultural norms combined to shape attitudes about and actions toward privacy risks. The talk had some interesting bits about trying to increase the discoverability of privacy-enhancing behaviors and mechanisms such as deleting web cookies/history or private browsing modes.

The second paper in that session, by Yasmeen Rashidi et al., focused on how college students deal with pervasive, casual photography by those around them (mostly, as Anita Sarma pointed out, overt rather than covert photography, which I thought was a nice observation). The study used a method I hadn’t bumped into before called an “experience model” that summarized key moments/decisions/possible actions before, during, and after photo sharing; I thought it was an interesting representation of ethnographic data with an eye toward design. The beneficial aspects of surveillance in college fraternities reminded me of Sarah Vieweg and Adam Hodges’ 2016 CSCW paper about Qatari families experiencing social/participatory surveillance as largely positive — surveillance is generally cast as a pure negative, but there are contexts where it’s appropriate and meaningful.

The third paper, by Hana Habib et al., compared public and private browsing behavior using data from the CMU Security Behavior Observatory. Perhaps not surprisingly, people do more private/sensitive stuff in private modes, but maybe more surprisingly, self-reports aligned reasonably well with logged data. Here, too, there was evidence that people were at least as concerned about threats from co-located/shared users as about external ones. There’s also evidence that people assume private browsing does more privacy-related work than it really does (for instance, some folks believed it automatically enables encryption or IP hiding), possibly to people’s detriment.

The fourth paper in the session, by Reham Ebada Mohamed and Sonia Chiasson, was close to my own heart and research, with connections to Xuan Zhao, Rebecca Gulotta, and Bin Xu’s work on making sense of past digital media. It focused on effective communication of digital aging online through different interface prototypes (shrinking, pixellation, fading), which made me think straightaway of Gulotta et al.’s thinking about digital artifacts as legacy. But unlike that work, which was more about people’s reaction to their own content fading, this paper was more about using indicators of age to make the pastness of a photo more salient in order to evoke norms and empathy about the idea that things in the past are in the past and thus, as Zhao et al. argued, often worth keeping for personal reasons but not necessarily congruent with one’s current public face. The talk also explicitly put this kind of analog, gradual aging in opposition to common ways of talking about information forgetting as digital, binary, absolute deletion, and that was fun as well (and well-aligned with Bin Xu, Pamara Chang, et al.’s Snapchat analysis and design thinking).

Another nice first-day session was a set of lightning talks that clustered, broadly, around inclusion and empowerment in security and privacy issues. These included a general call toward the problem from Yang Wang, a focus on biased effectiveness of authentication systems for people of various demographic categories from Becky Scollan, a discussion of empowering versus restricting youth access online from Mariel Garcia-Montes, and a transtheoretical model-based call to develop personalized, stage-appropriate strategies to encourage self-protective privacy and security behavior from Cori Faklaris. On balance these were interesting, and more generally I like the move toward thinking about inclusive privacy/privacy for particular populations, both for their own sake and as edge/extreme cases that might speak back to more general notions of privacy.

On the second day there were also some fun talks I saw in the last session (detailed notes, alas, lost in a phone crash). These included Julie Haney and Wayne Lutters on how cybersecurity advocates go about their work of evangelizing security in corporations; James Nicholson et al. on developing a “cybersecurity survival” task, paralleling the NASA Moon Survival Task, to get insight into IT department versus general company attitudes toward security, which looked both promising and well-put-together; and a paper by an REU site team, presented by Elissa Redmiles, about co-designing a code of ethics with VR developers around privacy, security, and safety. It was nice to see an example of a successful REU site experience, and it highlighted a framing of people’s desire for “safety” in cyberspace that I think might make for a root goal concept, with “private”, “secure”, and “trustworthy” each capturing some aspect of it as a means.

Some cool papers

There were also a number of individual papers that caught my eye, including one by Sowmya Karunakaran et al. from Google about what people see as acceptable uses of data from data breaches. They had some interesting stories about both cross-cultural and cross-scenario comparisons (being able to survey 10K folks from six countries has its advantages); probably the most surprising tidbit was that people were least happy about the idea of academic researchers using these data: less so than targeted advertising, and much less so than notifications/warnings/threat intelligence sharing. I say surprising because some folks have observed that Amazon Mechanical Turk workers are more comfortable sharing personal data in tasks posted by academics than by others because academics are perceived as both more trustworthy and more legitimate (though Turk is different from breaches, since Turkers have the choice of whether to participate or withhold data, which they don’t in the case of the breaches). The ordering also roughly paralleled the amount of personal benefit the breach victims perceived for each use, which makes sense; it might be interesting to run a comparable parallel study around appropriate uses and users of non-breached, but openly released, datasets of social trace data.

There was a nice null-results paper by Eyal Peer et al. on whether face morphing — blending two or more faces into a composite — can influence decision-making by blending a person’s own face subliminally into the face of a person in an advertisement or communication campaign. This had a lot of theoretical juice behind it based on the prior face morphing literature and more general work around influence and cognitive psychology, so it was surprising that it didn’t work at all when tested. This caused the team to go back and do a mostly-failed replication study of some of the original work on face morphing’s impacts on people’s likability and trust ratings of images that included their faces. I admire the team’s really dogged work to chase down what was going on; it’s one more data point in the general story of research replicability, and might be a nice read for folks wanting to teach on that topic.

Susan McGregor’s keynote on user-centered privacy and security design had a couple of cool pieces for me. First, there was a bit about how standards for defining “usability” talk in terms of “specified” users and contexts, which raises cool questions about both who gets to do the specifying and how to think about things as they move outside of the specified boundaries. Not a novel observation, but one worth highlighting in this context, and related to the inclusive privacy discussion earlier. Second, there was a nice articulation of the distinction between usability and utility, and how scales/questions for measuring usability can accidentally conflate the two. For instance, something rated “easy” to use might really be not that easy, but so worth it that people didn’t mind the cost (or vice versa; I remember a 2001 paper by Andrew Turpin and William Hersh about batch versus interactive information retrieval system evaluation suggesting that a usable-enough interface can make up for some deficits in functionality). This raises ideas around how to develop scales that account for utility: rather than “usable”/“not usable”, what if we asked about “worth it”/“not worth it”? Some posters in the poster session made moves toward this idea, trying to measure the economic value of paying more attention to security warnings or of space/time/accuracy tradeoffs in a secure, searchable email archive.

I also liked Elham Al Qahtani et al.’s paper about translating a security fear appeal across cultures. There’s been some interesting work in the Information and Communication Technologies for Development (ICTD/HCI4D) communities showing that peers and people one can identify with are seen as much more credible information sources. This implies that you might want to shoot custom videos for each culture or context, and that turned out to be the case here as well — though just dubbing audio over an existing video with other-culture technologies and actors turned out to be surprisingly effective, raising cost-benefit tradeoff questions. Sunny Consolvo noted that Power Rangers appears to be able to use a relatively small amount of video in a wide variety of contexts, and that there might be strategies for optimizing the choice of shooting a small number of videos, the closest-fitting of which for a given culture/context could then be dubbed into local languages. Wayne Lutters had an alternate suggestion, to explore using some of the up-and-coming “DeepFake” audio and video creation technologies to quickly and locally customize videos — presumably, including one about the dangers of simulated actors in online content. 🙂

Norbert Nthala and Ivan Flechais’ paper about informal support networks’ role in home consultations for security reminded me quite a bit of some of Erika Poole’s work around family and friends’ role in general home tech support. The finding that people valued perceived caringness of the support source at least as much as technical prowess was both surprising and maybe not-surprising at the same time, but was good to have called out for its implications around designing support agents and ecosystems around security, privacy, and configuration.

There was also a nice, clean paper by Cheul Young Park et al. about how account sharing tends to increase in relationships over time, a kind of entangling that to some extent accords with theories of media multiplexity (gloss: people tend to use a wider variety of media in stronger relationships, though it’s not clear what the causal direction is). The findings had nice face validity around the practicalities of merging lives, ranging from saving money on redundant subscription service accounts such as Netflix to questions of intimacy around sharing more sensitive accounts. It also raises the question (in parallel with Dan Herron’s talk at Designing Interactive Systems 2017) of how to design account systems that can robustly handle relationships ending and disentangling.

A call for more generalizable, cumulative work

Now, to the gripe. The highest-level thing I liked least, based on my experiences there both last year and this year, is that too much of SOUPS focuses on descriptive/analytic work around specific new security and privacy contexts, without enough consideration of underlying principles about how people think about security and privacy, and how studying the new contexts adds to that. It’s important, for instance, to study topics like those in Cara Bloom et al.’s 2017 paper on people’s risk perceptions of self-driving cars or Yixin Zou et al.’s paper on consumers’ reactions to the Equifax data breach (which won a Distinguished Paper award). These are relevant contexts to address, and from what I remember the presentations/posters I saw about them were pretty good in and of themselves.

But for my taste, on average I don’t think we do enough work to connect the findings from the specific domains and studies at hand to more general models of how people think about trustworthy cyberspace, and how properties of the contexts and designs they encounter affect that thinking. For example, what do we learn from studying the risks of self-driving cars relative to other autonomous systems, or drones versus social media photo sharing versus (surveillance) cameras, or new IoT setups versus more classic ubiquitous computing contexts, or, to point back at myself a bit, how Turkers’ privacy experiences add to our understanding of privacy and labor power dynamics more broadly? To what extent are there underlying principles, phenomena, and models that could help us connect these studies and develop broadly applicable models?

This is related to a more general concern I have in the human-computer interaction (HCI) community about how methods that encourage deep attention to one context or dataset — including but not limited to many instances of user-centered design, ethnography, grounded theory, and machine learning modeling — can lead researchers to ignore relevant theoretical and empirical research that could guide their inquiries, improve their models, and more rapidly advance knowledge. (Anyone who wants an extended version of this rant, which I call “our methods make us dumb”, is free to ask.) I also see a lot of related work sections whose main point appears to be to claim that this exact thing hasn’t been done yet, rather than illustrating how the work aims to move the conversation forward. This, too, is not SOUPS-specific; you see it in many CHI papers (and, it turns out, CHS proposals).

Okay, gripe over and post over as well [1]. I hope there were some useful pointers here that help you with your own specific topics, and that your thinking and findings end up broad and useful. 🙂

#30#

[1] For once, no footnotes. [2]

[2] Oops.
