SOUPS 2019 trip report

I wanted to go ahead and write up a trip report for SOUPS since I went on NSF’s dime and wanted them to get the nectar from it.  (Standard disclaimer that this post is a personal reaction with my Cornell-researcher hat on, this is not the position of my NSF overlords, etc.)

I was sitting next to Heather Richter Lipford, the general chair, and she asked me what I thought of the conference. My overall reaction was that it was Pretty Good.

On the plus side:

  • Nice breadth of topics with good subdivision and theming overall in sessions. This was fun to see, and gave a nice combo of focus and variety.  Even more could have been made of this structurally, with authors encouraged to reference/align their talks with other stuff in their session, but it led to interesting thought packages.
  • I got a really nice sense of experimental rigor in lab and deployment studies and scale development work; sometimes conferences in the broad HCI space are a little more iffy in their quantitative methodology and analysis in ways that make psychologists sad, and I didn’t feel that here.
  • Regardless of method, there were reasonably good descriptions of specific studies and the details of their findings, in general clearly presented; I almost always had a good idea of what was going on and what the motivations and procedures were.
  • I also appreciate how many of the papers included supplemental materials used in the studies to support better evaluation of the work, reuse, and reproducibility; I’m a big fan.

But:

  • There wasn’t enough putting of findings into the bigger picture of what’s already known from prior work in related contexts, and thus a sense that the work isn’t cumulating, reading instead as a series of one-off papers about, e.g., user attitudes toward specific problem/context X.  (This is not just a SOUPS problem, but I’ve felt it here every time I’ve come, so I feel compelled to call it out.)
  • Design implications in general followed well enough from the findings, but they were rarely deep or creative, at least as presented in the talks, and many of them didn’t say much about the feasibility of implementing them given all the forces at play around privacy and security problems.
  • The lightning talks were a little up and down because there were so many different types, ranging from dissertation proposals to software presentations to provocations, and they tended to be a little more in “here’s what I’m doing” rather than “here’s what you’ll get from it” mode.

Now that I’ve given the high-level reaction, I’ll give blow-by-blow descriptions and sometimes personal reactions to each of the sessions/talks. These are based largely on the talk content, with occasional chasing to the paper to fill in cracks. For folks who want to focus on the things I found most interesting, use your browser’s find feature to search for the string +++; this isn’t meant to be an indictment of other work, just an appreciation for the ones that resonated best with me.

It might be an interesting exercise to compare these to the actual paper abstracts, to both get the picture from the authors’ point of view and to think about the value of attending talks versus reading paper abstracts. Here’s a link to the full program for further reading; I also link to each full paper below.

Sunday night poster session

I read most of the posters and talked with maybe a third of the presenters, and on balance it was a pretty fun poster session; the list of poster topics and authors is worth a skim just to see the breadth of the topics, and includes links to the poster extended abstracts.

Probably the biggest theme was around developing security and privacy education content, including:

A couple of other posters stood out for me.

Monday Keynote

Jennifer Valentino-DeVries at the New York Times (and formerly the Wall Street Journal) gave the opening keynote.  The first half of the talk was a nice general romp through how her (and the industry’s) coverage of tech companies has evolved over time, moving from a basic “what is this” stance with a somewhat positive-to-quirky overall sentiment, toward a “what is wrong/risky about this” stance.  The specific risks discussed were not that new or surprising — bots, disinformation, data aggregation and sharing, and privacy are all things that people in this community talk about — but looking at these through a journalistic lens was interesting.

The second half of the talk probed alignments and differences between academic and journalistic analysis of technologies. Alignments include the value of evidence and reproducibility, and the complementary approaches to communication each takes that could work really well together. Those complementary approaches drive some of the main differences, around the nature of a journal versus a newspaper article (nuance, length), and around the goals of an academic audience versus a general audience. There was also a claim that academics are more driven by underlying process/explanation of phenomena, versus journalists focusing more on observation/description of important phenomena, and that investigative journalism has a longer timescale (months) than many academics realize. She also briefly talked about structural possibilities for collaboration, ranging from news reports on papers, to papers with journalistic extensions/applications, to fuller collaboration where the academic side brings expertise and skills on particular technologies that journalists can’t really have.

It made me wonder whether there would be interesting things to do around training journalists to work with technical academics, or vice versa (e.g., what might you tell PhD students about this, and in what contexts/classes?)

Monday late-morning papers: Special populations, and validated scales

The first talk looked at security and privacy issues around visually impaired users, and how these folks interacted with allies/caregivers (sometimes problematically). Nice data collection plan involving relatively intense observations + interviews, across work, public, and home contexts, that sounded potentially rich. However, the talk itself focused too much on things we already have some inklings of, around definitions of privacy and risks from caregivers, that didn’t feel that novel or that specific to visual impairment.  The talk made some strong hints about intersectionality and marginalization being important findings, but frustratingly didn’t present much about this — and I wondered whether what is really going on is about aspects of identity (which the paper put its weight behind), versus the needs induced by those aspects (which I think would be a promising way to connect and generalize some of the concerns across needs and contexts). The recommended design approaches around using participatory design and designing for collaborative use also didn’t feel that novel.

The second talk was about the privacy challenges faced by older adults (but not ones with cognitive impairments, who were specifically excluded).  Probably the strongest finding specific to older adults was a clear articulation of privacy-safety and privacy-autonomy tradeoffs that come with the need for support from others — and that these tend to be resolved away from the privacy side if monitoring/surveillance is still the best option for maintaining autonomy/agency. Another part of the talk focused on whether participants saw age as increasing or decreasing threat, which came up with an “it depends” kind of answer that resonates pretty well with some of Emilee Rader and Rick Wash’s earlier work around security folk models of threats. Later parts of the talk sounded less focused on older adults’ challenges: hand-me-down technology, misconceptions of how the technology works, strategies for managing risks, and delegating troubleshooting don’t feel exclusive to older adults, and although these might be harder problems for them, it’s hard to make strong claims about that (though to its credit the talk noted this and posed it in part as future work).

+++ For the third talk I wrote notes on paper and forgot to transcribe them before losing the paper, but looking back at the published paper reminded me that the idea was to better represent users’ privacy concerns in a validated scale that acknowledges that people have different concerns in different contexts (particularly, social versus institutional contexts tend to trigger different constellations of concerns). The scale development work and justifications were pretty thoughtful, and the validation was not bad, using the scale to collect perceived privacy ratings of pairs of interfaces designed to be more or less privacy-risky. Interestingly, the scale was more sensitive to privacy differences in social than institutional contexts, and the talk was very up front about this and about the possibility that One True Scale is not the way to go, versus scales designed for different kinds of privacy problems. This I can relate to; I think a lot of research spends too much time with concepts that are too broad to really grapple with (sometimes papers propose to tackle all of online hostility, or all of disinformation, or lump all of social influence into one construct).

The fourth talk was also a scale development talk, this one presenting a 6-item, 5-point scale focused on security attitudes, in particular attentiveness to and engagement with cybersecurity measures. They presented a mediation-type model that places this attitudes scale alongside the existing SeBIS scale of security intentions, suggesting that using both scales likely increases predictive power. The work is grounded in the Theory of Reasoned Action, and the candidate scale items were drawn from a broad range of existing scales related to it. The scale development and validation process was described even more clearly than in the last talk, and given that several other talks used short-scale-type questionnaires they had developed independently, this kind of validated work seems pretty valuable.

Monday after lunch: Security behaviors and experiences

+++ The first talk was a fun little look at how media representations of hacking both help form, and are evaluated in part by, people’s mental models of hacking. Perhaps not surprisingly, media affect those models, and usually for the worse: they suggest that hacking is more obvious/overt than it usually is (phishing or virus pop-ups being an exception), that hackers are stronger (and encryption and other defenses are weaker) than they are, and that hackers usually have specific and often important targets (thus, regular users have nothing to fear). The talk also had a nice bit about how people evaluate the realism of the media, in terms of their perception of the technical behavior, their ability to relate to the situation and its match with their existing folk models, and some assessment of the cinematic quality. Some overlap with other folk model work, but a nice take on it from the media lens, and the speaker drew a nice parallel to how Hollywood has medical advisors to reduce the risk that medical representations in film lead to negative consequences for viewers, arguing that maybe we should have something similar for security and privacy behavior.

The second talk uses one of B.J. Fogg’s models of behavior, which posits that behavior happens at the nexus of motivation, ability, and a trigger, and focuses on triggers of privacy and security behaviors, which they claim (correctly, I think) are less well-studied than motivation and ability. Analysis of a number of stories leads to a typology of these triggers, with three main categories: (1) social triggers based on advice from or observation of other people, (2) forced triggers from external stimuli that bring security/privacy front and center, such as data breaches or required password changes, and (3) proactive/internal triggers based on routine or habit.  They then asked MTurkers to tell them what motivated recent activities around mobile authentication methods, password updates, uninstalling apps, and Facebook privacy settings. It turns out that trigger type varies based on security behavior (forced, in particular, is more associated with password changes, as one might expect given password expiration policies). It also varies, not surprisingly, based on baseline security behavioral intentions (proactive is most common in folks with high security behavior intention scores), and people are much more likely to share behaviors triggered by a social trigger with others.

The third talk is a fairly close replication of a SOUPS 2015 paper on how experts and non-experts differ in their security advice and practices (I wish the talk had said a bit about why a replication was needed; the paper notes that non-expert practices might have improved over four years).  There were a few methods changes from the original, notably changing one question that originally asked for an overall rating of the goodness of a piece of advice; that question conflated effectiveness and realism, so the new study splits them out (and finds several examples of practices rated as effective but not so realistic/implementable, including password managers and 2FA, arguing that this means we need better usability for these tools).

+++ The fourth talk addresses the key verification authentication ceremony in the Signal protocol, in which people verify that some computations on their public keys match in order to avoid man-in-the-middle attacks. Unlike last year, when they presented a paper that tried to persuade everyone to do it, this year they’re looking for an approach informed by risk communication for encouraging this only when it’s needed, in order to avoid imposing unnecessary costs in the face of minimal risks.  This is especially important in Signal since usually when the computations change it’s because the software was reinstalled (and there’s no evidence of actual man-in-the-middle attacks in the wild), and many conversations aren’t that risky. It’s also hard for people to understand this because the idea of a safety number, the difficulty of verifying it, and the risk of not doing it are all inscrutable to them (and the dialog boxes in the current version don’t do a good job of explaining things). Redesigning dialogs and workflows to simplify them, give more information about the process, and use clearer terminology helps people make more informed decisions and build arguably better mental models of what the ceremony means and what the risks involved are.
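(To make “computations on their public keys” a bit more concrete, here is a toy sketch in Python, entirely my own illustration rather than Signal’s actual safety-number derivation, of how two clients can independently derive a short comparable code from both identity keys; a mismatch then signals either a possible man-in-the-middle or simply a key change from a reinstall.)

```python
import hashlib

def toy_safety_number(my_identity_key: bytes, their_identity_key: bytes) -> str:
    """Toy illustration only: NOT Signal's real derivation.

    Both devices can compute this locally from the two identity public keys;
    the code changes whenever either key changes (e.g., after a reinstall).
    """
    digest = hashlib.sha512(b"".join(sorted([my_identity_key, their_identity_key]))).digest()
    digits = "".join(str(b % 10) for b in digest[:50])         # 50 digits derived from the hash
    return " ".join(digits[i:i + 5] for i in range(0, 50, 5))  # ten groups of five digits

# If the numbers shown on both phones match, no one has swapped in a different key.
```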

The fifth talk is about consumer experiences of ransomware, via a large-scale representative sample. About 3% of folks were judged to have experienced it in a given year (9% overall) based on guided self-reports. Very few people pay, it turns out, although a fair number of folks change their security behaviors post-attack (notably antivirus stuff, being more careful in browsing, and backups — though the talk worried that this didn’t change enough). Perhaps surprisingly, demographics don’t affect likelihood of being a victim once you control for people’s security behaviors and their own prior exposure to online scams. I like that they checked this — I wish people did more work to really measure the constructs they care about instead of easier-to-measure ones (personality traits, I’m looking at you), and when pressed on it by a questioner they were very clear that they didn’t think demographics matter, except inasmuch as they correlate with more relevant constructs — kudos for that.

Monday late talks: “New Paradigms” (with an emphasis on design and usability)

+++ The first talk is about how to do privacy by design in the context of data science work, where you’re looking to balance needed access to personally identifiable information for data science with privacy concerns. The claim is that this would be a lot more effective if people thought hard about how to estimate how much and what kind of data they really need to do the work, and carefully scoped the work to be able to give meaningful answers to the questions.  This implies moving away from a relatively prevalent all-or-nothing mindset about access toward partial and just-in-time access, at appropriate and monitored levels, in auditable ways. This leads to an approach that thinks about privacy loss not so much in terms of properties of the full dataset; instead, you might compute actual privacy risks and losses based on the data actually accessed in the course of doing the work, impose a privacy budget on the analyst’s behavior, and have analysts make more decisions about accessing specific bits of data while seeing the effect of this on their privacy budget. The talk describes doing this for real in a record linkage task with real experts in both data analysis and privacy loss. I was curious how the privacy loss calculations might scale/adapt to doing multiple tasks on data over time with different analysts, since some of the losses might be cumulative over time.
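(As a thought experiment about what “impose a privacy budget on the analyst’s behavior” might look like mechanically, here is a tiny sketch of my own, with made-up names and costs such as BudgetedAccess; the paper’s actual accounting is presumably grounded in something more principled. Each field-level access is charged against a budget and logged for auditing.)

```python
from dataclasses import dataclass, field

@dataclass
class BudgetedAccess:
    budget: float                       # total privacy budget granted to this analyst
    spent: float = 0.0
    audit_log: list = field(default_factory=list)

    def request(self, record_id: str, column: str, cost: float) -> bool:
        """Charge an assumed per-field cost; deny (and log) if the budget would be exceeded."""
        if self.spent + cost > self.budget:
            self.audit_log.append(("denied", record_id, column, cost))
            return False                # analyst has to rescope the question instead
        self.spent += cost
        self.audit_log.append(("granted", record_id, column, cost))
        return True

session = BudgetedAccess(budget=10.0)
if session.request("person-123", "date_of_birth", cost=2.5):
    pass  # fetch and use the field for the record-linkage step
```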

The second talk is about “moving from usability to MPC and back again” rooted in a problem of data aggregation of sensitive data — here, gender and race disparities in wage rates in Boston. Half of the talk was about the usability of MPC itself. This is partly about helping non-experts understand the benefits of MPC and trust it; there were some parallels to how the earlier Signal talk addressed creating appropriate explanations aimed at the target audience rather than the developers. It’s also partly about the problems of validating encrypted, distributed data, which they look to address by constraining input based on domain knowledge. The second half addressed how one might use MPC methods to do web analytics analyses without collecting and distributing individual-level data, by computing analytics measures locally and aggregating them using MPC methods.
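(For readers who have not run into MPC before, here is a minimal sketch of my own, not the Boston deployment’s actual protocol, of the additive secret sharing idea behind “compute locally, aggregate without revealing individual inputs”: each party splits its value into random shares, and only the total can be reconstructed.)

```python
import secrets

PRIME = 2**61 - 1  # arbitrary large prime modulus for this sketch

def share(value: int, n_parties: int = 3) -> list[int]:
    """Split a local value into random additive shares mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def aggregate(all_shares: list[list[int]]) -> int:
    # Each aggregator sums the shares it received; combining those partial sums
    # (mod PRIME) reconstructs only the total, never an individual input.
    partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]
    return sum(partial_sums) % PRIME

local_counts = [12, 7, 30]          # e.g., per-organization totals held locally
print(aggregate([share(c) for c in local_counts]))  # prints 49
```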

The third talk looks at the somewhat alarming rise in phishing websites’ use of valid HTTPS certificates, and whether the certificates they obtain are distinguishable from those of legitimate sites, both in general and in the case of specific targets. Phishing sites tend to have more duplicate and invalid certificates on average, but still plenty of valid ones — and although the measurable features of those certificates vary in terms of type of validation and identity of the certificate authority, they don’t vary enough to be able to build strong models that would predict the likelihood of a phishing-based certificate, especially if you care about false positives at all. Once you get to the level of impersonating a specific target’s certificate, phishing sites get a lot worse, rarely being able to populate their certificates with target-specific data (interestingly, unless the phishing website was hosted on a service offered by the target).
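(To ground what “measurable features of those certificates” might mean, here is a small sketch, my own guess at the flavor of features involved rather than the paper’s feature set, using the Python cryptography package to pull a few certificate attributes one could feed to a classifier.)

```python
from cryptography import x509
from cryptography.x509.oid import NameOID

def cert_features(pem_bytes: bytes) -> dict:
    """Illustrative features only; the paper's feature set is surely richer."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    issuer_org = cert.issuer.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)
    subject_org = cert.subject.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)
    lifetime = cert.not_valid_after - cert.not_valid_before
    return {
        "issuer_org": issuer_org[0].value if issuer_org else None,  # which CA issued it
        "has_subject_org": bool(subject_org),  # rough proxy for OV/EV versus domain validation
        "lifetime_days": lifetime.days,        # free CAs tend to issue short-lived certs
    }
```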

Then there were lightning talks. The first one was based on one of the posters, the Star Wars poster mentioned earlier. The second was about “cognitive aspects of socially engineered payment diversion fraud”, and though they didn’t clearly define what that meant, it was surprising that victims thought they were less likely to be hit in the future (despite the clear evidence they are vulnerable), and also that phishing training didn’t seem to reduce these risks. The third talk was also based on a poster, around designing more inclusive alerts for privacy and security, focused on calling out a couple of methods for doing so (inclusive design, lead user/extreme/early adopter personas) that might have benefits for the general population as well as the target users. The last one talked about a “usability, deployability, security” analysis of Android Security Keys for two-factor authentication (2FA) and how they compare to USB-based 2FA, suggesting they’re a little better because you don’t need to do device pairing or device mounting as you often do with USB key-based solutions.

Tuesday morning talks: Developers

+++ The first talk was a survey + interview study of relatively small/independent app developers and their perceptions around using advertising in apps. It turns out they see using ad networks as a practical necessity relative to other monetization models (like paying up front), though largely based on their own perceptions rather than data, and they don’t feel like they make that much money anyways. They also claim to want to protect user security and experience, but choose ad network defaults primarily to minimize time and effort on their end, without deep consideration of either the monetary benefits to them or the privacy risks to users. Finally, they don’t do much monitoring of the ad network’s behavior, seeing it as something that’s both the network’s responsibility and something they don’t have much power over. Given these findings, the talk presented reasonable high-level follow-on research questions/design ideas, and overall it felt like a nice, useful descriptive study of this context.

The second talk worked to apply the idea of “code smells” to crypto API library usability (“usability smells”), based on some API usability principles proposed by Green and Smith. Interestingly, they started with some work analyzing Stack Overflow to see how well those principles lined up with the actual (usability) problems developers raised, although I wasn’t so convinced by the analysis that worked to map the issues into a taxonomy. The descriptions of specific issues didn’t feel like great fits for the higher-level categories the talk proposed, nor did the mappings of issues to the proposed “usability smells”; those were also more high-level and generic than most of the descriptions of code smells I see floating around online. So I wasn’t sure that this would actually be useful for API developers or users to heuristically evaluate API usability, even though I like the high-level idea.

The third talk was about firewall interfaces for sysadmins. The highest-level claim is that, at least for this task, survey results show the common wisdom that sysadmins prefer command line interfaces is wrong. In that survey, sysadmins reported being pretty skilled overall with firewalls and spending only a limited amount of time per week on them. This means that some of the advantages of CLIs, including flexibility and access to all functions, tend to be less salient to people who don’t need to get deep into the options very often. Folks who liked the GUIs liked that they tended to present info/status more effectively, and gave enough of the common functionality that it was better than remembering CLI commands for something they don’t use that much. This sounds like it’s related to a more generalizable story about the relative advantages of CLIs versus GUIs with respect to task frequency and difficulty; I’m not sure there’s much specific about firewalls or security here.

+++ The fourth talk was about how sysadmins think about/deal with software updates. They’re more proactive than end-users in seeking out updates (which makes sense), and describe info related to updates as scattered in many places, which makes researching and deciding on them harder than it should be. The talk then spent some time on update testing and deployment processes, pointing out that sensible-sounding staged/staggered rollout strategies mean that vulnerable and user-facing production machines often wait a relatively long time to get the updates. (It also means that for a while you’re running different environments, which might also increase costs/load of supporting them). It’s also surprisingly hard to develop concrete deployment strategies that minimize disruption and make decisions around buggy updates. All of this is constrained by organizational context, where policy, coordination, and resources impact what sysadmins can actually do.

Tuesday late morning: Authentication

The first talk addressed the usability of doing continuous (or implicit) mobile authentication, looking at the cost/benefit of required re-authentications when the continuous authentication method gets suspicious or stale. Those reauthentications can be intrusive and interruptive, so the talk looks at methods the system might use to reduce that pain, including encouraging voluntary re-authentications at less intrusive times and providing info about the system’s current confidence in the user’s authentication status to help people understand and decide to voluntarily re-authenticate. Clever idea, although some of the implementations they tested (particularly one that gives a 10-second warning before imposing a re-authentication) don’t feel like good designs relative to either no warning or giving continuous status that people can check and act on between tasks; that said, their user study suggests that even the 10-second warning is better than no warning, and all warnings increase people’s use of voluntary re-authentication, which is arguably good. All task interruptions turn out to be super-annoying, though, so they emphasize the idea of trying to delay re-authentication at least a little bit to a time when the person isn’t doing an important task or when they’re not accessing sensitive information. Those recommendations aren’t so surprising based on prior work around interruptability by Iqbal, Bailey, Fogarty, and others, but it’s good to think about them in the context of balancing usability and security.

The second talk looks at the extent to which people can intentionally vary biometric measures (such as keystroke touch features on mobiles), which is perhaps interesting because most biometrics work assumes these features are byproducts rather than intentional. The idea is that this might highlight some problems with those assumptions or provide new resources for stronger authentication.  To do this, they provided people with a visualization of their typing behavior with respect to these features (based on timing, length, and location of the press on a key), along with a visualization of a desired target behavior that behaves differently. The upshot from measurement is that people can often, but imperfectly, do this, and (perhaps not surprisingly) more complex variations in terms of the amount of and relationship between changes increase the error rate. Their conclusion is that this might be a useful adjunct to password creation, and more generally that we should think harder about keystroke biometrics as a controllable variable and what that implies. I thought about piano players and the level of control they have over pressure and timing, and concluded that there might be something here.

Exploring Intentional Behaviour Modifications for Password Typing on Mobile Touchscreen Devices. Lukas Mecke, Daniel Buschek, Mathias Kiermeier, Sarah Prange, Florian Alt.
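(For concreteness, here is a rough sketch of my own, with invented field names, of the sort of per-keystroke features that “timing, length, and location of press” translates to; the paper’s exact feature definitions may differ.)

```python
from dataclasses import dataclass

@dataclass
class KeyPress:
    key: str
    down_ms: float      # timestamp when the finger touched down
    up_ms: float        # timestamp when the finger lifted
    x_offset: float     # horizontal offset from the key center, normalized to [-1, 1]
    y_offset: float     # vertical offset from the key center, normalized to [-1, 1]

def keystroke_features(presses: list[KeyPress]) -> list[dict]:
    """Derive simple hold-time, flight-time, and press-location features per key."""
    feats = []
    for prev, cur in zip(presses, presses[1:]):
        feats.append({
            "key": cur.key,
            "hold_time_ms": cur.up_ms - cur.down_ms,          # how long the key was held
            "flight_time_ms": cur.down_ms - prev.up_ms,       # gap since the previous key
            "press_location": (cur.x_offset, cur.y_offset),   # where on the key it was hit
        })
    return feats
```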

+++ The third talk is about why people don’t use password managers, and don’t use them effectively. The first observation is that “password manager” can mean both things that save entered passwords, like web browsers or Apple’s Keychain, and standalone password generation/management-focused tools, and those should be thought about separately. Part of the issue is that some nontrivial number of people don’t even know what a password manager is; others don’t know about or trust web browsers’ saving of the info, or conclude it doesn’t work well when related tech like “Remember me” checkboxes doesn’t use that saved info. Password generator-managers are seen as meaningfully more secure, but hard to set up and use, and raise some worries about single points of failure, both around security from others (including other authorized users of the computer) and around forgetting one’s master password.

The fourth talk is about the potential for Universal 2nd Factor (U2F) keys, given some known issues around them including potential for loss, the utility of SMS-based 2FA, and the vagaries of personal experience shaping how willing people are to interact with them. To this end they did a lab study of setup and a diary-based one-week field study of everyday use. In the lab study, the high-level result was that key setup was a little harder, but login a little faster, with keys than with SMS. The diary study found that sites kept sending SMSs even when people registered and preferred the hardware keys — annoying — and also that sites supported automatic logins that reduced key use — sad for security; the talk’s conclusion is that this means keys might be most effective primarily for specific use cases (sensitive data, public computers).

The fifth talk is about the usability of second factors one might use in two-factor authentication, with a very similar setup to the prior talk: a lab study of setup and a two-week study of use (in a simulated banking website with experimenter-posed banking tasks). The high-level takeaway is that improving the usability of setting up second factors is pretty important — and that thinking about differences in users’ capabilities (vision, age) in this domain would be important as well. The U2F key was fastest to authenticate but lowest-rated in terms of usability scale ratings, and more prone than most other methods to setup failures because of the workflow. (Passwords were, interestingly, rated as more usable than any other second factor.) More generally, they confirmed a lot of common themes about security use in general and 2FA in particular, in terms of people’s perceptions of their security threats affecting their preferences, and worries about the availability of the required login info/codes/devices. SMSs performed less well overall in terms of use time, setup time, and usability than I might have expected. Timed one-time passwords were relatively hard to set up (again, an issue with the setup workflow), but pretty usable once set up.

Tuesday afternoon talks: Privacy

The first talk looked at how the rules about the right of access in GDPR create privacy risks through possible errors in how organizations verify the identity of requesters. They do this, interestingly, by having two authors attempt to get information about each other through right-of-access requests that use information faked in the kinds of ways one might if hacking and social engineering. The attacks worked about 25% of the time on 55 large companies (though of those, a fifth leaked information about the wrong person). It’s a cute talk, and I appreciate the ethical discussion they had; my main question is that I don’t know whether it’s especially a problem with GDPR versus anything that has avenues for recovering personal information. Interestingly, I saw a newspaper article a few days after the conference reporting a very similar finding (from a different researcher, I think).

The second talk is a content analysis of privacy choices provided by 150 websites related to opting out of email, targeted advertising, and data deletion. Interestingly, and on the plus side, many privacy choices are located in multiple locations (though less so as sites get smaller, and primarily this happens around email subscription management). On the minus side, reading grade levels are very high for descriptions of privacy choices and their wording is inconsistent between sites, making it harder for users to engage with them. Further, the described impact of those choices is often ambiguous, and the user interaction style is often pretty terrible.  Maybe not surprising, but interesting to have catalogued in detail with concrete evidence. Recommendations fit the findings pretty well, including language standardization, centralizing privacy choices, and simplifying privacy controls.

+++ The third talk is about neural habituation/generalization to security notifications, which are often both repetitive and similar in look and feel to non-security-related elements. The idea that people ignore security messages is well-known, but actually connecting it to neurological theory and using that to drive the modeling is cool, and lets you differentiate habituation from fatigue. They do it in the context of a controlled design experiment on MTurk where they control for design similarity, amount of similar stimuli seen, and fatigue vs. habituation vs. generalization.  Conclusion: make security messages visually distinct — worth doing even though (and I suppose, theoretically, because) this violates general UI guidelines.

Then, more lightning talks. The first talk pointed out that, since seizing cell location data now requires a warrant based on a case called Carpenter, we should think about what other kinds of location data should be similarly protected. Based on a survey, most users think that most such data should be protected (and deleted as soon as the page no longer needs it). The second talk described a notification system (for application notifications such as arriving emails) that is privacy- and security-aware, addressing concerns of both users and notifiers. The third talk described a case study of building a UX team inside a privacy/security product organization, with some thoughts on building the business case, what to look for in employees, and how to integrate UX into the company’s development process. The last talk called for designing social (versus informational) interventions to address the privacy paradox and described a general research approach toward it (reading a little like a doctoral consortium talk).

Tuesday late afternoon talks: Privacy and Sensing

The first talk looked at experienced fitness tracker users’ practices and concerns around sharing fitness data. There’s a lot of work on this around sharing in social media, but here the authors are looking to understand motivations and audiences for sharing very broadly, in many contexts and circles. And, that’s what they found: people share with a wide variety of recipients, from friends and family to physicians and fitness activity incentive programs (with very different goals for each group). Further, they found that on average people don’t see fitness data as very sensitive in itself, versus with respect to social norms and impression management. (Though one questioner made the cool point that if they were _asked_ for some of this very same data, versus sharing it _themselves_, they might see it as more intrusive/risky.)

The second talk looked at people’s perceptions of the risks of smart home type devices, through a combo of eliciting mental models and interviews. The models varied; some were more focused on the technical flow of the data, while others focused more on the functions of the devices. Perceptions of risk, appropriate use and sharing, data retention and controls, etc. also varied in ways that align reasonably well with the ways people talk about these issues in other contexts (as did the summary conclusions about people’s perceptions of risk and ability to act) — good for face validity, but leaving me less clear about how this moves the conversation forward. Perhaps attitudes/knowledge around devices have evolved since other studies, and there is some value in confirming general findings in new contexts, and the paper notes aspects of both of these, but I wish this had been clearer in both the paper and the talk.

The third talk was about user understanding of, and security and privacy perceptions around, voice-based personal assistant ecosystems. The big takeaway is that people don’t think much about the system’s ecosystem; this, in turn, leads them to think that data is processed locally, not understanding the data sharing implications of third-party skills or of interacting with other smart home devices. As with the other talks in this session (and really, the first two in the first session), I really wanted to know how this built on what’s already known about related concepts: the last paper, for instance, explicitly noted that the findings aligned with more general internet perception findings. This paper did have a piece focused on perceptions of shopping through smart assistants that surfaced some interesting concerns and felt more novel; it made me wonder if there were other specific use cases or domains that would surface interesting specifics about people’s beliefs and behavior.

Then, one last brace of lightning talks. The first talk looks to come up with a useful gloss on privacy as “consistency of the informational behaviors of a system with the reasonable expectations/norms of its users” and a related paradigm for thinking, designing, and developing around it. The second describes an OSS project, “2FA notifier”, designed to support use of 2FA on popular websites that provide it by helping users notice it and enable it without having to be proactive about it. The third proposes that password logins should allow for mild errors in the password content by using some keystroke dynamics to help distinguish legit users from attackers, and giving that slack to presumed legit users. +++ The last talk suggests that instead of focusing on building trust in technology as a way to encourage usage, we should treat trust as maybe a side effect of desirable designs. Instead of thinking trust, think adoption motivation; more generally, “trust” is an overloaded term and you probably should be measuring something more directly related to what you’re really trying to build/create/enhance — this, I am totally on board with from seeing how this plays out around other complex concepts treated too generically, including “harassment”, “disinformation”, and “influence”.

#30#