And, notes from Mary Flanagan and Eric Paulos’ keynote and tutorial from the second day of the SoCS workshop. Mary first.
Not all games are serious games: critical games, casual games, art games, silly games — the question here is about how play and games interact. Or, even more basic: how do we know when people (or animals) are playing? Openness, a non-threatening context, relative safety, …
Instead, thinking about important elements of play might be a fruitful way to help designers (“Intro to game design 101” part of the talk). So, for instance, thinking about what people are looking for in meaningful play: meaning/context, choice/inquiry, action/agency, outcomes/feedback, integration/experience, discernment/legibility. Or, thinking about ways to carve up the elements of the game itself: rules/mechanics, play/dynamics, and culture/meaning.
So, what does this mean if you want to design games with a purpose, ones that have social good or social commentary or social change as their goals? In particular, gamification has a connotation of making people do things, à la persuasive computing, which makes people who design games feel really awkward because play is generally voluntary. Are Girl Scout merit badges about “play”?
Buffalo: a game that juxtaposes pairs of words and asks players to name people who fit both words (“contemporary” “robot”; “female” “scientist”; “multiracial” “superhero”; “Hispanic” “lawyer”), with the group of players agreeing that the names are appropriate. It’s designed to help us reflect on stereotypes and implicit biases, and to increase or broaden our “social identity complexity” (the ability to belong to, respect, and know about multiple social groups).
Which is important, because apparently these kinds of biases and divisions show up really young. (Though, it makes you wonder how to design games — both practically and ethically — for the very young, to help address issues like that.) And, because the biases are floating around in your social computing users and in your social computing software.
A fair number of games and studies suggest that you can have fairly dramatic effects on the biases, at least in the short term. Long-term effects are less clear, though — how would we study that?
I ran out of battery so lost most of the last bit, but one notable element was a design claim that having a more diverse team leads to more diverse games and outputs. This rang true on its face, and has some support from our own experiences writing prompts for the Pensieve project; the team was largely rich white kids from the Northeast, and the prompts reflected that, in ways that sometimes made users from different backgrounds sad. So, this seems like a pretty useful nugget to take away.
Now, Eric, on design and intervention, and design vs. design research. In particular, claims about design research: it tends to focus on situations with topical/theoretical potential; it embeds designers’ judgments, values, and biases; and the results hopefully speak back to the theory and topics chosen, broadening the scope of knowledge and possibilities, as well as perhaps improving or reflecting on design processes themselves.
He’s also advocating for a riskier approach to and perspective on design, with the claim that we’re often good at solving well-defined problems (“good grades”) but not so good at having ideas that might help us think about tough problems (“creative thinking”). Further, the harder the problem, the less we know about it.
Like Mary, Eric is talking to some extent about critical design: speculative prototyping that calls out assumptions, possibilities, and hypotheses. There’s a general critical design meme that you should be looking for strong opinions, which doesn’t seem necessary (suppose I reflect on assumption X and come to conclude I’m okay with it); the more general goal is to look outside of the normal ways that we look at a situation.
Now we’re going through a process that Phoebe Sengers talks about for thinking about design (and research) spaces: figure out what the core metaphors and assumptions of the field are, look at what’s left out, and then try to invert the goals: focus on what’s left out and leave out what’s a core assumption. Here the context is telepresence: what would it mean to think about telepresence not for work and fidelity but for fun and experience? One bit is an interesting parallel between early casual telepresence robots and, say, FaceTime.
Our next step is to apply this to social computing: its core assumption (“find friends and connect”), what’s left out (“familiar strangers”), and the inversion (“design technology that exposes and respects the familiar stranger relationship”). Again, an impact claim here, that designs like Jabberwocky inspired things like Foursquare (with evidence that this is true, which is cool).
I wonder what the ratio of ‘successful’ to ‘unsuccessful’ or ‘more interesting’ to ‘less interesting’ critical design projects is. We’re now going through a large series of designs and it makes me wonder how many other designs we’re not talking about. Of course, you can say the same thing about research papers… and in both cases it would be really nice to see a little more of the sausage being made.
The large number of designs is also reminding me of the CHI 2013 keynote. Here the designs illustrate the idea that looking at other design cultures might be useful for us, but the connections between particular designs and the underlying concepts/points/ideas are often not so clear: how do we make sense of these as a group? There’s always a tension in talks between covering lots of cool stuff, making the connections between the pieces, and managing the audience’s capacity; and again, this is not specific to designs (you see it in, say, job talks sometimes).
Now a discussion around the value of amateurs, DIY culture, and the idea that innovations often happen when people cross over these boundaries ($1 to Ron Burt, I think). It’s not clear that this follows from the set of designs we’ve talked about, but it’s a plausible and reasonable place to be, and one that I try to live in a fair amount myself. There are costs to this — learning time and increased risk of failure — but I think that’s part of our game.
More heuristics around critical design, more than I expected, so kind of a giant list here:
- Constraints (when is technology useful)
- Questioning progress (what negative outcomes and lost elements arise)
- Celebrate the noir/darkness (seek out unintended uses and effects)
- Misuse technology (hacking/repurposing tech and intentionally doing things the wrong way)
- Bend stereotypes (who are the ‘intended’ users)
- Blend contexts (perhaps mainly physical and digital)
- Be seamful (exploit the failure points of technology)
- Tactful contrarianness (confidence in the value of inversions)
- Embrace making problems (designs that cause or suggest issues, not resolve them)
- Make it painful (versus easy or useful to use)
- Read (not just papers)
- Be an amateur (a lover of what you do).