Social computing
CS 347 Michael Bernstein
Announcements
Abstract drafts due Friday
We recommend getting feedback in office hours this week and next! We will work hard with you to help shape the project.
Recall: Sociotechnical system
Social interactions define the system
Technical infrastructure defines the system
The two components are interrelated and both responsible
Social computing as behavioral science: offering a new lens onto traditional social science theory. Examples:
Predicting tie strength with social media
Social capital’s relationship to social media use
YOU READ THIS
Social computing systems as supporting new, or more pro-social, forms of social interaction. Examples:
Q&A systems: Answer Garden evolves into StackOverflow and Quora
Collective action: Dynamo, SquadBox
The Good Stuff
Encouraging contributions
Social media’s influence on us
New models for online interaction
The Bad Stuff
Trolls, harassment, and moderation
Disinformation
AIs in social environments
The Good Stuff
[Beenen et al., CSCW ’04]
Social loafing: why should I contribute if many others could as well?
Hypothesis: calling out uniqueness will increase participation
Method: rating campaign on MovieLens (think: IMDB ratings)
“As someone with fairly unusual tastes, you have been an especially valuable user of MovieLens [...] You have rated movies that few others have rated: [...]”
Result: participants in the uniqueness condition rated 18% more movies
The Good (?) Stuff
“The Internet Paradox” [Kraut 1998]: people are more lonely the more they use the internet. Does Facebook use really displace our relationships?
Method: longitudinal time-series analysis of self-reported tie strength, compared to Facebook activity logs
Result: composed pieces (comments, posts, messages) increase tie strength substantially, but one-click pieces (likes) only by a bit
“Social network site” [boyd and Ellison 2007]
Well-being?
“Receiving targeted, composed communication from strong ties was associated with improvements in well-being while viewing friends' wide-audience broadcasts and receiving one-click feedback were not.” [Burke and Kraut 2016]
Job hunting?
“Most people are helped through one of their numerous weak ties but a single stronger tie is significantly more valuable at the margin” [Gee, Jones and Burke 2017]
Exposure to diverse political news?
“We find strong evidence that [social media] foster more varied online news diets. The results call into question fears about the vanishing potential for incidental news exposure in digital media environments.” [Scharkow et al. PNAS 2020]
“We […] quantified the extent to which individuals encounter comparatively more or less diverse content while interacting via Facebook’s algorithmically ranked News Feed and further studied users’ choices to click through to ideologically discordant content. Compared with algorithmic ranking, individuals’ choices played a stronger role in limiting exposure to cross-cutting content.” [Bakshy, Messing, and Adamic Science 2015]
The Good Stuff
[Viégas and Donath, CHI ’99]
Chat circles: “narrowcasting” via physical proximity
[Hiruncharoenvate, Lin and Gilbert, ICWSM ’15]
The Chinese government censors sensitive topics on social media.
However, homophones can be difficult for censors to distinguish from intended use.
和谐 (“harmony”, slang for censorship) vs. its homophone 河蟹 (“river crab”)
This work introduces an algorithm that decomposes words and nondeterministically generates homophones that are likely to confuse censors.
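A minimal sketch of the idea (not the authors’ algorithm; the tiny PINYIN table here is hypothetical illustration data, and the real system works over a full Chinese lexicon): replace each character of a sensitive word with a randomly chosen character that shares its pronunciation.

```python
import random

# Toy pinyin table (tones ignored): hypothetical illustration data.
PINYIN = {"和": "he", "河": "he", "谐": "xie", "蟹": "xie"}

# Invert the table: pronunciation -> all characters that share it.
HOMOPHONES = {}
for char, sound in PINYIN.items():
    HOMOPHONES.setdefault(sound, []).append(char)

def perturb(word):
    """Nondeterministically respell `word` with same-sounding characters.

    Each call can yield a different variant, so a censor cannot
    simply blocklist one fixed spelling.
    """
    return "".join(
        random.choice(HOMOPHONES.get(PINYIN.get(c, ""), [c]))
        for c in word
    )

print(perturb("和谐"))  # may print 河蟹 ("river crab"), among other variants
```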
[Horowitz and Kamvar, WWW ’10]
Technical challenge: question routing over IM
Use a joint model over topical relevance and social distance
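A rough sketch of such a joint model (the names, weights, and linear blend are assumptions for illustration, not the paper’s actual scoring function): score each candidate answerer by combining topical relevance with social proximity, then route to the top scorers.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    topic_relevance: float  # estimated P(candidate can answer the topic), in [0, 1]
    social_distance: int    # hops from the asker in the social graph (1 = friend)

def route_score(c, alpha=0.7):
    # Hypothetical blend of topical relevance and social proximity;
    # alpha trades off expertise against closeness.
    proximity = 1.0 / c.social_distance
    return alpha * c.topic_relevance + (1 - alpha) * proximity

candidates = [
    Candidate("expert_stranger", topic_relevance=0.9, social_distance=3),
    Candidate("close_friend", topic_relevance=0.5, social_distance=1),
]
# Route the question to the highest-scoring candidates first.
for c in sorted(candidates, key=route_score, reverse=True):
    print(c.name, round(route_score(c), 2))
```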
Interesting equilibrium: people were more willing to answer questions than ask them!
The Bad Stuff
[Cheng et al., CSCW 2017]
Popular press narrative: trolling is confined to an antisocial, sociopathic minority.
Experiment: put people in a good or bad mood, then show them a discussion seeded with positive or troll comments.
Measure resulting trolling behavior.
Positive mood, positive norm: 35% troll comments
Negative mood, positive norm: 49% troll comments
Positive mood, negative norm: 47% troll comments
Negative mood, negative norm: 68% troll comments
The effects compound.
[Figure: proportion of flagged posts on CNN.com (roughly 0.03 to 0.042) by time of day (6 to 24), tracking daily negative affect; Golder & Macy 2011]
Why does this happen? [1min]
[Suler 2004]
A major theory as to why trolling happens: when we interact online, we say and do things that we would not do IRL. We self-disclose more, and we act out more. This is known as the online disinhibition effect: we have less inhibition when online. Online disinhibition would imply that we do troll more online than we would offline.
(It would also imply that we write harsher CS 347 commentaries online than we might share in class, or to the author’s face.)
Should we use real names? Pseudonyms? Let people be anonymous? This is a classic, old question in the field.
Anonymous environments create greater disinhibition, which results in more trolling, negative affect, and antisocial behavior [Kiesler et al. 2012]
On the other hand, anonymity can foster stronger communal identity [Ren, Kraut, and Kiesler 2012] and more creativity [Jessup, Connolly, and Galegher 1990]
[Chandrasekharan et al., CSCW 2018]
Question: does banning bad behavior help, or just relocate the behavior?
Dataset: Reddit banned /r/CoonTown and /r/FatPeopleHate as violating its hate speech policy
Result: many accounts left Reddit; those that stayed did not introduce hate speech into the other communities they moved to.
[Seering et al., CSCW 2017]
Moderating content or banning substantially decreases negative behaviors in the short term on Twitch.
Analysis: interrupted time series
What happens to the channel right before vs. right after a moderator’s injunction?
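A minimal sketch of an interrupted time series on simulated data (not the authors’ analysis code; the variable names and the fake Poisson counts are assumptions): fit a segmented regression and read the coefficient on the post-event indicator as the immediate level change.

```python
import numpy as np
import statsmodels.api as sm

# Simulated per-minute counts of norm-violating chat messages,
# centered on a moderation event at t = 0 (fake data for illustration).
t = np.arange(-30, 30)                # minutes relative to the event
after = (t >= 0).astype(float)        # 1 once the moderator acts
y = np.random.poisson(5 - 2 * after)  # violation rate drops from 5 to 3

# Segmented regression: pre-event level and trend, plus a level shift
# and trend change at the moment of moderation.
X = sm.add_constant(np.column_stack([t, after, t * after]))
fit = sm.OLS(y, X).fit()
print(fit.params)  # coefficient on `after` estimates the immediate drop
```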
Result: the behavior of high-status users has ripple effects on others’ behavior.
YOU READ THIS
SquadBox: friends intercept harassing emails before they appear in your inbox.
The Bad Stuff
Misinformation spreads: Reddit’s Boston Bomber rumors were corrected, but the corrections spread too slowly. [Starbird et al. 2014]
Investigation of rumors spread on Twitter over eleven years… [Vosoughi, Roy, and Aral 2018]
The top 1% of false news cascades diffused to between 1,000 and 100,000 people, whereas the truth rarely diffused to more than 1,000.
Falsehoods also diffused faster than the truth.
Bots accelerated true and false news at the same rate, so false news is spreading more virally than truth because humans, not bots, are spreading it.
Anti-White Helmets accounts dominate the Twitter conversation in volume.
But they are not bots and trolls: many are journalists aligned with Syrian and Russian government interests, Syrian and Russian government members, and alternative media.
It looks more like activism than deliberate disinformation.
From Starbird@Stanford 2019
[Starbird, Arif, and Wilson 2019]
The question is often posed: can’t we train classifiers to identify pieces of disinformation and automatically remove them?
The problem: an individual piece of content is hard to disambiguate.
Starbird’s argument: it’s much more effective to study and classify disinformation campaigns, i.e., collections of information actions.
[Reeves and Nass 1996]
People react to computers (and other media) the way they react to other people.
We often do this unconsciously, without realizing it.
Participants worked on a computer to learn facts about pop culture. Afterwards, participants take a test, and the computer tells them that it “did a good job.”
[Reeves and Nass 1996]
Participants were then asked to evaluate the computer’s helpfulness. Half of them evaluated on the same computer; the other half moved to another room to evaluate on a second computer.
[Reeves and Nass 1996]
The evaluations were more positive when evaluating from the same computer than when evaluating from another computer …almost as if people were being nice to the computer’s face and meaner behind its back. When asked about it, participants would swear that they were not being nicer to its face; that it was just a computer.
[Reeves and Nass 1996]
The same principle has been replicated many times…
For example, putting a blue wristband on the user and a blue sticker on the computer, and calling them “the blue team”, resulted in participants viewing the computer as more like them, more cooperative, and friendlier [Nass, Fogg, and Moon 1996]
The authors’ purported method: find experiments about how people react to people, cross out the second “people”, write in “computer” instead, and test it.
The reaction is psychological and built into us: the “social and natural responses come from people, not from media themselves”
[Reeves and Nass 1996]
Algorithms increasingly mediate content in sociotechnical systems, and many users are unaware of these algorithms [Eslami 2015].
People respond to these algorithms by creating folk theories: intuitive, informal theories to explain the system’s behavior [DeVito et al. 2018, French and Hancock 2017].
Facebook’s feed algorithm is seen as a:
Transparent platform (4.4 out of 7 on a Likert scale)
Unwanted observer (4.4 out of 7)
Corporate black box (3.6 out of 7)
Rational assistant (2.9 out of 7)
YOU READ THIS
[Jakesch et al. 2019]
When the environment is all-AI or all-human, people rate the content as trustworthy, or at least calibrate their trust accordingly. However, when the environment is a mix of AI and human actors and you can’t tell which is which, content believed to be from AIs is trusted far less.
Today focused on recent research results in this space
Find today’s discussion room at http://hci.st/room