Readers of /r/worldnews on reddit often report tabloid news to the volunteer moderators, asking them to ban tabloids for their sensationalized articles. Embellished stories catch people’s eyes, attract controversy, and get noticed by reddit’s ranking algorithms, which spread them even further.
via Pocket http://ift.tt/2k0DN3H
In this experiment, tabloid news articles on /r/worldnews were randomly assigned one of three conditions:
- no sticky comment (control)
- a sticky comment encouraging fact-checking
- a sticky comment encouraging fact-checking and downvoting of unreliable articles
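The assignment procedure above can be sketched as a simple three-arm randomization. This is only a minimal illustration under assumed names (the arm labels and post IDs here are invented, not the study's actual code or data):

```python
import random

# Hypothetical labels for the three experimental conditions.
ARMS = ["control", "fact_check_sticky", "fact_check_and_downvote_sticky"]

def assign_arm(rng: random.Random) -> str:
    """Randomly assign a tabloid submission to one of the three conditions."""
    return rng.choice(ARMS)

# Seeded for reproducibility; post IDs are illustrative.
rng = random.Random(42)
assignments = {f"post_{i}": assign_arm(rng) for i in range(6)}
```

Random assignment is what lets the study attribute differences in commenting and scoring to the stickies rather than to the articles themselves.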
Changes in human behaviour
Both sticky comments raised the chance that an individual comment contained a link: comments were 1.28% more likely to include at least one link under the scepticism sticky, and 1.47% more likely under the sticky encouraging scepticism and voting. These figures describe the effect on individual comments; the increase in evidence-bearing comments per post is much larger:
“Within discussions of tabloid submissions on r/worldnews, encouraging skeptical links increases the incidence rate of link-bearing comments by 201% on average, and the sticky encouraging skepticism and discerning downvotes increases the incidence rate by 203% on average.”
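To unpack the quoted "incidence rate": it is the count of link-bearing comments per post, and an increase "by 201%" means the treated rate is roughly three times the control rate. A minimal sketch with invented counts (not the study's data):

```python
def incidence_rate(link_comments: int, posts: int) -> float:
    """Link-bearing comments per post."""
    return link_comments / posts

# Hypothetical counts, for illustration only.
control_rate = incidence_rate(link_comments=40, posts=100)   # 0.4 per post
treated_rate = incidence_rate(link_comments=120, posts=100)  # 1.2 per post

irr = treated_rate / control_rate   # incidence rate ratio: 3.0
pct_increase = (irr - 1) * 100      # a 200% increase over control
```

The distinction matters because a small per-comment effect can still triple the amount of evidence appearing in a typical discussion.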
Changes in algorithmic behaviour
Reddit posts receive an algorithmic ‘score’, which influences whether the post is promoted or not.
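For a concrete sense of how such a score works, here is the "hot" ranking formula from Reddit's historically open-sourced code. Note this is an assumption for illustration: the version Reddit actually ran during the experiment may have differed (indeed, the study reports the algorithm changed mid-experiment):

```python
from datetime import datetime, timezone
from math import log10

# Epoch constant used in reddit's open-sourced 'hot' ranking code.
REDDIT_EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def hot(ups: int, downs: int, posted: datetime) -> float:
    """'Hot' rank as in reddit's historically open-sourced ranking code.

    Net votes contribute logarithmically; posting time contributes
    linearly, so newer posts outrank older ones with similar vote totals.
    """
    s = ups - downs
    order = log10(max(abs(s), 1))
    sign = 1 if s > 0 else (-1 if s < 0 else 0)
    seconds = (posted - REDDIT_EPOCH).total_seconds()
    return round(sign * order + seconds / 45000, 7)
```

Because net votes enter the formula directly, encouraging downvotes can plausibly suppress a tabloid post's promotion even while comment activity rises.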
“On average, sticky comments encouraging fact-checking caused tabloid submissions to receive [scores] 50.9% lower than submissions with no sticky comment, an effect that is statistically-significant. Where sticky comments include an added encouragement to downvote, I did not find a statistically-significant effect.”
Why does this matter? And what does it have to do with learning analytics?
The experiment illustrates a complex entanglement of human and material agency. The study's author had predicted that the sticky encouraging fact-checking would increase the algorithmic score of associated posts: either the Reddit score and HOT algorithm would respond to the changed commenting activity directly, or the change in commenting behaviour would bring with it changes in other behaviours that do influence the score and HOT algorithm. He further predicted that adding an encouragement to downvote would dampen this effect. Mid-experiment, however, Reddit updated its algorithm.
“Before the algorithm change, the effect of our sticky comments was exactly as we initially expected: encouraging fact-checking caused a 1111.6% increase in the score of a tabloid submission compared to no sticky comment. Furthermore, encouraging downvoting did dampen that effect, with the second sticky causing only a 453.26% increase in the score of a comment after 13,000 minutes.”
The observed outcomes show how hard it is to predict both human and algorithmic responses, how dramatically a change to an algorithm can alter outcomes, and why those outcomes need ongoing monitoring to ensure that desired effects are maintained.
“Overall, this finding reminds us that in complex socio-technical systems like platforms, algorithms and behavior can change in ways that completely overturn patterns of behavior that have been established experimentally.”
Connecting this to learning analytics rather than algorithms in general: when we use algorithms to ‘enhance’ education, particularly when deploying ‘nudges’ aimed at improving student success, we need to be cognisant that behaviours don’t always change in the ways expected, and that the outcomes of behavioural changes can be cancelled out by algorithmic design.