Lifestream, Pocket, Artificial intelligence is ripe for abuse, tech researcher warns: ‘a fascist’s dream’

Excerpt:

Microsoft’s Kate Crawford tells SXSW that society must prepare for authoritarian movements to test the ‘power without accountability’ of AI.

via Pocket http://ift.tt/2nwHZcF


“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

…With AI this type of discrimination can be masked in a black box of algorithms
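Crawford’s point about bias being built into training data is easy to see in miniature. Below is a hypothetical sketch (mine, not from Crawford or the article): a classifier trained on invented, historically biased hiring decisions, which then reproduces that bias for equally qualified candidates. All data, names and numbers are made up for illustration.

```python
# Hypothetical illustration: a model trained on biased historical decisions
# simply reproduces those decisions. All data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented "historical hiring" data: two groups (0 and 1) with identical
# qualification scores, but qualified group-1 candidates were historically
# hired only 30% of the time.
n = 1000
group = rng.integers(0, 2, size=n)
qualification = rng.normal(0, 1, size=n)
hired = (qualification > 0) & ((group == 0) | (rng.random(n) < 0.3))

X = np.column_stack([group, qualification])
model = LogisticRegression().fit(X, hired)

# Two equally qualified candidates, one from each group:
print(model.predict_proba([[0, 1.0], [1, 1.0]])[:, 1])
# Group 1 gets a lower predicted probability of being hired, despite identical
# qualifications: the bias in the training data is now "in" the model.
```

The ‘black box’ here is only a logistic regression, whose coefficients can at least be inspected; with deeper models the same learned bias becomes far harder to see, which is exactly the masking the article describes.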

Crawford’s comments, and those of the article’s author, Olivia Solon, echo Ben Williamson’s assertion (drawing on Seaver, 2014) that claims of algorithmic objectivity and impartiality ignore the reality that these supposed little black boxes are actually massive networked ones, with hundreds of hands reaching into them, tweaking and tuning.

Slide from Ben Williamson’s lecture, Calculating Academics: theorising the algorithmic organization of the digital university (2014)


Crawford goes further, however, in identifying the potential for algorithms and AI to be used by authoritarian regimes to target specific populations and centralise authority. Her concerns are similar to those of Tim Berners-Lee, which I included in my Lifestream last week. Where Berners-Lee calls for greater (individual, personal) control of our data and more transparency in political advertising online, Crawford calls for greater transparency and accountability within AI systems. However, both are responding to the same underlying point: algorithms and AI are not just social products; they also produce social effects. The same point is taken up by Knox (2015),

“…algorithms produce worlds rather than objectively account for them, and are considered as manifestations of power. Questions around what kind of individuals and societies are advantaged or excluded through algorithms become crucial here (Knox, 2015).”

and Williamson (2014, referring to Kitchin & Dodge, 2011):

Slide from Ben Williamson’s lecture, Calculating Academics: theorising the algorithmic organization of the digital university (2014)

Lifestream, Tweets


Is learning analytics a movement that seeks to rebalance the effects of higher education’s apparent blindness to privilege, its unequal access regimes and persistent retention and attainment gaps through a more skilful and strategic use of student data? Or is it part of a larger project to surveil students and staff in higher education, in pursuit of greater efficiency and control?

I’ve only got round to reading Wintrup’s article today (20/03) – and I think it may be the most important thing I’ve read all week. Wintrup interrogates what we mean by student ‘engagement’, and highlights not just the inadequacy of LA in capturing data that aligns with more traditional, robust research into engagement (Kuh, 2001, for example), but also the risk of defining the quality of learning or engagement by the data we happen to be able to collect, rather than by richer, well-researched ideas – and the subsequent risk of learning itself being shaped by a quantifiable measure of quality that lacks that research base. Ultimately, these concerns reflect the potential for LA to weaken learning, and for institutional and political uses of LA to disrupt the positive uses which, for example, Long and Siemens (2011) espouse.
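To make Wintrup’s worry about measurement concrete, here is a deliberately crude, hypothetical sketch of the kind of proxy metric at issue: an ‘engagement score’ built from whatever the VLE happens to log. The events and weights are invented; no real LA system is being quoted.

```python
# Hypothetical sketch of a naive learning-analytics "engagement" score:
# it measures only what leaves a digital trace, which is exactly the
# inadequacy Wintrup identifies. Events and weights are invented.
from collections import Counter

# Invented clickstream events for one student in a VLE
events = ["login", "page_view", "page_view", "forum_post", "login",
          "video_play", "page_view"]

# Arbitrary weights: the "measure of quality" is defined by what we can count
weights = {"login": 1, "page_view": 1, "forum_post": 3, "video_play": 2}

counts = Counter(events)
engagement_score = sum(weights.get(e, 0) * n for e, n in counts.items())
print(engagement_score)  # 10 – but is this learning, or just clicking?
```

A score like this is cheap to compute, and that is precisely the danger: the countable proxy starts to stand in for the richer, well-researched construct of engagement.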

Wintrup also writes of the ethical dangers of LA. If retention rates were, for instance, to be included in university rankings or quality-of-education scores, universities would have a disincentive to broaden access, and groups and individuals considered more likely to drop out could be disadvantaged. Further, it is questionable whether HE can actually obtain meaningful consent from students to have their digital presence monitored and quantified, because the implications of either opting in or opting out are unknown (p. 97). Finally, Wintrup makes an important point about safeguarding (my emphasis) spaces in which learners can (privately) engage in creative, social, connected and experimental learning practices – spaces which LA has the potential to disrupt through surveillance.

The article really is a call to action: LA does have the potential to give insight into learning, and could be used productively (by students, and by teachers working with students and with their informed consent), but ‘the potential of powerful, competing organisations to control and subvert its use’ (p. 97) is great. We want learning that encourages real student engagement, not just displays or performances of engagement that leave quantifiable traces – and we need to work towards uses of data that serve rather than damage learning.

Lifestream, Pocket, Analytics isn’t a thing

Excerpt:

Software is usually classified based on the problems it solves. Need software to help track customers? CRM. Need software to manage what happens in the classroom? LMS. Need software to handle your core business functions? ERP.

via Pocket http://ift.tt/2mBQtA5

I like the tack taken here:

Don’t say that you’re looking to buy an analytics product. Talk about the problems you want to solve and the goals you want to achieve. Once you zero in on that end goal, then you can talk about how information and access to data will help get you there.

Institutions take note! (yes, mine too 😉)