Category Archives: Human Post

Final blog summary

I believe my lifestream is a useful representation of much of my EDC experience as it logs a lot of my reading, my participation within the community and many of my thoughts about our studies. These have been expressed in post titles or in brief comments on items I've considered relevant enough to pull in (a choice illuminating in itself), as well as in more considered reflections. The provision of these summaries and comments has been a useful discipline, tracing my preoccupations and thought-trains and enabling meaningful review.

The blog has really worked for me as a central focusing 'place to put everything'. Early on I resisted the impulse to organise with pages so the stream remained a better representation of what I understood it to be – a chronological series of thoughts, ideas and finds mashed up in a variety of modalities to chart my progress through EDC, more or less governed by myself. I decided to rely on tagging and categorising to locate posts or identify emerging themes or events, enjoying the economy of the WordPress tag-cloud which enables a one-click surfacing of themes or collections. This premeditated organisation is both illustrative of our human wish to create order and pin down meaning, and a sense-making imperative in a scrolling blog. My tag-cloud contains nothing surprising, but would be a rich source of information had I chosen other folksonomies, or schemes of emphasis, using it to light up posts I'd considered important or those with unanswered questions.

In keeping with the quasi-confessional nature of a blog, and in common with all reflective diaries, I believe a tone has emerged, and the revisions I've made whilst composing longer posts attest to this performative aspect. The knowledge of its being public on the web has sometimes been inhibiting, but most often it's a thought I have put aside or not had time to entertain.

I have enjoyed writing, particularly when I've been inspired by an idea or a reading, but I consider some of my posts to be too informal, with ideas expressed in inappropriately flowery language. I believe I can write in an academic register, but the lifestream has somehow worked to release my inner sensationalist! It is interesting to consider that the blog form may have encoded within its literacy a human-designed essentialist 'algorithm' prompting me to write in a certain way. More likely it is my particular response to the affordance, and it has certainly been a natural and involuntary one. I'm not sure if that's a good thing for academic study.

In a recent post I likened my lifestream to a river course. This natural-world analogy is distant from the algorithmic operations underpinning much of our real life-course, which subtly dictate our choices and organise our journey. There are hints of algorithmic agency co-constituting my lifestream, such as comments left by automated bots, auto-updating RSS feeds, the sudden appearance of comments I've written elsewhere and the surprise I register on finding posts I'd forgotten I'd invoked via IFTTT.

I believe the technologies used have co-created my lifestream, helping shape both its form and substance, a mix of the human and the human-designed non-human providing an experience I’m glad not to have missed.

How is my driving?

Public Domain image
http://maxpixel.freegreatpicture.com/Satisfaction-Customer-Review-Feedback-Opinion-1977986

Rating and quantifying the consumption of 'experience' is on the increase. Recently, in the course of my daily life, I have undertaken such diverse activities as contacting local government departments, calling in a plumber and doing some real-life chocolate shopping 🙂

Soon after these encounters I have been invited to rate my experience via a phone call or by logging on to a website where I may select a number on a scale to record my level of satisfaction with the service I’ve received. In each case, the government official, the plumber and the shop assistant have all asked or alerted me to this with an unspoken understanding that they stand to gain or lose from the feedback they receive.

This is a demeaning experience both for me as consumer/customer and for them as service providers. The consumer is constructed as a potent arbiter able to award points with no other authority than the money in her pocket. The service provider is fashioned as a worker needing to amass tokens to attest to satisfactory service. Such a contrivance is part of the ontology of the computer harnessed by capitalism: it dehumanises the individual and reduces social contact to a mechanistic exchange conducted after the real one has taken place, thereby calling its authenticity into question. A similar construction of the individual was predicted by Hand (2008) as one of his 'narratives of threat',

The idea of a digitally mediated participatory citizenship disguises the ‘push-button’ nature of digitally mediated political life (Street 1997). That is, the Web is simply another media of simple polling of preferences and opinion. The figure of the consumer-citizen takes centre stage where the processes of political management and engagement are inseparable from mass-mediated and customized forms of consumption. Information, instead of being an empowering force for cultural democratization, operates as a substitute for authentic knowledge, particularly where institutional and organisational uses of information centre upon the construction of preference databases. The individual freedoms associated with digital-empowerment are illusory – these are simply methods of decentralizing and delegating responsibility for citizenship to the individual. Citizens are thus now expected to behave like the dominant images of private consumers in economic theory – autonomous, individualised decision-makers removed from the communitarian fabric.
(Hand, 2008, p.39)

Push-button voting is redolent of Social Media likes and the use of rating and gamification in learning environments. Rehearsed in our consumer experiences, they become more readily acceptable in our educational exchanges. The teacher as facilitator is construed as a service provider in a relationship with the student that can only be verified or valorised by digital computation.

Hand, M. (2008). Hardware to everyware: narratives of promise and threat. In Hand, M., Making Digital Cultures: Access, Interactivity, and Authenticity, pp.15-42. Aldershot: Ashgate.

Proxies, abstractions and dark matter

Public domain image

Jeremy Knox's blog post entitled Abstracting Learning Analytics, on codeactsineducation, argues that what is important about LA is not simply whether it offers a faithful representation of 'learning' or of the propitious conditions for it, but that it provides a view of what matters to the analyst, and of the norms and values of the world she inhabits in creating her depiction. Just as Knox contends that an abstract painting is 'an account of the internal act of producing the painting', so too, he argues, learning analytics is concerned with the 'immanent practices' of producing analytics and all that is encoded in them.

Knox’s abstraction brings to mind situations and representations in which what counts is not always ‘mattered’ in front of our eyes in crisp authoritative images, more or less faithful to what it represents, but what is obscured, neglected or struggles for definition from behind other layers of meaning. Code acts in education often to promote ‘wider political, economic and societal’ influences, leaving its heart to be suggested by its absence like dark matter, the objects of symbolist poetry or the reverso of the negative image. The intangible and difficult-to-locate qualities of education and learning make them hard to reveal and to measure, leaving things open for other forces to substitute their own methods and metrics. Knox’s article calls for these to become less opaque so that we can properly compute their relevance and application for education.

From Jacques Derrida: Deconstruction by Catherine Turner, 27 May 2016, Critical Legal Thinking

However, while the idea of exclusion suggests the absence of any presence of that which is excluded, in fact that which is instituted depends for its existence on what has been excluded. The two exist in a relationship of hierarchy in which one will always be dominant over the other. The dominant concept is the one that manages to legitimate itself as the reflection of the natural order thereby squeezing out competing interpretations that remain trapped as the excluded trace within the dominant meaning.

Tweet archive analysis

The tweetorial

The tweetorial was set as an activity using Twitter, the software itself dictating to a large extent the type of interaction possible, i.e. an asynchronous discussion in 140-character 'bytes'. The tweet archive revealed that some of us had chosen to use Twitter-facilitating software such as TweetDeck or Hootsuite, which would have shaped our experience a little differently, perhaps making it easier to see conversation threads. The use of this mediating communication technology 'performed' the experience of the discussion, together with our knowledge that the tweetorial would be analysed afterwards; the whole entangled with our own affective and material contexts. These sociomaterial elements were not evident in the analytics, unless some detail could be construed from the tweets themselves.

Jeremy asked whether education changes when we use automated (more-than-human) means to understand it and I answered by saying that I thought it did because the way we frame and do things creates the world in which they are done:

I considered that the use of Twitter, the knowledge that the tweetorial would be automatically analysed, as well as our own individual situations enabling us to participate to a greater or lesser extent would all have contributed to the conditions in which it took place.

In the same way that Twitter and the conditions of the activity constituted the experience and the way we engaged with it, so the analytics privileged a particular perception of it.

How has the Twitter archive represented our Tweetorial?

As far as I can tell, the tweet archive has represented our Tweetorial by providing:

  • a list of the tweets in chronological order
  • a word cloud and list of the most frequently used words
  • a list of the top URLs tweeted
  • a breakdown of the source of tweets (platform or software used)
  • the language of the tweets
  • a graph showing the number of tweets over the days
  • user mentions
  • hashtags tweeted
  • tweeted images
  • an 'influencer' ranking

We can see immediately from the archive's panel of information what is deemed important to its architects. The summaries and visualisations are market-orientated, as their nomenclature ('rank', 'top', 'volume', 'influencer') suggests, and are primarily concerned with quantifying. This quantitative data might be useful over time, revealing emerging patterns when compared across regular tweetorials held under similar or slightly varying conditions, but it was less useful for an analysis of a single learning encounter. As Nigel pointed out during this week's hangout, more revealing would have been, for example, information on which tweets provoked the most responses.

It wasn't clear from the archive which of the mscedc students were missing from the conversation, nor did it, of course, list any tweets which lacked the mscedc hashtag but which could have been tweeted by one of us during that time when we forgot to include it. In that sense, an appreciation of who or what might be 'missing' was more nuanced than the archive suggested.

Student absence would not have been very noticeable during a rapid conversation-style learning activity, nor deemed worthy of particular comment; anyone who considered it at all would simply have assumed the absentee was unable to take part. A Learning Analytics dashboard which reported on such absence would, however, subtly alter the norms around such activities, making participation more prominent and therefore presented as important for learning (observable in many MOOCs). Such a dashboard-delivered conclusion would not properly account for lurkers or students able to learn without a strong participatory presence, and any action triggered by non-participation might be injudiciously applied. This sort of problem may be overcome by students explaining their absence, but the necessity of having to do so changes the educational landscape from open and unpoliced to monitored and check-pointed, in which itemised individuals are marshalled for market forces. It also changes the nature of the trust relationship between student and institution.

What do these visualisations, summaries and snapshots say about what happened during our Tweetorial, and do they accurately represent the ways you perceived the Tweetorial to unfold, as well as your own contributions?

Twitter is a very distinctive application, fast-flowing and often confusing, making it difficult to follow threads of conversation unless you are very organised and adept at using facilitating dashboards such as Hootsuite or TweetDeck to handle a chat. It is easy to feel behind the curve of a Twitter conversation, with a sense of needing to catch up. We didn't use any question-numbering scheme to which answers could have been matched, so it was sometimes difficult to know if you 'should' be tweeting a response to an earlier question once the conversation had moved on, and if so, whether to provide context. There is a wish to contribute politely without muscling in on an exchange and to ensure, if you can, that your point hasn't already been made. My own experience was to quickly scroll back to see what I had missed while at the same time trying to pick up on interesting things and remember what I wanted to say – a high cognitive load! I tried to answer some of the main questions when I took part because they were easy to pick out, although I also attempted to join in ongoing conversations.

The visualisations, summaries and snapshots provided by the archive didn't reflect this feeling of sometimes being overwhelmed, which I experienced over the course of the two days as I 'part-time participated' (as others must have done), other than through the reproduction of the tweet stream itself. In an educational context, for those not used to Twitter it might be a difficult experience at first, a situation not reflected in the tweet archive, yet a crucial affective factor influencing learning.

Rather than reporting on the number of tweets, it would have been interesting to know which ones shaped the conversation, changed its course or provided diversions. The Twitter archive was oriented towards consumerism and implied reward for profusion and prolixity rather than for 'quality' of tweet, which is more difficult to analyse by any method, including human judgement. This is why, perhaps, in a market economy such quantitative data is privileged over more expensive-to-acquire (?) information. Qualitative analysis would better match educational need, searching for keywords to determine changes in direction or emphasis, sentiment or tweet type (positive, negative, affirming, conciliatory, questioning, humour, etc.). It is interesting to compare the tweet archive to some of the other tools discovered by students. For example, Nigel found a website called Keyhole (http://keyhole.co/) which reported on 'Sentiment' amongst other things. Without knowing how these results were measured, it is difficult to draw any conclusions (a very telling indictment of such analytics in itself).

(Note that I signed up for a free short-term trial of this tool and used the mscedc hashtag, but the results above may not reflect the actual two days of the tweetorial.)
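To make concrete what the keyword-based qualitative analysis suggested above might look like, here is a minimal Python sketch; the categories, keyword lists and sample tweets are my own invented assumptions for illustration, not the method used by the tweet archive or by Keyhole.

from collections import Counter

# Invented keyword rules for classifying tweet 'type'; both the labels and the
# word lists are assumptions for illustration, not any tool's real method.
TWEET_TYPE_RULES = {
    "questioning":  ["?", "how", "why", "perhaps"],
    "affirming":    ["agree", "yes", "exactly", "good point"],
    "conciliatory": ["fair enough", "on the other hand", "both views"],
    "humour":       ["lol", ":)", "ha"],
}

def classify_tweet(text):
    """Return every type whose keywords appear in the tweet (may be empty)."""
    lowered = text.lower()
    return [label for label, keywords in TWEET_TYPE_RULES.items()
            if any(keyword in lowered for keyword in keywords)]

# Invented examples standing in for the #mscedc archive.
sample_tweets = [
    "Perhaps learning analytics only measures what is easy to count?",
    "Good point - I agree that context gets lost at scale :)",
]
type_totals = Counter(label for tweet in sample_tweets for label in classify_tweet(tweet))
for tweet in sample_tweets:
    print(classify_tweet(tweet), "->", tweet)
print("Totals:", dict(type_totals))

Even this toy version makes plain how much judgement is encoded in the choice of labels and keywords, which is precisely the opacity complained of above.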

The tweet archive did report on the top urls which, as noted in my Thinglink, was useful because I had missed quite a lot of them during the conversation. The aggregation of this information, easily achievable by code, was helpful and underlined for me the necessity of being more purposeful and organised about capturing tweeted links. This sort of metacognitive reflection was something I was concerned would be contracted out to Learning Analytics rather than being developed in the student, but I ended up benefiting from it myself and realise that this could very usefully extend to other learning situations, spaces and platforms.

The archive's report of the top-ranking words was neither surprising nor particularly revealing, although the use of the word 'perhaps' reflected a tentative and questioning position entirely natural in a learning context. Evident in the word list were our attempts to play Twitter's algorithms and have fun. That it was so easy to manipulate Twitter's algorithms was enlightening and reinforced a human wish to resist and subvert constraint. It is important to keep this in mind in a world in which data surveillance is growing so rapidly, with much discursive effort expended to present it as neutral or benevolent, even, or especially, when its polished presentation seems to repel contestation. Keyhole reported a different word cloud, underlining the clichéd but telling view that it is possible to demonstrate multiple viewpoints with statistics.

Jeremy and James ranked highest for user mentions in the tweet archive, which was indicative of their place as experts in our community of practice and of our responses to their direct questions, but mscedc students and those outside the immediate group figured too, reflecting an expanding connectivist learning community.

There were some anomalies in the reported data which provoked thoughts of buggy algorithms working behind the scenes. For example, the Language data reported 'und' as one of the two-letter language codes, and the Influencer list didn't include me although it seemed to rank by number of followers, of which I had enough, I think, to be on there. Evidently, something much more discerning than number of followers was factored into the calculations but remained opaque to the observer! In the main, however, I was surprised to feel that I was 'faithfully represented by the code' as far as it went. This would not, I'm sure, always be the case when we are defined by metrics, a situation leading to problems of misrepresentation, with a faulty interpretation subsequently following us through our digital lives.

What might be the educational value or limitations of these kinds of visualisations and summaries, and how do they relate to the ‘learning’ that might have taken place during the ‘Tweetorial’?

The dashboard figures and visualisations didn't relate anything of the learning that might have taken place, unless unscientific inferences are drawn by correlating the volume of tweets with engagement and amount of learning. However, the sheer number of tweets would suggest lively and compelling exchanges from which, it could be argued, it would be difficult not to absorb some new ideas.

The tweet archive’s word cloud was especially disappointing as it must have simply counted words and reported on the most used. It included such words as I’m and I’ve which other algorithms might have excluded or analysed in conjunction with adjoining words. For this activity, a close analysis of the discourse might better reveal ‘learning’ although it would have been a difficult task even before considering our use of abbreviation to cram our thoughts into 140 characters.
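As an illustration of the difference such a choice makes, here is a small Python sketch comparing a naive word count of the kind the archive's cloud appears to use with one that drops stopwords; the stopword list and sample sentence are assumptions of mine, not the archive's actual code.

from collections import Counter

# A deliberately tiny stopword list; a real one (e.g. NLTK's) is far longer.
STOPWORDS = {"i'm", "i've", "the", "a", "and", "to", "of", "is", "it", "that"}

def word_frequencies(text, drop_stopwords=False):
    """Count lower-cased words, optionally excluding stopwords."""
    words = [w.strip(".,!?\"'()-") for w in text.lower().split()]
    words = [w for w in words if w]
    if drop_stopwords:
        words = [w for w in words if w not in STOPWORDS]
    return Counter(words)

sample = "I'm sure I've learned something - perhaps the algorithm knows?"
print(word_frequencies(sample).most_common(5))                       # naive count, "i'm"/"i've" included
print(word_frequencies(sample, drop_stopwords=True).most_common(5))  # filtered count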

From my own experience of the activity, I am sure ‘learning’ did happen because I mused over questions, offered some answers and modified my own thinking as I gained new perspectives, followed others’ arguments and made connections. A close analysis of the tweets themselves by a human (or AI) would enable them to be classified and sorted, giving a more accurate picture perhaps of what occurred during the tweetorial.

Looking over the archive subsequently, I created my own top ten takeaway tweets, a list which would be different for everyone and which, even for the selector, would fast become out of date. Thinking about this prompts me to consider the value of likes and retweets and how their significance is not only context- but also time-dependent. Setting great analytic store by likes, for example, might produce an accurate situated snapshot but would not constitute enduring fact. An analysis of liked tweets over time might well reveal a development of thought and would be useful for the student in viewing their progression. This would characterise learning as an ongoing process rather than a single temporal synaptic event, emphasising that it is never finished and impossible to depict simplistically.

Algorithms made manifest

Image from https://hellohart.com/2015/05/25/the-mathematics-of-crochet/

by crocheting computer-generated instructions of the Lorenz manifold: all crochet stitches together define the surface of initial conditions that under influence of the vector field generated by the Lorenz equations end up at the origin; all other initial conditions go to the butterfly attractor that has chaotic dynamics. The overall shape of the surface is created by little local changes: adding or removing points at each step

Art or craft can make complex mathematics 'visible' for the layperson, revealing its beauty and intricacies and opening up ways of understanding what composes our black-boxed technologies.
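For anyone curious about the mathematics being made tangible here, a minimal Python sketch of the Lorenz equations mentioned in the quotation, integrated with a simple Euler step and the classic parameter values; it traces a single trajectory towards the 'butterfly attractor' and is only a rough numerical illustration, not part of the crochet method itself.

# The Lorenz system with the classic parameters (sigma=10, rho=28, beta=8/3),
# integrated with a crude Euler step purely for illustration.
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

# One trajectory from an arbitrary starting point; nearby starting points soon
# diverge, which is the chaotic dynamics the quotation refers to.
x, y, z = 1.0, 1.0, 1.0
trajectory = []
for _ in range(5000):
    x, y, z = lorenz_step(x, y, z)
    trajectory.append((x, y, z))
print("final point:", trajectory[-1])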

The mathematics of crochet

Analysing Analytics

During snatched moments this week I have been thinking about algorithms and learning analytics, but in an uninformed and distracted way as it has been busy at work. Yet this time was spent in a world semi-constituted and organised by algorithms without my really taking note, as Nigel’s tweet about the way emails get placed into Clutter folders reminded me,

and as even my own lifestream should have underlined as it filled with tweets and posts left uncommented.

My default position on Learning Analytics I expressed early on, but I recognised the need to fight this instinct or at least to examine it more carefully. Siemens’ suggestion that

For some students, the sharing of personal data with an institution in exchange for better support and personalised learning will be seen as a fair value exchange.
(Siemens, 2013, p.1394)

had compounded my involuntary rejection of LA as it packed so many contentious statements in one short sentence.

I took issue with the bargaining trope of data exchange for assured personal gain. I questioned who decides what ‘better support’ is and whether such a promise would hold out after the relinquishing of data. I remained suspicious of the student and institution arriving at a fair outcome when the power balance of that relationship is characterised by inequality. I was wary of ‘personalised learning’ and wondered what it really means and whether it would divest the learner of any of their own thinking skills.

At the week's end, when I could read more, I discovered Jisc's counter to my worry,

Students maintain appropriate levels of autonomy in decision making relating to their learning, using learning analytics where appropriate to help inform their decisions.

I remained sceptical, however, because for some students reflection and meta-cognition are not easily achieved (nor always introduced and encouraged) and an effort to develop them may more simply be contracted out to graphs and graphics, leading to a misunderstanding of what counts in learning.

After reading Siemens (2013) my head was full of buzzwords such as 'actionable insights'. I consoled myself by deciding 'actionable' is not a word, but when I looked it up I found its definition to be rooted in law and, seemingly, marketing, which was indeed insightful.

I had to keep reminding myself (and being reminded) that politics and power struggles happen with or without algorithms, and not to fall into the trap of 'algorithms bad, no algorithms good'. (What is the opposite of algorithm? Chaos? Proper choice? Manual?) I didn't think their pervasive and deep penetration of our daily lives was a reason not to want to examine them and get a measure of their scope, dangers and failings, in accordance with Beer's stated acknowledgement of

a sense that we need to understand what algorithms are and what they do in order to fully grasp their influence and consequences
(Beer, 2017, p.3)

Kitchin (2017) offers “six methodological approaches” (Abstract) to understanding them such as spending time with coders, conducting ethnographies, reverse engineering and witnessing others doing so.

Sociotechnical

I did, of course, get ensnared in thinking that algorithms are dissociable from the sociotechnical world they co-constitute – especially frustrating as I see exactly how coded IF statements are firmly rooted in context: IF … THEN … ELSE …, where the ellipses here stand in for prescriptive descriptions of the very detail of our lives, and can comprise, too, more nested IF statements or containers into which variables are poured – by us, or by other algorithms – with such complexity, interrelation and recursiveness that these codes seem at once to be "neutral and trustworthy systems working beyond human capacity" (Beer, 2017, pp.9-10) and organic-seeming and mutable, requiring, from time to time, the hand of the putative "viewer of everything from nowhere" (the fictitious person alluded to in Ben Williamson's lecture) to make the fine adjustments known as tweaks. The hand that tweaks is firmly located, but hidden, often in financial, commercial, government or educational institutions, involved in a secret and protected remit to organise and present the knowledge that ensures their continued power.
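To make the point about nested, context-laden conditionals concrete, a deliberately trivial Python sketch; the variables, thresholds and the 'tweak' are all invented for illustration and are not taken from any real analytics system.

# Invented thresholds standing in for the 'prescriptive descriptions of the
# very detail of our lives'; TWEAK is the quiet adjustment made by a hidden
# hand somewhere far from the student.
LOGIN_THRESHOLD = 3   # logins per week deemed 'engaged' (an assumption)
FORUM_THRESHOLD = 2   # forum posts per week (an assumption)
TWEAK = 1             # a later fine adjustment to the login threshold

def flag_student(logins_per_week, forum_posts, assignment_submitted):
    """Nested IF ... THEN ... ELSE decisions of the kind described above."""
    if logins_per_week < LOGIN_THRESHOLD + TWEAK:
        if not assignment_submitted:
            return "at risk: trigger intervention email"
        return "monitor: low activity but submitting work"
    if forum_posts < FORUM_THRESHOLD:
        return "engaged but quiet: no action"
    return "engaged: no action"

print(flag_student(logins_per_week=2, forum_posts=0, assignment_submitted=False))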

As Beer, quoting Foucault, puts it,

… the delicate mechanisms of power cannot function unless knowledge, or rather knowledge apparatuses, are formed, organised and put into circulation.
(Beer, 2017, p.10)

Manovich (1999, p.27) states that the point of the computer game is the gradual revealing of its hidden structure, the exact opposite of the algorithm, which operates under cover to confound our mapping of it. Algorithms all too easily offer themselves as inscrutable and indecipherable, attributes which supply their perfect camouflage of objectivity and neutrality, as mechanisms for avoiding the bias and prejudice of messy human judgement. Commenting on the twofold "translation of a task or problem" into code, Kitchin states,

The processes of translation are often portrayed as technical, benign and commonsensical
(Kitchin 2017, p.17).

Information gathering

It is recognised that Learning Analytics needs to gather information from multiple data points across distributed systems to better map and model the learner in recursive processes. Inherent in this gathering are decisions about what to collect, from where and how, with each of these decisions dependent on the platforms and software that capture the information, which have encoded in them their own particular affordances, constraints and biases. Once the data is aggregated by another encoded fitment, decisions on how to interpret it have to be made, as well as comparisons drawn against typical and historical models, in order to arrive at what might be predicted or trigger action. Siemens (2013) himself outlines problems of data interoperability,

distributed and fragmented data present a significant challenge for analytics researchers
(Siemens, 2013, p.1393)

This complex sociotechnical construction is not in any way an objective, systematised analysis of authentic behaviour, but a range of encoded choices, afforded by particular softwares and programming languages, made by living and breathing individuals acting on a range of motivations to construct a more or less (probably less) reliable image of the student. The construction of LA will favour some but perhaps inhibit, repel, harm or exclude others.
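As a toy illustration of how many small encoded choices pile up in this gathering and aggregation, a Python sketch that merges records from two hypothetical platform exports; the field names, the decision to join on a normalised email address and the definition of 'engagement' are all assumptions of exactly the kind described above.

# Hypothetical exports from two platforms; every field name and key below is
# an assumption, standing in for the real decisions LA systems must encode.
vle_export = [
    {"email": "student@example.ac.uk", "logins": 12, "pages_viewed": 340},
]
forum_export = [
    {"user_email": "STUDENT@example.ac.uk", "posts": 4, "replies_received": 7},
]

def merge_records(vle, forum):
    """Join the two sources on a normalised email address."""
    forum_by_email = {row["user_email"].lower(): row for row in forum}
    merged = []
    for record in vle:
        match = forum_by_email.get(record["email"].lower(), {})
        merged.append({
            "email": record["email"].lower(),
            # 'engagement' is itself an arbitrary, encoded definition
            "engagement": record["logins"] + match.get("posts", 0),
            "pages_viewed": record["pages_viewed"],
            "forum_replies": match.get("replies_received", 0),
        })
    return merged

print(merge_records(vle_export, forum_export))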

In addition, learning analytics posits the educational project as reducible to numbers, as a discernible learning process which may be audited and in which

‘dataveillance’ functions to decrease the influence of ‘human’ experience and judgement, with it no longer seeming to matter what a teacher may personally know about a student in the face of his or her ‘dashboard’ profile and aggregated tally of positive and negative ‘events’
(Selwyn, 2014, p.59)

Patterns

Learning Analytics attempts to seek out patterns, which naturally raises the question: what about the data which falls away from the pattern cutter?

Another danger of pattern searching is voiced by boyd and Crawford,

Big Data enables the practice of apophenia: seeing patterns where none actually exist
(boyd and Crawford, 2012, p.668)

Patterns are concerned with data that recurs and they fail to take account of the myriad minute varied detail in which crucial contextual information may lie,

Data are not generic. There is value to analysing data abstractions, yet retaining context remains critical, particularly for certain lines of inquiry. Context is hard to interpret at scale and even harder to maintain when data are reduced to fit a model.
(boyd and Crawford, 2012, p.671)

Siemens (2013), too, alludes to the difficulty of getting the measure of the individual,

recognizing unique traits, goals, and motivations of individuals remains an important activity in learning analytics
(Siemens, 2013, p.1383)

So much for my own objectivity and neutrality: I seem to have fallen back into that pit whose muddy walls are white and mostly black. Struggling back out, I voiced my concerns in the tweetorial, but attempted to remain open-minded,

If this state of affairs, which is learning analytics today, is surfaced and properly taken into account, the endeavour shouldn't be rejected out of hand but investigated, honed and trialled to see if it can usefully help us understand the conditions for learning as well as support learners. It should be done in full partnership with students, enabling a more equal and transparent participatory experience, as the University of Edinburgh's LARC project demonstrates.

The significant barriers to LA, ethics and privacy, can be foregrounded and regarded as "enablers rather than barriers" (Gašević, Dawson and Jovanović, 2016), as the editors of the Journal of Learning Analytics encourage,

We would [also] like to posit that learning analytics can be only widely used once these critical factors are addressed, and thus, these are indeed enablers rather than barriers for adoption (p.2)

Jisc has drawn up a Code of Practice for learning analytics (2015) which does attempt to address issues of privacy, transparency and consent. For example,

Options for granting consent must be clear and meaningful, and any potential adverse consequences of opting out must be explained. Students should be able easily to amend their decisions subsequently.
(Jisc, 2015, p.2)

Pardo and Siemens (2014) identify a set of principles

to narrow the scope of the discussion and point to pragmatic approaches to help design and research learning experiences where important ethical and privacy issues are considered. (Abstract)

Yet even if the challenges of ethics and privacy are overcome, there remains the danger that learning analytics reveals only a very pixelated image of the student, one which might place her at a judged disadvantage, an indelible skewed blueprint existing in perpetuity and following her to future destinations. That this should be the case is not surprising if we consider that a sociomaterial account of learning analytics foregrounds its complex mix of the human, the technical and the material, performing both an analysis and an analysand through a partial apparatus of incomplete measurement. The encoded institution's audit, combined with the absence of student context or nuance, means that LA will struggle to give anything other than general actionable insights.

http://fiona-boyce.deviantart.com/art/Pixelated-ID-192825081

References

Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), pp.1-13.

boyd, d. and Crawford, K. (2012). Critical questions for Big Data. Information, Communication & Society, 15(5), pp.662-679.

Gašević, D., Dawson, S., Jovanović, J. (2016). Ethics and privacy as enablers of Learning Analytics. Journal of Learning Analytics, 3(1), pp.1-4.

Jisc, (2015). Code of practice for learning analytics. Available at: https://www.jisc.ac.uk/guides/code-of-practice-for-learning-analytics

Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), pp.14-29.

Manovich, L. (1999). Database as a symbolic form. Millennium Film Journal (Archive), 34, Screen Studies Collection, pp.24-43.

Pardo, A., Siemens, G. (2014). Ethical and privacy principles for learning analytics. British Journal of Educational Technology, 45(3), pp.438-450.

Selwyn, N. (2014). Distrusting Educational Technology. Routledge, New York.

Siemens, G. (2013). Learning Analytics: the emergence of a discipline. American Behavioral Scientist, 57(10), pp.1380-1400.

Williamson, B. (2017). Computing brains: learning algorithms and neurocomputation in the smart city. Information, Communication & Society, 20(1), pp.81-99.

Week 8 Weekly thoughts

SQL Syntax from https://www.w3schools.com/

qryShow_Paid_Posts

-- Returns each paid, validated viewing (ViewerValue above 600) of the selected lifestream posts
SELECT
    Lifestream_CH_Posts.PostTitle + ', ' + Lifestream_CH_Posts.PostSubject
        + ', Week ' + CAST(Lifestream_CH_Posts.intWeek AS nvarchar(10)) AS [Listing],
    Lifestream_CH_Posts.PostTitle AS [Post Title],
    Lifestream_CH_Posts.Postbody AS [Post],
    COALESCE(CONVERT(nvarchar(12), Lifestream_CH_Posts.PostDate, 113), N'') AS [Date of Post],
    Lifestream_CH_Viewers.ViewerType AS [Viewer Type],
    Lifestream_CH_Payments.PaymentRcvd AS [Payment Received],
    Lifestream_CH_Validations.ViewerValue AS [Viewer Validation]
FROM Lifestream_CH_Posts
    INNER JOIN Lifestream_CH_Payments
        ON Lifestream_CH_Posts.PostID = Lifestream_CH_Payments.PostID
    INNER JOIN Lifestream_CH_Viewers
        ON Lifestream_CH_Viewers.ViewerID = Lifestream_CH_Payments.ViewerID
    INNER JOIN Lifestream_CH_Validations
        ON Lifestream_CH_Validations.ViewerCode = Lifestream_CH_Viewers.ViewerID
WHERE Lifestream_CH_Viewers.ViewerType IN ('A2', 'A3', 'B3', 'B7', 'D17', 'L23', 'L30', 'M25', 'S7', 'S14', 'T9')
    AND Lifestream_CH_Posts.PostTitle IN ('On and Off', 'Privacy Paradox', 'Algo Chat', 'Acceptance Creep', 'Not so favourite', 'Start the week')
    AND Lifestream_CH_Payments.PaymentRcvd = 1
    AND Lifestream_CH_Validations.ViewerValue > 600

Acceptance Creep

Algorithms constitute much of our online reality by their gathering and interpretation of our data and their presentation back to us of what they deem relevant, newsworthy or trending.

Algorithms produce worlds rather than objectively account for them
(Knox, 2015).

We often don’t know the “warm human and institutional choices that lie behind these cold mechanisms” (Gillespie, 2012) and our efforts to do so are frustrated by information providers’ frequent tweaking and the algorithms’ own shifting nature as they are fed by our interaction with them.
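To picture that 'shifting nature' in the simplest possible terms, here is a small Python sketch of a feedback loop in which each click nudges what gets shown next; the scoring scheme is entirely invented, far simpler than any real platform's, and is offered only as an illustration.

# A toy relevance loop: items start with equal scores, and every click
# feeds back into what is ranked as 'relevant' next time. All numbers invented.
scores = {"news": 1.0, "cats": 1.0, "edc reading": 1.0}

def show_feed(scores, top_n=2):
    """Present the currently highest-scoring items."""
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

def register_click(scores, item, boost=0.5):
    """Feed the user's interaction back into the ranking."""
    scores[item] = scores.get(item, 1.0) + boost

for _ in range(3):
    print("shown:", show_feed(scores))
    register_click(scores, "cats")   # the user keeps clicking the same thing

print("final scores:", scores)       # 'cats' now dominates what is deemed relevant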

Has the algorithm been conscripted for daemonic hegemonic practice or more innocently put to work for market forces? Is Google, as a major information provider, attempting to take over the world or (merely) seduced by its self-imposed heady mission to catalogue and present the world’s information (a misguided vocation, like Edward Casaubon’s in Middlemarch?), refusing to shoulder responsibility for the political and social consequences of doing so?

Mager (2014) comments,

… the capitalist ideology is inscribed in code and manifests in computational logics

Why do we comply?

An answer might be what I term acceptance creep. A commentator in the Privacy Paradox podcast warns,

In our shopping behaviour we are rehearsing the idea that it is ok to give up our data

We want to do the searching, the shopping, the socialising and the sharing without continually thinking of world issues. We want certainty and trust where there is none, so we accede to the demands of global capitalism, which has come to fill the post-human vacuum, because it suits us, too.

Mager (2014) describes a symbiotic relationship between Google (and other global IT corporations), content providers (website creators) and users:

This dynamic perfectly exemplifies Gramsci’s central moment in winning hegemony over hegemonized groups, the moment “in which one becomes aware that one’s own corporate interests […] become the interests of other subordinate groups” (Gramsci 2012, 181). It is the moment where “prosumers” start playing by the rules of transnational informational capitalism because Google (and other IT companies) serve their own purposes; a supposedly win-win situation is established. Prosumers are “steeped into” the ruling ideology to speak with Althusser: “All the agents of production, exploitation and repression, not to speak of the ‘professionals of ideology’ (Marx), must in one way or another be ‘steeped’ in this ideology in order to perform their tasks ‘conscientiously’ – the tasks of the exploited (the proletarians), of the exploiters (the capitalists), of the exploiters’ auxiliaries (the managers), or of the high priests of the ruling ideology (its ‘functionaries’), etc” (Althusser 1971).

If we are rehearsed (performing conscientiously) in our leisure and social lives, we will accept it, too, in our educational lives.

 

Knox, J. (2015). Algorithmic Cultures. Excerpt from Critical Education and Digital Cultures. In Encyclopedia of Educational Philosophy and Theory. M. A. Peters (ed.). DOI 10.1007/978-981-287-532-7_124-1

Mager, A. (2014) Defining Algorithmic Ideology: Using Ideology Critique to Scrutinize Corporate Search Engines. Triple C Journal for a Global Sustainable Information Society, 12(1).
http://www.triple-c.at/index.php/tripleC/article/view/439/641

On and Off

What a week, I couldn’t seem to fire a single algorithm. Early on I enthusiastically toggled Show me the best tweets first on different devices on my Twitter account but with no real discernible difference [1] [2] [3] [4].  I hardly ever use my sparse and locked down Facebook account, but I wandered around in the Settings basement and hauled some levers to ON. Still nothing personalised except a lonely effort by Alison Courses to get me to learn something. I could endorse it, inflicting it on my friends and spawning a million more of the same and similar for me.

It seemed that not only had I somehow gained the right to be forgotten, I had been. What was going on? Normally I only have to think the word hotel for my IP address to be swiped and the price hiked.

Clearly, algorithmically speaking, I should get out more. I started frantically browsing holiday cottages and choosing stuff in online swim shops to provoke a stream of targeted ads. Nothing. How long should it take? Where were the mono fin recommendations? These are algorithms, they shouldn’t show signs of pique. I considered asking a friend to experiment with his Fb timeline settings, but using another person’s data for my own gain seemed, well, dirty. I distracted myself by typing rude words into Google and was blanked, instantly. Naughty me. I did discover that many of us must be contemplating marrying our cousin (is it legal to …).

I headed to YouTube and logged in and out of my Google account like a mad thing, turning the pop up Allow Notifications to Block reflexively. I was impressed by the extent I could analyse my videos – I could get watch time reports, audience retention, playback locations, devices, comments (none) … the list went on. Nothing for Demographics, but the heading was there.

I had wanted to demonstrate how the

arrangements of comments, and thus the spatial qualities of the YouTube page … come together through multiple and contingent relations between the human users … as well as the non-human algorithms which operate beneath the surface
(Knox 2014, p.49)

I wanted to investigate how

the spaces utilised for educational activity cannot be entirely controlled by teachers, students, or the authors of the software
(Knox, 2014, p.50)

but it seemed unlikely now.

So back in subterranean boiler rooms I wrestled rusted faucets to OPEN and tapped the barometers to DELUGE. Sprinting back upstairs, I Googled myself to check I was still alive. Phew, a few of my selves had faint pulses. From a Kafkaesque corridor I dragged down my Google archive to the desktop but found only slim pickings. Seemingly I hadn't been anywhere on the map for years. I travelled as far as Amazon where, at last, I was greeted with a jaunty Hello C, and I burst into tears of relief at their intimate knowledge of my hoover bag preferences and proffered book recommendations. They were accurate, useful and interesting except for the History book suggestions which must, I dimly remember, be a result of ordering revision guides for my children some hundred years ago.

I never thought I would be so glad to chum up with people who bought this and also bought that. I was back in the human race.

What had I been missing? What friendly, self-affirming world had I separated myself from by turning off tracking and not using Facebook? I’d denied myself even the decision to let Fb decide what I see. Am I doomed to be alone and un-liked with my own dull agency, forced to wander about to achieve serendipity myself instead of having it tastefully sprinkled on top of my carefully-aimed long tail niche cappuccino of recommendations?

“Recommendation algorithms map our preferences against others, suggesting new or forgotten bits of culture for us to encounter” (Gillespie, 2012).
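A minimal Python sketch of the 'people who bought this also bought that' style of mapping Gillespie describes, using invented co-purchase data; real recommenders are vastly more elaborate, so this is only a toy.

from collections import Counter

# Invented purchase histories; each set is one customer's basket.
baskets = [
    {"hoover bags", "history revision guide"},
    {"hoover bags", "mono fin"},
    {"mono fin", "swimming costume"},
    {"hoover bags", "swimming costume"},
]

def also_bought(item, baskets):
    """Count what else appears in baskets containing the given item."""
    co_purchases = Counter()
    for basket in baskets:
        if item in basket:
            co_purchases.update(basket - {item})
    return co_purchases.most_common()

print(also_bought("hoover bags", baskets))
# [('history revision guide', 1), ('mono fin', 1), ('swimming costume', 1)]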

Author of my own destiny? Perhaps not, thanks. I wouldn’t know which of the 52,000 Facebook categories were mine (Beyond Boring? Underactive Thyroid? Paranoid Meanie?). But then I wouldn’t know that anyway,

Categorization is a powerful semantic and political intervention
(Gillespie, 2012).

Best kept hidden.

Is it really consume like crazy, like and retweet in overdrive, complete complicated cameos, share lolcats and link this to that – or –  walk the wilderness? I suspect it’s a bit more nuanced.

I created and later updated a Storify to make sense of other people's experiences. Perhaps I should keep my settings turned on and just frustrate the algos. I could have fun. I should have believed the boiler room posters (proclamations of "the legitimacy of these functioning mechanisms" (Gillespie, 2012), part of the "providers' careful discursive efforts" (p.16)) which assured me that my experience would be improved.

This articulation of the algorithm is just as crucial to its social life as its material design and its economic obligations
(Gillespie, 2012)

I should have heeded the signs in the (lack of) Control Room which shouted Cookies are Vital and threatened politely to forget which pages I like in Cyrillic. Manovich states that computer games are the “projection of the ontology of the computer onto culture itself” (Manovich, 1999, p.28); shouldn’t I just start to play?

But what was I doing with Storify? Temporarily fixing a contingent assemblage of student and teacher tweets sourced from filtered searches within the affordances of a particular technology? Was this,

the pedagogy of networked learning in which knowledge construction is suggested to be 'located in the connections and interactions between learners, teachers and resources, and seen as emerging from critical dialogues and enquiries'
(Knox, 2014, p.51, quoting Ryberg et al, 2012) ?

Was it like EDMOOC News in which

a set of dependencies and relations that entwine participants and algorithms in the production of educational space
(Knox, 2014, p.51) ?

Not really, but getting closer.

As someone who regularly gets lost rather than turn on their GPS, changing my preferences isn't going to be easy. Yet if I really want to map how "Complex algorithms and codes of the web shape and influence educational space" (Knox, 2014, p.52), untangle, as far as I can, the sociomaterial "procedures irreducible to human intention or agency" (p.53) and discern the power structures encoded in the code, I might have to take the plunge. Lucky I've got ten new costumes.

I should augment the number of actors in the “recursive loop between the calculations of the algorithm and the “calculations” of people” (Gillespie, 2012), lifesaving idealistic hopes and avoiding my cousins.

 

Recommended for me

 

Gillespie, T. (2012). The Relevance of Algorithms. In Media Technologies, ed. Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot. Cambridge, MA: MIT Press.

Knox, J. K. (2014). Active algorithms: sociomaterial spaces in the E-learning and Digital Cultures MOOC. Campus Virtuales, 3(1): 42-55.

Manovich, L. (1999). Database as a Symbolic Form. Millennium Film Journal, 34, pp.24-43.