Category Archives: Block 3 – Algorithm culture

Week 10 Lifestream summary

A work project moving from the planning phase into full-on delivery, together with commitments over the weekend, left precious little time for blogging last week.  However, I did manage to find several stretches of time during the week to write the required analysis of the Tweetorial.  I’ve since had a few more thoughts on the use of Twitter for education and will either add these to the analysis or create a separate blog post.

The fact that I have what amounts to some self-imposed analytics on my Lifestream, in the form of the calendar of blog posts, hasn’t escaped me.

Calendar of blog posts

I included the calendar for two reasons: firstly, because I thought it might be helpful to future students of this course who visit my blog, and secondly, because it’s a reminder to me of the course requirement to ‘add to the Lifestream almost every day’.  The irony is that the Tweetorial analysis I worked on over several days only shows as a single post – another example of analytics not necessarily ‘making visible the invisible’.

As part of my current work project I’m using Articulate Storyline to create a tool that will enable our practice managers to review their current knowledge, and will use their input to point them to resources that will help them.  This has involved creating a means of filtering their input, which has required a multi-stage approach and several hundred conditional triggers.  In effect I’m writing my own algorithm, and it will be interesting to apply some of the thinking I’ve done around algorithmic cultures to how the tool might be viewed by those it’s intended for, and by others in the business.
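Storyline expresses this logic as triggers rather than code, but the underlying idea can be sketched.  Below is a minimal Python illustration of the kind of multi-stage, rule-based filtering involved; the topics, ratings and thresholds are hypothetical stand-ins, not the actual tool:

```python
# A minimal sketch (Python rather than Storyline) of multi-stage, rule-based
# filtering of self-assessment input. All names and thresholds here are
# hypothetical illustrations, not the real tool.

def recommend_resources(responses):
    """Map self-assessment responses (topic -> list of 1-5 ratings) to resources."""
    recommendations = []

    # Stage 1: reduce each topic's answers to a single score.
    scores = {topic: sum(ratings) / len(ratings)
              for topic, ratings in responses.items()}

    # Stage 2: apply conditional rules, one per 'trigger'.
    for topic, score in scores.items():
        if score < 2:
            recommendations.append((topic, "foundation course"))
        elif score < 4:
            recommendations.append((topic, "refresher module"))
        # A high score fires no trigger for that topic.

    return recommendations

print(recommend_resources({"compliance": [2, 1, 3], "rostering": [5, 4, 5]}))
```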

The tutorial on Friday was lively and useful.  It was interesting to hear everyone’s views on the Tweetorial and the Algorithmic Cultures block.  In common with my fellow students, my thoughts are now turning to tidying up my blog, continuing to add metadata and starting preparation for the multi-modal essay.

The Tweetorial

How has the Twitter archive represented our Tweetorial?

1. Ranked league tables

Winners podium
from http://www.eventprophire.com

The first thing that struck me about the analytics is that many of them are ranked, using words like ‘top’, or are heat-map-style representations with the most used words or most frequently mentioned contributors shown in decreasing order of size.
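To make the mechanics concrete, here is a minimal Python sketch of how such a ranked table might be produced; the tweets are invented examples rather than the real archive:

```python
from collections import Counter

# A minimal sketch of how a ranked 'top contributors by mentions' table is
# typically produced: count occurrences, then sort in decreasing order.
# The tweets below are invented examples, not real Tweetorial data.
tweets = [
    "@james I think algorithms are never neutral",
    "@jeremy @james agreed, and the data selection matters too",
    "@helen cheese is clearly the answer",
]

mentions = Counter(word.strip(".,!?") for tweet in tweets
                   for word in tweet.split() if word.startswith("@"))

for rank, (user, count) in enumerate(mentions.most_common(10), start=1):
    print(f"{rank}. {user}: {count} mention(s)")
```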

Studies such as Cherry and Ellis (2005) indicate that rank-order grading drives competition.  Whilst this is not the intention of the analytics, I did find myself taking an interest in where I ‘ranked’ in the various tables and charts, and had some sense of achievement in appearing in the ‘top’ half or higher in most of them.

This sense of achievement is, of course, entirely spurious.  Most of the results indicate quantity rather than quality.  Siemens (2013) raises this as an issue: “Concerns about data quality, sufficient scope of the data captured to reflect accurately the learning experience, privacy, and ethics of analytics are among the most significant concerns”.  Dirk’s Tweetorial analysis highlights this well, and he asks a similar question: “Is any of the presented data and data analysis relevant at all? Does it say anything about quality?” While it doesn’t for the use we are making of it, for a marketer, knowing who the key influencers are would be very useful.

2. Participation

I was working from home on the day of the Tweetorial so was able to join in for several hours.  Borrowing from Kozinets’ (2010) classifications of community participation, my own contribution to the event felt like a reasonable balance between ‘mingler’, ‘lurker’ and ‘insider’.  The quantitative nature of the analytics does not enable any distinction between these types of participation.

The high number of mentions I had was largely due to my experiment early in the day, attempting to ‘attract’ algorithm-driven followers through keywords.  I had noticed that I was gaining followers from the data analytics and artificial intelligence fields, presumably based on the content of my tweets, so I decided to try tweeting the names of cheeses to find out if this yielded similar results.  Helen took up the theme and ran with it, and this became the source of some playful behaviours and social cohesion over the course of the day.  A good example, perhaps, of the ‘first follower’ principle from Derek Sivers’ (much debated) ‘Leadership lessons from dancing guy’ video:

3. Most used words

Interestingly, the word ‘cheese’ and other words such as ‘need’ that appear quite prominently on the heatmap below, shared by Anne, do not appear on the analytics linked from the course site.  This is likely to be due to the capture period selected and, if so, it illustrates how statistics can be manipulated, both intentionally and unintentionally, to convey a particular narrative.

The most used words seem to be fairly constrained: words I’d expect to see given the nature of the questions are there, but having taken part in the Twitter ‘conversation’ I can see that they do not capture the diversity of the topics discussed.  Some of the more diverse words do show up in the hashtag heat map.
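A small sketch, with invented dates and tweets, of how the chosen capture window drives what counts as a ‘most used’ word:

```python
from collections import Counter
from datetime import datetime

# A sketch of how the capture period changes the 'most used words': the same
# archive yields different rankings under different windows. The dates and
# tweet texts here are invented for illustration.
archive = [
    (datetime(2017, 3, 16, 10, 5), "perhaps learning analytics need context"),
    (datetime(2017, 3, 16, 14, 30), "cheese cheddar stilton brie"),
    (datetime(2017, 3, 17, 9, 0), "algorithms need human interpretation"),
]

def top_words(start, end, n=5):
    """Count words only in tweets that fall inside the capture window."""
    counts = Counter(word for ts, text in archive
                     if start <= ts < end
                     for word in text.split())
    return counts.most_common(n)

# 'cheese' features in a full capture but vanishes from a day-two-only one.
print(top_words(datetime(2017, 3, 16), datetime(2017, 3, 18)))
print(top_words(datetime(2017, 3, 17), datetime(2017, 3, 18)))
```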

Cathy’s Thinglink summary of the Tweetorial points out the frequent use of the word ‘perhaps’ and she offers a possible explanation “It may reflect a tentative and questioning aspect of some tweets”.  I know I tend to use the word when I have not qualified a statement with a source, or I feel I’m interpreting a source differently to the way the author intended, so this might be another explanation…perhaps.

Overall, while the analytics impose some order on the presentation of the data, human interpretation by someone who was present during the event (shades of the ethnography exercise here) is necessary to make sense of them.  As Siemens (2013) points out, “The learning process is essentially social and cannot be completely reduced to algorithms.”

What do these visualisations, summaries and snapshots say about what happened during our Tweetorial, and do they accurately represent the ways you perceived the Tweetorial to unfold, as well as your own contributions?

1. Volume over time

This is another example of the time frame used providing only a limited insight.  In this case, the fact that the number of tweets increased markedly on the days of the Tweetorial is hardly an insight at all.  I’ll refrain from using a popular British idiom involving a fictional detective here, but this would only have been an insight had the reverse been true, or had there been no increase.  Had they happened, both of these alternative scenarios would also have required human interpretation to make any sense of them.

A more useful time frame might have been 15-minute slots over the course of the two days (or even each minute), as the data could then have been aligned to when new questions were asked by Jeremy or James.  It would then have been possible to see the different levels of activity following each question and pass judgement on which were the most effective at generating debate.  However, even with a greater degree of granularity it still wouldn’t have been possible to attribute an increase in activity to a tutor question, as it could also have been due to a supplementary question being asked by one of the students.
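As an illustration of what that finer granularity might look like, here is a minimal Python sketch that buckets invented timestamps into 15-minute slots:

```python
from collections import Counter
from datetime import datetime, timedelta

# A sketch of the finer-grained view suggested above: bucket tweet timestamps
# into 15-minute slots so spikes can be lined up against the times tutor
# questions were posted. The timestamps are invented examples.
tweet_times = [
    datetime(2017, 3, 16, 10, 2), datetime(2017, 3, 16, 10, 11),
    datetime(2017, 3, 16, 10, 40), datetime(2017, 3, 16, 11, 5),
]

def slot(ts, minutes=15):
    """Round a timestamp down to the start of its 15-minute slot."""
    return ts - timedelta(minutes=ts.minute % minutes,
                          seconds=ts.second, microseconds=ts.microsecond)

volume = Counter(slot(ts) for ts in tweet_times)
for start in sorted(volume):
    print(start.strftime("%d %b %H:%M"), volume[start])
```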

2. The contribution of others

The user mentions heat map has Jeremy and James as central to the discussions, presumably because a lot of the tweets were posted as replies to their questions.  While they were active contributors, I don’t think they were as central to the discussions as the heat map would suggest; indeed, the focus moved around between contributors as the discussions progressed.

3. My own contributions

I’ve already made some observations about quantity versus quality, and the top tweeter, Philip, has rather humbly (unnecessarily so) made similar self-deprecating comments about his own contributions.

Being purely quantitative, the analytics would provide no useful data if students’ contributions were being assessed and graded for educational purposes.  I made a similar point during the Tweetorial – simply counting the number of tweets is similar to the way some learning management systems count learning as ‘completed’ if a learner opens a PDF or other document.
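The logic being criticised here is trivially simple, which is rather the point.  A toy sketch, with hypothetical names:

```python
# A toy illustration of the completion logic criticised above: many learning
# management systems mark a resource 'completed' the moment it is opened,
# with no measure of time spent or understanding. Names are hypothetical.
def mark_progress(record, resource, event):
    if event == "opened":  # opening alone counts as completion
        record[resource] = "completed"
    return record

print(mark_progress({}, "module_1.pdf", "opened"))
# -> {'module_1.pdf': 'completed'}, however briefly the document was viewed
```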

As well as academic discourse, I believe some social interaction and risk-taking by participants is good for healthy debate, but again the limited analytics we have available do not provide any insights into this type of community participation.

4. Images 

I’m not sure if it’s because I’m a particularly ‘visual’ person, but I found that the images give by far the most accurate representation of what the Tweetorial felt like to take part in.  They capture both the academic and social aspects of the conversations, and they provide a useful ongoing resource.

What might be the educational value or limitations of these kinds of visualisations and summaries, and how do they relate to the ‘learning’ that might have taken place during the ‘Tweetorial’?

1. The format

As a medium for education, the format would take some getting used to.  The multiple streams of discourse can be difficult to follow, and I often felt the conversation had moved on by the time I had reflected on a particular point and formulated my answer.  I experienced a very similar situation during a previous course, when I took part in a synchronous reading of one of the set papers and an accompanying Twitter chat.  It was soon clear that everyone read at a different pace, and before long the whole thing was out of sync and one paragraph was being confused with another.  Tools such as Tweetdeck and Hootsuite do help visualise the conversation by allowing the user to split a continuous stream into multiple columns, for example one column for a specific person, another for key word(s), and so on.
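The ‘columns’ idea amounts to running one stream through several filters.  A minimal sketch, with invented filters and tweets, of what such tools are doing conceptually (not either tool’s actual API):

```python
# A sketch of the column idea: route a single stream of tweets into several
# filtered views, as Tweetdeck and Hootsuite do. The filters and tweets are
# invented examples.
stream = [
    {"user": "jeremy", "text": "New question coming up shortly"},
    {"user": "student_a", "text": "More cheese recommendations please"},
    {"user": "student_b", "text": "Are analytics ever neutral?"},
]

columns = {
    "from jeremy": lambda t: t["user"] == "jeremy",
    "keyword: cheese": lambda t: "cheese" in t["text"].lower(),
}

for name, keep in columns.items():
    print(name, "->", [t["text"] for t in stream if keep(t)])
```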

I see some potential as a means of kick-starting a discussion: the pace and multi-modality can generate a lot of ideas and links to resources very quickly.  Follow-up activities could then explore the various threads in more detail, with further Tweetorial(s) to reinvigorate any topics that slow down or stall.

In this experiment there was some value in not knowing exactly what analytics were going to be recorded, as this made it less likely that our behaviours would be influenced.  Personally, I had forgotten there would be any analysis by the time the second question was asked.  If I was going to use this format with my learners and analytics were going to be used, I think I would adopt an open model and be clear up front about the limited nature of what was going to be recorded and how it would be used.

2. The analytics

In his blog post ‘Abstracting Learning Analytics’, Jeremy Knox writes: “… my argument is that if we focus exclusively on whether the educational reality depicted by analysis is truthful or not, we seem to remain locked-in to the idea that a ‘good’ Learning Analytics is a transparent one.”

In this blog post Knox refers to a painting of Stalin lifting a child and points out that there might be more to be understood from abstracting this depiction than might be gained from “attempting to come up with a new, more accurate painting that shows us what Stalin was really doing at the time.”

So, what if we take a more abstract view of the depictions of the Tweetorial presented by the Tweet Archivist analytics?  Following Knox’s lead, perhaps the real questions we should be asking include:

  • Why have these particular data been selected as important?
  • Why is the number of mentions an individual receives considered more important than, for example, the number of links to external resources they provide?
  • Why is a ranked or heat map view used rather than a spider graph or other mechanism that might better demonstrate connections? (A connections-centred alternative is sketched below.)
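To illustrate that last question, here is a minimal sketch using the networkx library (one possible tool) of a connections-centred alternative: model mentions as a graph and look at who connects to whom, rather than at a ranked list.  The participants and mention edges are invented:

```python
import networkx as nx  # one possible tool; the edges below are invented examples

# Model mentions as a graph and examine connections rather than rankings.
G = nx.Graph()
G.add_edges_from([
    ("james", "student_a"), ("james", "student_b"),
    ("student_a", "student_b"), ("student_b", "student_c"),
])

# Degree centrality: the share of other participants each person connects to.
for user, centrality in sorted(nx.degree_centrality(G).items(),
                               key=lambda item: -item[1]):
    print(f"{user}: {centrality:.2f}")
```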

Knox brings this idea of taking a more abstract view of analytics back to education: “What may be a far more significant analysis of education in our times is not whether our measurements are accurate, but why we are fixated on the kinds of measurements we are making, and how this computational thinking is being shaped by the operations of the code that make Learning Analytics possible.”

In the case of the Tweetorial, analytics were provided to us, possibly in the knowledge that they would raise precisely the sort of ‘lack of transparency’ questions I have discussed above.  In reality I could take Dirk’s example a step further and carry out my own data collection and analysis, or use a different tool such as ‘Keyhole’, shown below, which provides additional views such as ‘sentiment’ scores (the percentage of tweets that are positive, negative or neutral) and any gender bias.

Analytics from keyhole.co. Click image to open in higher resolution.
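Keyhole doesn’t publish its method, so the following is only a minimal, lexicon-based sketch of how sentiment percentages could be derived; the word lists and tweets are invented:

```python
# Keyhole's actual method isn't published; this is only a minimal,
# lexicon-based sketch of how 'sentiment' percentages could be derived.
# The word lists and tweets below are invented illustrations.
POSITIVE = {"good", "great", "useful", "lively", "fun"}
NEGATIVE = {"bad", "poor", "confusing", "difficult", "spurious"}

def sentiment(text):
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tweets = ["The Tweetorial was great fun", "These analytics are confusing",
          "Cheddar is a cheese"]
counts = {"positive": 0, "negative": 0, "neutral": 0}
for tweet in tweets:
    counts[sentiment(tweet)] += 1

for label, count in counts.items():
    print(f"{label}: {100 * count / len(tweets):.0f}%")
```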

Similarly, in my own professional practice I could take a critical look at the data we’re collecting and ask some fundamental questions about what it tells us about our organisation and what we value in our learners.

References:

Cherry, T. and Ellis, L.V. (2005) Does Rank-Order Grading Improve Student Performance? Evidence from a Classroom Experiment. International Review of Economics Education, 4(1): 9-19

Siemens, G. (2013) Learning Analytics: the emergence of a discipline. American Behavioral Scientist, 57(10): 1380-1400

Kozinets, R. V. (2010) Chapter 2 ‘Understanding Culture Online’, Netnography: doing ethnographic research online. London: Sage. pp. 21-40.

Sivers, D. (2010) ‘How to start a movement’, TED talk. https://www.ted.com/talks/derek_sivers_how_to_start_a_movement

Tweetorial tweets

This page brings together all my tweets and replies, and the new followers that resulted from the tweet storm (which I believe are relevant in this context).

Ben Williamson is now following me on Twitter! Bio: Digital data, ‘smart’ technology & education policy. Lecturer @StirUni (1621 followers) http://twitter.com/BenPatrickWill

Dr. GP Pulipaka is now following me on Twitter! Bio: Ganapathi Pulipaka | Founder and CEO @deepsingularity | Bestselling Author | #Bigdata | #IoT | #Startups | #SAP #MachineLearning #DeepLearning #DataScience. (19910 followers) http://twitter.com/gp_pulipaka


Michael J.D. Warner is now following me on Twitter! Bio: CEO @ThunderReach ⚡️ #socialmedia #marketing + VIP digital services ➡️ https://t.co/Rf6jA4EIEo • ig @mjdwarner • ✉️ceo@thunderreach.com ⚣ #gay 📍toronto • nyc (98298 followers) http://twitter.com/mjdwarner


Featured Heights is now following me on Twitter! Bio: Elevating your #brand with creative websites & engaging marketing. Sharing #marketing, #webDev, #design, #ux & #socialmedia resources. (2281 followers) http://twitter.com/featuredheights

Lumina Analytics is now following me on Twitter! Bio: We are a big data, predictive analytics firm providing insightful risk management & security intelligence to large, regulated corporations & government clients. (10786 followers) http://twitter.com/LuminaAnalytics


MuleSoft is now following me on Twitter! Bio: MuleSoft makes it easy to connect the world’s applications, data and devices. (59039 followers) http://twitter.com/MuleSoft


Pyramid Analytics is now following me on Twitter! Bio: Bridging the gap between business and IT user needs with a self-service Governed #Data Discovery platform available on any device. #BIOffice #BI #Analytics (6729 followers) http://twitter.com/PyramidAnalytic


Kevin Yu is now following me on Twitter! Bio: Co-founder & CTO @socedo transforms B2B marketing with social media by democratizing #CloudComputing & #BigData. Husband of 1, dad of 2, tech and sports junkie. (68521 followers) http://twitter.com/kevincyu

Cheese Lover is now following me on Twitter! Bio: Lover of #cheese and interested in #education (0 followers) http://twitter.com/CheeseLoverBot

Siemens (2013) mind map deconstruction and commentary

Mind map deconstruction of Siemens, G. (2013) ‘Learning Analytics: The Emergence of a Discipline’.  Click to open a higher resolution version in a new browser tab.  A browser with zoom functionality will be needed to see the detail.

Learning analytics

My experience of learning analytics is at a fairly rudimentary level.  The tools I have built into the Academy I manage enable me to look at data from a macro level through to individual learner completions.  I don’t have sophisticated tracking of learner navigation, in terms of click-through, time spent on task and so on, immediately to hand, although some of this information is being recorded, and looking at the subject in more detail this week has prompted me to look again at data that could provide valuable insights.
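As a sketch of the kind of insight that recorded data might already support, here is a minimal Python example deriving time-on-task from raw open/close events; the event schema and timestamps are hypothetical, not our platform’s:

```python
from datetime import datetime, timedelta

# A sketch of deriving time-on-task per learner from raw open/close events.
# The event names and timestamps are hypothetical illustrations, not our
# platform's actual schema.
events = [
    ("learner_1", "open",  datetime(2017, 3, 20, 9, 0)),
    ("learner_1", "close", datetime(2017, 3, 20, 9, 25)),
    ("learner_2", "open",  datetime(2017, 3, 21, 14, 0)),
    ("learner_2", "close", datetime(2017, 3, 21, 14, 10)),
]

time_on_task, open_at = {}, {}
for learner, action, ts in events:
    if action == "open":
        open_at[learner] = ts
    elif action == "close" and learner in open_at:
        session = ts - open_at.pop(learner)
        time_on_task[learner] = time_on_task.get(learner, timedelta()) + session

for learner, total in sorted(time_on_task.items()):
    print(learner, total)  # e.g. learner_1 0:25:00
```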

Siemens makes the point that “To be effective, holistic, and transferable, future analytics projects must afford the capacity to include additional data through observation and human manipulation of the existing data sets”.  The additional data and human observation I find most valuable in my own professional practice are the insights gained from the social tools I have built into the Learning Academy.  Discussion and blog comments augment and add colour to the otherwise ‘dry’ learning analytics data.  Together these two resources do enable me to “incorporate […] feedback into future design of learning content”, and in some cases into existing content.

I think the output of learning analytics alone, without the additional layer of human-created metadata, would not provide me with sufficient information to judge what learning had taken place or the effectiveness of the materials provided.  As Siemens suggests, “The learning process is creative, requiring the generation of new ideas, approaches, and concepts. Analytics, in contrast, is about identifying and revealing what already exists.” and “The learning process is essentially social and cannot be completely reduced to algorithms.”

Organisational capacity

“Prior to launching a project, organizations will benefit from taking stock of their capacity for analytics and willingness to have analytics have an impact on existing processes.”  I wonder how often this happens in reality.  The business I work for is both purpose and numbers driven, the strategy (hope) being that the former drives the latter.  There is certainly a willingness to react to analytics in all aspects of the business, whether that be customer satisfaction scores, unit sales or learning and development provision.  In my view there is also a danger in reacting immediately to analytics: strategy is a long-game activity, where cultural and other changes can take months or even years to take effect.

Privacy and scope

Siemens raises some important issues around privacy and scope. “Distributed and fragmented data present a significant challenge for analytics researchers. The data trails that learners generate are captured in different systems and databases. The experiences of learners interacting with content, each other, and software systems are not available as a coherent whole for analysis.”  I’ve attempted to combat this by integrating everything into one platform, with a single view of the learner.  Where this hasn’t been possible we’ve gone to great lengths to implement a single sign-on solution, which is not only easier and more convenient for the learner but also helps avoid some of the issues Siemens raises.

From a privacy perspective I’ve implemented as open a model as I’m able to with the data that is available.  I’d love to be able to do more to personalise learning for individual learners but, as with all commercial operations, this comes back to the three levers of cost, time available and quality achievable.  However, our learners are able to interrogate their own learner record and they have an achievements wall where all learning completed online is tracked, along with any external achievements the learner wishes to add manually.  They can also see how their achievements compare to those of their peers. In this respect learners can “see what the institution sees”.

All references are from

Siemens, G. (2013) Learning Analytics: the emergence of a discipline. American Behavioral Scientist, 57(10): 1380-1400

Week 8 Lifestream Summary

It’s been an interesting week experimenting with algorithms. I’ve enjoyed trying to ‘reverse engineer’ the Amazon recommendation algorithm and, ultimately, going some way toward disproving my own hypotheses.

Reflecting on the cultural aspects of algorithms, I see dichotomies in the views people hold about them similar to those we saw documented in the literature relating to cyberculture and community culture.  To me this is clearly a linking theme, and I see possibilities in exploring it in my final assignment for this course.

As with many other topics, the views people hold are likely to be heavily influenced by the media and, just as all things ‘cyber’ are often painted as worthy of suspicion, this does seem to be a ‘go to’ stance for copywriters and producers when taking a position on algorithms.  The banking crisis is probably the biggest worldwide event that has contributed to this, and other stories, such as reliability issues with self-driving cars or the inaccuracy of ‘precision munitions’, add to the general feeling of unease around the use and purpose of algorithms.  I chose the latter two examples deliberately as there is a moral as well as a technical aspect to both.

So the stories about algorithms that help people control prosthetic limbs more effectively, or to ‘see’ with retinal implants, or even driverless cars travelling tens of thousands of miles without incident, can be lost amongst more sensationalist stories of those same cars deciding ‘whose life is worth more’ when an accident is unavoidable.

As a result I wonder how much knowledge the general public has about the algorithms that make their day to day life a little easier, by better predicting the weather, ensuring traffic junctions cope better with rush hour traffic, or even just helping people select a movie they’re likely to enjoy.

One could argue that this underlying distrust of algorithms is no bad thing, particularly if it can lead to unbiased critical appraisal of their use in a particular field, as highlighted by Knox (2015) with regard to their use in education:

“Critical research, sometimes associated with the burgeoning field of Software Studies, has sought to examine and question such algorithms as guarantors of objectivity, authority and efficiency. This work is focused on highlighting the assumptions and rules already encoded into algorithmic operation, such that they are considered always political and always biased.”

This week has made me a little more uneasy about the way “algorithms not only censor educational content, but also work to construct learning subjects, academic practices, and institutional strategies” (Knox, 2015).  In my professional practice we do not have the sophistication of systems that would make this a concern, but our learners are exposed to and learn from other systems, and their apprehensions about how we might use their data will no doubt be coloured by their view of ‘big data’.  With that in mind, this is clearly a subject I should have on my radar.

Apologies for writing double the word limit for this summary and including new content rather than summarising – it’s one of those subjects where, once you start writing, it’s difficult to stop!

References:

Knox, J. (2015) Algorithmic Cultures. Excerpt from Critical Education and Digital Cultures. In Encyclopedia of Educational Philosophy and Theory, M. A. Peters (ed.). DOI 10.1007/978-981-287-532-7_124-1


TWEET: Microsoft white paper on the future of education

Link to Microsoft’s white paper on the way education is changing.

“Learning technology is not a simple application of computer science to education or vice versa”

“Universities tend to be proactive in their approach to preparing undergraduates for the world of work. However, as the employment landscape becomes increasingly fluid, universities must constantly update their teaching practices to suit the demands of the jobs market.”

“This means that students and academics could be working on anything up to three internet-enabled digital devices in a single session: a laptop or desktop, a tablet and a smartphone. Students, like most modern employees, are working on the move, at any time of day, in almost any location as work and leisure hours become blurred by increasingly ‘mobile’ lives”  Yes, definitely reflects my life!

“The primary applications for artificially intelligent systems in HE will occur within marking and assessment. Automated systems designed to mark essays, for example, will reduce the time spent by academics on paperwork and increase their face-to-face time with students or their time spent engaging with research occurring outside of the institution.”  I hope this never happens.

“Work in the future will be more interconnected and network-oriented. Employees will be working across specialist knowledge boundaries as technologies and disciplines converge, requiring a blend of technical training and the ‘soft’ skills associated with collaboration.”  I think we’re already starting to see this happening.

“Learner or predictive analytics […] can serve to both measure and shape a student’s progress. Universities will also unlock new insight into how students are engaging in digital and physical spaces.”  Very relevant to this algorithmic cultures block.

“Although mobile technology has permanently changed learning environments, all of our interviewees stressed the point that learning technology should be a tool and never the end goal. The ideal university education is still about improving a student’s ability to produce appropriate ideas, solve problems correctly, build on complex theories and make accurate inferences from the available information.” Hurrah!