Category Archives: Blog posts

Week 10 Lifestream summary

A work project moving from the planning phase into full-on delivery, together with commitments over the weekend, left precious little time for blogging last week. However, I did manage to find several periods over the week to write the required analysis of the Tweetorial. I’ve since had a few more thoughts on the use of Twitter for education and I will either add these to the analysis or create a separate blog post.

The fact that I have what amounts to some self-imposed analytics on my Lifestream, in the form of the calendar of blog posts, hasn’t escaped me.

Calendar of blog posts

I included the calendar for two reasons, firstly because I thought it might be helpful to future students of this course who visit my blog and secondly because it’s a reminder to me of the course requirement to ‘add to the Lifestream almost every day’.   The irony of this is that the Tweetorial analysis I worked on over several days only shows as a single post – another example of analytics not necessarily ‘making visible the invisible’.

As part of my current work project I’m using Articulate Storyline to create a tool that will enable our practice managers to review their current knowledge and use their input to point them to resources that will help them.  This has involved creating a means of filtering their input, which has required a multi-stage approach and several hundred conditional triggers.  In effect I’m writing my own algorithm and it will be interesting to apply some of the thinking I’ve done around Algorithmic Cultures to how the tool might be viewed by those it’s intended for, and by others in the business.
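The multi-stage filtering described above is, in essence, a rule-based recommender. As a minimal sketch (outside Storyline, in Python, with entirely made-up topics and resource names), the several hundred conditional triggers collapse into a lookup table of rules:

```python
# Hypothetical sketch of the kind of rule-based filtering that Storyline's
# conditional triggers implement: each rule maps a practice manager's
# self-assessment answer to the resources they are pointed towards.
# All topic and resource names here are invented for illustration.
RULES = {
    ("data_protection", "low"): ["GDPR basics e-learning", "Policy summary PDF"],
    ("data_protection", "high"): ["Advanced case studies"],
    ("complaints", "low"): ["Complaints handling workshop"],
}

def recommend(topic, confidence):
    """Return the resources matched by the (topic, confidence) rule,
    falling back to a default when no rule matches."""
    return RULES.get((topic, confidence), ["Speak to your L&D contact"])
```

Each Storyline trigger effectively encodes one entry in a table like this, which is partly why the questions raised in the Algorithmic Cultures block apply just as much to a hand-written rule set as to a machine-learned one.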

The Tutorial on Friday was lively and useful.  It was interesting to hear everyone’s views on the Tweetorial and the Algorithmic Cultures block.  In common with my fellow students my thoughts are now turning to tidying up my blog, continuing to add metadata and starting preparation for the multi-modal essay.

The Tweetorial

How has the Twitter archive represented our Tweetorial?

1. Ranked league tables

Winners podium
from http://www.eventprophire.com

The first thing that struck me about the analytics is that many of them are ranked and use words like ‘top’ or heat-map style representations with most used words or most frequently mentioned contributors shown in decreasing order of size on the graphic.

Studies such as Cherry and Ellis (2005) indicate that rank-order grading drives competition. Whilst this is not the intention of the analytics, I did find myself taking an interest in where I ‘ranked’ in the various tables and charts, and had some sense of achievement in appearing in the ‘top’ half or higher in most of them.

This sense of achievement is, of course, entirely spurious. Most of the results indicate quantity rather than quality. Siemens (2013) raises this as an issue: “Concerns about data quality, sufficient scope of the data captured to reflect accurately the learning experience, privacy, and ethics of analytics are among the most significant concerns”. Dirk’s Tweetorial analysis highlights this well, and he asks a similar question: “Is any of the presented data and data analysis relevant at all? Does it say anything about quality?” It doesn’t for the use we are making of it, but for a marketeer, knowing who the key influencers are would be very useful.

2. Participation

I was working from home on the day of the Tweetorial so was able to join in for several hours. Borrowing from Kozinets’ (2010) classifications of community participation, my own contribution to the event felt like a reasonable balance between ‘mingler’, ‘lurker’ and ‘insider’. The quantitative nature of the analytics does not enable any distinction between these types of participation.

The high number of mentions I had was largely due to my experiment early in the day, attempting to ‘attract’ algorithm-driven followers through keywords. I had noticed that I was gaining followers from the data analytics and artificial intelligence fields, presumably based on the content of my tweets, so I decided to try tweeting the names of cheeses to find out if this yielded similar results. Helen took up the theme and ran with it, and this became the source of some playful behaviours and social cohesion over the course of the day. A good example, perhaps, of the first-follower principle illustrated in Derek Sivers’ (much debated) ‘Leadership lessons from dancing guy’ video:

3. Most used words

Interestingly, the word ‘cheese’ and other words such as ‘need’ that appear quite prominently on the heatmap below, shared by Anne, do not appear in the analytics linked from the course site. This is likely to be due to the capture period selected and, if so, it illustrates how statistics can be manipulated, both intentionally and unintentionally, to convey a particular narrative.

The most used words seem fairly constrained: words I’d expect to see given the nature of the questions are there, but having taken part in the Twitter ‘conversation’ I can see that they do not capture the diversity of the topics discussed. Some of the more diverse words do show up in the hashtag heat map.
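A ‘most used words’ view is just a frequency count after stop-word removal, and both the stop-word list and the capture window change which words appear prominent. A rough Python sketch (the tweets and stop words here are invented for illustration; Tweet Archivist’s actual choices are unknown to me):

```python
from collections import Counter

# Invented tweets standing in for the Tweetorial archive
tweets = [
    "perhaps the algorithm needs cheese",
    "we need better analytics",
    "cheese is the answer perhaps",
]

# The tool's real stop-word list is unknown; this one is an assumption
STOP_WORDS = {"the", "is", "we", "a"}

# Flatten to words, drop stop words, then count and rank
words = [w for t in tweets for w in t.lower().split() if w not in STOP_WORDS]
top = Counter(words).most_common(3)
# 'cheese' and 'perhaps' rise to the top here; drop one tweet from the
# capture window and the prominent words change
```

Change the stop-word set or the capture window and a different ‘narrative’ emerges from the same conversation.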

Cathy’s Thinglink summary of the Tweetorial points out the frequent use of the word ‘perhaps’ and she offers a possible explanation “It may reflect a tentative and questioning aspect of some tweets”.  I know I tend to use the word when I have not qualified a statement with a source, or I feel I’m interpreting a source differently to the way the author intended, so this might be another explanation…perhaps.

Overall, while the analytics impose some order on the presentation of the data, human interpretation by someone who was present during the event (shades of the ethnography exercise here) is necessary to make sense of them. As Siemens (2013) points out, “The learning process is essentially social and cannot be completely reduced to algorithms.”

What do these visualisations, summaries and snapshots say about what happened during our Tweetorial, and do they accurately represent the ways you perceived the Tweetorial to unfold, as well as your own contributions?

1. Volume over time

This is another example of the time frame used providing only a limited insight. In this case, the fact that the number of tweets increased markedly on the days of the Tweetorial is hardly an insight at all. I’ll refrain from using a popular British idiom involving a fictional detective here, but this would only have been an insight had the reverse been true, or had there been no increase at all. Had either occurred, these alternative scenarios would also have required human interpretation to make any sense of them.

A more useful time frame might have been 15-minute slots over the course of the two days (or even each minute), as the data could then have been aligned to when new questions were asked by Jeremy or James. It would then have been possible to see the different levels of activity following each question and pass judgement on which were the most effective at generating debate. However, even with a greater degree of granularity it still wouldn’t have been possible to attribute an increase in activity to a tutor question, as it could also have been due to a supplementary question being asked by one of the students.
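Binning the archive into fixed-width slots would be straightforward once the tweet timestamps were available. A sketch in Python (the timestamps are invented):

```python
from collections import Counter
from datetime import datetime

def bin_tweets(timestamps, minutes=15):
    """Count tweets per fixed-width time slot, keyed by the slot's start time."""
    def slot(ts):
        # Truncate the timestamp down to the start of its slot
        return ts.replace(minute=(ts.minute // minutes) * minutes,
                          second=0, microsecond=0)
    return Counter(slot(ts) for ts in timestamps)

# Three invented tweet times: two fall in the 10:00-10:15 slot, one in the next
times = [datetime(2017, 3, 16, 10, 2),
         datetime(2017, 3, 16, 10, 14),
         datetime(2017, 3, 16, 10, 20)]
bins = bin_tweets(times)
```

Even then, as noted above, aligning these slot counts with the times of the tutors’ questions would still need human interpretation to say anything about cause.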

2. The contribution of others

The user mentions heat map has Jeremy and James as central to the discussions, presumably because a lot of the tweets were posted as replies to their questions. While they were active contributors, I don’t think they were as central to the discussions as the heat map would suggest; indeed, the focus moved around between contributors as the discussions progressed.

3. My own contributions

I’ve already made some observations about quantity versus quality and the top-tweeter, Philip, has rather humbly (unnecessarily so) made similar self-deprecating comments about his own contributions.

Being purely quantitative, the analytics would provide no useful data if students’ contributions were being assessed and graded for educational purposes. I made a similar point during the Tweetorial – simply counting the number of tweets is similar to the way some learning management systems count learning as ‘completed’ if a learner opens a PDF or other document.

As well as academic discourse I believe some social interaction and risk taking by participants is good for healthy debate, but again the limited analytics we have available do not provide any insights into this type of community participation.

4. Images 

I’m not sure if it’s because I’m a particularly ‘visual’ person, but I found the images give by far the most accurate representation of how the Tweetorial felt to take part in. They capture both the academic and social aspects of the conversations and they provide a useful ongoing resource.

What might be the educational value or limitations of these kinds of visualisations and summaries, and how do they relate to the ‘learning’ that might have taken place during the ‘Tweetorial’?

1. The format

As a medium for education the format would take some getting used to.  The multiple streams of discourse can be difficult to follow and I felt the conversation had often moved on by the time I reflected on a particular point and formulated my answer.  I experienced a very similar situation during a previous course when I took part in a synchronous reading of one of the set papers and an accompanying Twitter chat.  It was soon clear that everyone read at a different pace and before long the whole thing was out of sync and one paragraph was being confused with another.  Tools such as Tweetdeck and Hootsuite do help visualise the conversation by allowing the user to split a continuous stream into multiple columns, for example one column for a specific person, another for key word(s) and so on.

I see some potential as a means of kick-starting a discussion: the pace and multi-modality can generate a lot of ideas and links to resources very quickly. Follow-up activities could then explore the various threads in more detail, with further Tweetorial(s) to reinvigorate any topics that slow down or stall.

In this experiment there was some value in not knowing exactly what analytics were going to be recorded, as this made it less likely that our behaviours would be influenced. Personally, I had forgotten there would be any analysis by the time the second question was asked. If I was going to use this format with my learners and analytics were going to be used, I think I would adopt an open model and be clear up front about the limited nature of what was going to be recorded and how it would be used.

2. The analytics

In his blog post ‘Abstracting Learning Analytics’, Jeremy Knox writes: “… my argument is that if we focus exclusively on whether the educational reality depicted by analysis is truthful or not, we seem to remain locked-in to the idea that a ‘good’ Learning Analytics is a transparent one.”

In this blog post Knox refers to a painting of Stalin lifting a child and points out that there might be more to be understood from abstracting this depiction than might be gained from “attempting to come up with a new, more accurate painting that shows us what Stalin was really doing at the time.”

So, what if we take a more abstract view of the depictions of the Tweetorial presented by the Tweet Archivist analytics?   Following Knox’s lead perhaps the real questions we should be asking include:

  • Why have these particular data been selected as important?
  • Why is the number of mentions an individual receives considered more important than, for example, the number of links to external resources they provide?
  • Why is a ranked or heat map view used rather than a spider graph or other mechanism that might better demonstrate connections?

Knox brings this idea of taking a more abstract view of analytics back to education: “What may be a far more significant analysis of education in our times is not whether our measurements are accurate, but why we are fixated on the kinds of measurements we are making, and how this computational thinking is being shaped by the operations of the code that make Learning Analytics possible.”

In the case of the Tweetorial, analytics were provided to us, possibly in the knowledge that they would raise precisely the sort of ‘lack of transparency’ questions I have discussed above. In reality I could take Dirk’s example a step further and carry out my own data collection and analysis, or use a different tool such as ‘Keyhole’ (shown below), which provides additional views such as ‘sentiment’ scores (the percentage of tweets that are positive, negative or neutral) and any gender bias.
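Keyhole’s sentiment classifier is proprietary, but the simplest possible version of a ‘percentage positive/negative/neutral’ score is a keyword lexicon, which gives a feel for how fragile such scores can be. A naive Python sketch (the lexicon and tweets are invented for illustration and are not how Keyhole actually works):

```python
# Invented sentiment lexicons; a real tool's word lists are far larger
POSITIVE = {"great", "useful", "love"}
NEGATIVE = {"spurious", "fail", "worry"}

def sentiment_split(tweets):
    """Classify each tweet by keyword match and return rounded percentages."""
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    for t in tweets:
        words = set(t.lower().split())
        if words & POSITIVE:
            counts["positive"] += 1
        elif words & NEGATIVE:
            counts["negative"] += 1
        else:
            counts["neutral"] += 1
    total = len(tweets)
    return {k: round(100 * v / total) for k, v in counts.items()}
```

A tweet that ironically called the analytics ‘great’ would be scored positive by an approach like this, which says something about how much weight such percentages deserve.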

Analytics from keyhole.co (click image to open in higher resolution)

Similarly, in my own professional practice I could take a critical look at the data we’re collecting and ask some fundamental questions about what it tells us about our organisation and what we value in our learners.

References:

Cherry, T. and Ellis, L.V. (2005) Does Rank-Order Grading Improve Student Performance? Evidence from a Classroom Experiment. International Review of Economics Education, 4(1): 9-19

Siemens, G. (2013) Learning Analytics: the emergence of a discipline. American Behavioral Scientist, 57(10): 1380-1400

Kozinets, R. V. (2010) Chapter 2 ‘Understanding Culture Online’, Netnography: doing ethnographic research online. London: Sage. pp. 21-40.

Sivers, D.  TED talk ‘How to start a movement’ https://www.ted.com/talks/derek_sivers_how_to_start_a_movement

Week 9 Lifestream summary

flat design illustration of human development - self development

As well as taking part in the Tweetorial activity this week, I’ve started to go back through each week’s readings and update or add posts where I have more to say, or where I feel I now have a better understanding of the topic.

A TED talk by Amber Case, which I happened upon when researching Haraway (2007), has featured in several of my posts as, for me, some of the concepts she discusses resonated with all three blocks of the course. I created this visual artefact in response and have updated my thoughts on cyborgs and aspects of digital communities here.

I’ve started to pull my Tweetorial tweets into an order and I was intending to add some commentary to them, although I’ve now seen that a much more in-depth analysis of all the activity is required, so I’ll focus my attention on that instead. It was an interesting exercise in social learning and, because it was carried out in the public domain, it illustrated some of the concepts around community and algorithms very well. My attempts to entice Twitter bot followers with keywords failed miserably, but worked when I wasn’t actually trying. This was the original source of the cheese-related content, which in itself illustrated some interesting social cohesion, particularly once @helenwalker7 had taken up the challenge and run with it!

I’ve also started adding more metadata to many of the links embedded via IFTTT to indicate why I linked them and to expand on the points raised in the linked articles and videos (this for example). There’s more to do and some general housekeeping to make the whole blog more navigable, but I feel this weekend’s efforts have moved my understanding a step or two up the ladder.

We are all cyborgs now

“By the late twentieth century, our time, a mythic time, we are all chimeras, theorized and fabricated hybrids of machine and organism; in short, we are cyborgs. The cyborg is our ontology; it gives us our politics”

Haraway, D (2007) A cyborg manifesto

References:

Haraway, Donna (2007) A cyborg manifesto from Bell, David; Kennedy, Barbara M (eds),  The cybercultures reader pp.34-65, London: Routledge.

Also heavily influenced by this TED talk by Amber Case

Most images are composites constructed from iStock and Google Images

Remixed here by Dirk Schwindenhammer:

Recollections of Miller, V (2011)

Image borrowed from http://thequestionconcerningtechnology.blogspot.co.uk/2014/04/ecotone-renegotiating-boundaries.html

“the fourth discontinuity is yet to be overcome and is the distinction between humans and machines”

“Norbert Wiener suggested that a pilot/aeroplane could be seen as a self-governing mechanism that continually processes and tries to respond to external stimuli under a complex, though ultimately predictable set of rules, in order to maintain homeostasis (that is, stability and control)” (Miller, V. 2011 Chapter 9, p211)

A number of interactions I had this morning, with a learner using the VLE I manage and with the VLE itself, brought me back to thinking about this paper and Wiener’s idea of the man-machine self-governing mechanism.

Whilst out and about, my smart-watch alerted me to a forum message from a VLE user. I opened this on my phone to find the details of the issue, which related to a duplicate account having been created in error after the learner was locked out of an existing account. I logged into the VLE, resolved the issue there and then, and messaged the learner back to let them know everything was sorted. Whilst I was logged into the VLE, automated notifications alerted me to a couple of small housekeeping tasks that needed completing and I dealt with those too. A few minutes later a notification popped up on my smart-watch with a ‘thank you’ from the learner. Normal service had been resumed.

As Miller proposes, the lines between human and machine in those interactions were certainly blurred, and one could argue that an observer might find it difficult to determine whether the machines were serving me or vice versa, or, as Miller suggests, that the machines and I were ‘working together as a self-governing mechanism’.

My connections to my phone, my smart-watch and the remote VLE also reminded me of Donna Haraway’s ‘A manifesto for cyborgs’ and her proposition that “we are all chimeras, theorized and fabricated hybrids of machine and organism” (Haraway, 1991: 149-150), or as Hand (2008) states, “…what was previously visible as the hardware of technoculture and information culture is now increasingly invisible as the infrastructure of contemporary digital culture”.

References:

Miller, V. (2011) Chapter 9: The Body and Information Technology, in Understanding Digital Culture. London: Sage.

Haraway, Donna (2007) A cyborg manifesto from Bell, David; Kennedy, Barbara M (eds),  The cybercultures reader pp.34-65, London: Routledge.

Hand, M (2008) Hardware to everywhere: narratives of promise and threat, chapter 1 of Making digital cultures: access, interactivity and authenticity. Aldershot: Ashgate. pp 15-42.

Tweetorial tweets

This page brings together all my tweets and replies and new followers that resulted from the tweet storm (which I believe are relevant in this context).

Ben Williamson is now following me on Twitter! Bio: Digital data, ‘smart’ technology & education policy. Lecturer @StirUni (1621 followers) http://twitter.com/BenPatrickWill

Dr. GP Pulipaka is now following me on Twitter! Bio: Ganapathi Pulipaka | Founder and CEO @deepsingularity | Bestselling Author | #Bigdata | #IoT | #Startups | #SAP #MachineLearning #DeepLearning #DataScience. (19910 followers) http://twitter.com/gp_pulipaka

 

Michael J.D. Warner is now following me on Twitter! Bio: CEO @ThunderReach ⚡️ #socialmedia #marketing + VIP digital services ➡️ https://t.co/Rf6jA4EIEo • ig @mjdwarner • ✉️ceo@thunderreach.com ⚣ #gay 📍toronto • nyc (98298 followers) http://twitter.com/mjdwarner

 

Featured Heights is now following me on Twitter! Bio: Elevating your #brand with creative websites & engaging marketing. Sharing #marketing, #webDev, #design, #ux & #socialmedia resources. (2281 followers) http://twitter.com/featuredheights

Lumina Analytics is now following me on Twitter! Bio: We are a big data, predictive analytics firm providing insightful risk management & security intelligence to large, regulated corporations & government clients. (10786 followers) http://twitter.com/LuminaAnalytics

 

MuleSoft is now following me on Twitter! Bio: MuleSoft makes it easy to connect the world’s applications, data and devices. (59039 followers) http://twitter.com/MuleSoft

 

Pyramid Analytics is now following me on Twitter! Bio: Bridging the gap between business and IT user needs with a self-service Governed #Data Discovery platform available on any device. #BIOffice #BI #Analytics (6729 followers) http://twitter.com/PyramidAnalytic

 

Kevin Yu is now following me on Twitter! Bio: Co-founder & CTO @socedo transforms B2B marketing with social media by democratizing #CloudComputing & #BigData. Husband of 1, dad of 2, tech and sports junkie. (68521 followers) http://twitter.com/kevincyu

Cheese Lover is now following me on Twitter! Bio: Lover of #cheese and interested in #education (0 followers) http://twitter.com/CheeseLoverBot

Siemens (2013) mind map deconstruction and commentary

Mindmap deconstruction of Siemens, G. (2013) Learning Analytics: The Emergence of a Discipline. Click to open a higher-resolution version in a new browser tab. A browser with zoom functionality will be needed to see the detail.

Learning analytics

My experience of learning analytics is at a fairly rudimentary level. The tools I have built into the Academy I manage enable me to look at data from a macro level through to individual learner completions. I don’t immediately have to hand the sophisticated tracking of learner navigation in terms of click-through, time spent on task etc., although some of this information is being recorded, and looking at the subject in more detail this week has prompted me to look again at data that could provide valuable insights.

Siemens makes the point that “To be effective, holistic, and transferable, future analytics projects must afford the capacity to include additional data through observation and human manipulation of the existing data sets”. The additional data and human observation I find most valuable in my own professional practice are the insights gained from the social tools I have built into the Learning Academy. Discussion and blog comments augment and add colour to the otherwise ‘dry’ learning analytics data. Together these two resources do enable me to “incorporate […] feedback into future design of learning content”, and in some cases into existing content.

I think the output of learning analytics alone, without the additional layer of human-created metadata, would not provide me with sufficient information to judge what learning had taken place or the effectiveness of the materials provided. As Siemens suggests, “The learning process is creative, requiring the generation of new ideas, approaches, and concepts. Analytics, in contrast, is about identifying and revealing what already exists”, and “The learning process is essentially social and cannot be completely reduced to algorithms.”

Organisational capacity

“Prior to launching a project, organizations will benefit from taking stock of their capacity for analytics and willingness to have analytics have an impact on existing processes.”  I wonder how often this happens in reality.  The business I work for is both purpose and numbers driven, the strategy (hope) being that the former drives the latter.  There is certainly a willingness to react to analytics in all aspects of the business, whether that be customer satisfaction scores, unit sales or learning and development provision.  In my view there is also a danger in reacting immediately to analytics, strategy being a long-game activity, where cultural and other changes can take months or even years to shift.

Privacy and scope

Siemens raises some important issues around privacy and scope. “Distributed and fragmented data present a significant challenge for analytics researchers. The data trails that learners generate are captured in different systems and databases. The experiences of learners interacting with content, each other, and software systems are not available as a coherent whole for analysis.”  I’ve attempted to combat this by integrating everything into one platform, with a single view of the learner.  Where this hasn’t been possible we’ve gone to great lengths to implement a single sign on solution, which is both easier and more convenient for the learner but also helps avoid some of the issues Siemens raises.

From a privacy perspective I’ve implemented as open a model as I’m able to with the data that is available.  I’d love to be able to do more to personalise learning for individual learners but, as with all commercial operations, this comes back to the three levers of cost, time available and quality achievable.  However, our learners are able to interrogate their own learner record and they have an achievements wall where all learning completed online is tracked, along with any external achievements the learner wishes to add manually.  They can also see how their achievements compare to those of their peers. In this respect learners can “see what the institution sees”.

All references are from

Siemens, G. (2013) Learning Analytics: the emergence of a discipline. American Behavioral Scientist, 57(10): 1380-1400