Algorithms; community and eye tracking in VR

Given that we’ve moved on to algorithms and culture, I thought I’d check out some content from Reddit. Reddit is driven by an algorithm, though human interaction also plays a part in driving content up or down its hierarchy. The site dates from a pre-Web 2.0 era, but it holds in there, with huge communities built up around its “sub-reddits”. I found an interesting discussion opening up over on /r/vive, a section of Reddit dedicated primarily to the HTC Vive, but also to VR in general.

“With all the talk about eye tracking inside VR headsets, I wanted to ask and see how many people on here have had LASIK eye surgery. The reason I am asking is because I believe I have discovered a fatal flaw in eye tracking technology when it comes to people who have had LASIK eye surgery.”

Eye tracking flaw after LASIK surgery from Vive

The user suggests that after LASIK eye surgery, eye tracking software (which is increasingly likely to be bundled along with VR headsets. I can hear Facebook salivating at the thought of tracking eyeballs and adverts…) may fail to work properly for those users.

I “upvoted” the discussion, as I feel it’s an interesting and well-considered original post. However, the algorithm behind Reddit, combined with other users’ interactions with the story, will determine how long the post remains visible to other users, and its position on the page.
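Reddit’s ranking code was open source for many years, so the “hot” formula behind that position-on-the-page decision is publicly known. A minimal Python sketch of the published formula (the epoch constant is Reddit’s own magic number; this is a reconstruction, not the live site’s current code) shows how vote score and submission time jointly decide where a post sits:

```python
import math
from datetime import datetime, timezone

# Reddit's historical epoch constant, taken from its open-sourced ranking code.
REDDIT_EPOCH = 1134028003

def epoch_seconds(date):
    """Seconds since the Unix epoch, treating naive datetimes as UTC."""
    return date.replace(tzinfo=timezone.utc).timestamp()

def hot(ups, downs, date):
    """Reddit's published 'hot' score: vote score is log-scaled,
    so recency quickly outweighs all but large score differences."""
    score = ups - downs
    order = math.log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = epoch_seconds(date) - REDDIT_EPOCH
    return round(sign * order + seconds / 45000, 7)
```

Because the score term is logarithmic while the time term grows linearly, a newer post with a modest score soon overtakes an older one with many more upvotes, which is why even a well-received post slides down the page after a day or so.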


My Micro-Ethnography and Week 7 round-up.

Here is the link to my Micro-Ethnography.

I created it using an “artificially intelligent” web design package. This is part of its editing facility:

I spent too long on this exercise at the expense of other things (such as tidying up this blog), which was not what I intended. Nevertheless, I feel I now have an understanding of Kozinets (2010) and an appreciation for other netnographers that I did not start with. For an assessment I’d need to suck up my guts and sort out the niggling bits of grammar, spelling and structure. As it is, I think it stands well enough for the kind of “low-stakes” exercise we’ve been asked for. I’m both happy and annoyed with it at the same time: I like it for what it is and what it represents, and I’m frustrated that I haven’t been able to do more in the time I had.

Week 6 otherwise has been quieter on the life-streaming front. I followed in the footsteps of Fournier, Kop and Durand (2014) and tried out NVivo, and I share their reservations about it (a topic for another blog post, perhaps). In the end I just eyeballed my data and counted in my head…

I checked out the excellent micro-ethnography submissions from my EDC cohort, and managed to get comments through from their blogs on to mine.

I’m interested in pursuing something around virtual reality and community for my assessment, so I’m trying to pull relevant articles into my lifestream.

Categories are set up for most (if not all) of my posts, which I’ll need to do something with; for now they can be selected to filter down to some of the themes of my lifestream.

On to block three…

Fournier, H., Kop, R. & Durand, G., (2014). Challenges to Research in MOOCs. Journal of Online Learning and Teaching, 10(1), pp.1–15.

Comments for Linzi’s EDC blog

Comment on A Mini-ethnography by cmiller

An automicrowebnography! Very nice presentation style too. It really helps to provide pace to your writing.

I’m very interested in your initial experience, as I’m looking through the lens of parts of Kozinets’ work for my ethnography. My findings for my MOOC seem to place the community somewhere very different.

When I see your response from the community, I’m more sure that my own conclusions could well stand up to more rigorous debate: namely, that we need a specific matrix to account for MOOC communities. They just do not seem to behave like “normal” online communities!

from Comments for Linzi’s EDC blog http://ift.tt/2lsspCi
via IFTTT


Comment on A Mini-ethnography by cmiller

What I mean is that your experience with the community on your MOOC and my experience with my MOOC appear to be polar opposites, but neither, I think, is adequately covered by the matrix presented by Kozinets!

from Comments for Linzi’s EDC blog http://ift.tt/2m7Rvn3
via IFTTT

Comment over on Eli’s EDC blog

Comment on Tweet! IFTTT tools that we would recommend by cmiller

I tried to get YouTube playlist and IFTTT working together, but it didn’t work.

I’m loath to use the “liked” function with YouTube because I’d spend more time removing junk from my blog than I would save over manually cutting and pasting my video link directly, which is what I’ve taken to doing.

I am very impressed with the layout and legibility of your blog though. Something I aspire to, but time is not my own at the moment.

I’m a bit annoyed at the restrictions on our WordPress, tbh. We can’t apply our own templates, edit CSS or really do much with it other than pick a theme… or am I missing something?

Cheers,

Colin

from Comments for Eli’s EDC blog http://ift.tt/2mp9V5h
via IFTTT

Comment on Lifestream, Tweets by cmiller

Excellent production. Not sure if it was intended, but with the audio quality as it was, I felt I was actually eavesdropping on your conversation rather than watching a YouTube video!

I’m not sure about the idea of MOOCs getting in the way of conversation. It’s not like our MSCEDC discussion forum is a hotbed of informed debate either…

from Comments for Renée’s EDC blog http://ift.tt/2lLJZgT
via IFTTT

Data – The Machine Will Out

“With a high volume of data, there was no other choice than to utilize a computer program to aid in organizing the data and increase rigor by coding all data systematically” (Fournier et al, 2014 p6).

Thanks to MOOCs, which are made possible via computers and the Internet, the data sets generated can be so vast that there is “no other choice” (ibid) but to use a computer to analyse the results. Fournier recognises the shortcomings of the “restrictive nature” (ibid) of such tools but carries on with them regardless.

The software used was NVivo (see QSR video below). Does the software claim to be more than human? It seems like it.

“Maximize your knowledge. Gain an Edge, and make better decisions” (0:24). Not just “better”, but this software actually “helps you make intelligent decisions” (0:40), so you can “make robust decisions faster” (2:40) and “uncover insights faster” (4:09). “It’s the perfect option to start your research journey” (1:20).

This one was interesting though: “discover emerging themes, patterns and sentiment in minutes” (2:27). Sentiment! Interpreting sentiment is surely the domain of the human. Should we leave software to “[count] particular words, rather than interpreting them as a human researcher might do?” (Fournier et al, 2014 p6).
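To illustrate the gap Fournier et al. point to, here is a deliberately naive word-counting “sentiment” scorer in Python. The word lists are invented for this sketch and bear no relation to NVivo’s actual method; the point is simply what counting, as opposed to interpreting, gets wrong:

```python
# Toy word lists, invented for this illustration only.
POSITIVE = {"good", "great", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def naive_sentiment(text):
    """Score text by counting positive and negative words.
    No grammar, no context: 'not good' still counts as positive."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

A counter like this labels “this is not good” as positive, because it sees “good” but cannot see the negation. That is exactly the kind of interpretive work a human researcher does without thinking, and that a word count cannot.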

Fournier et al argue that human and machine working together is preferable for research in and around MOOC contributions. So I’m now signed up to a 14-day trial of this tool and I can see whether or not I feel my knowledge is “maximised”, an “edge” is gained, and my quick decision making is “better”, “robust” and “intelligent”. This will form part of my micro-ethnography submission, I hope.

QSR International Video source:

Fournier, H., Kop, R. & Durand, G., (2014). Challenges to Research in MOOCs. Journal of Online Learning and Teaching, 10(1), pp.1–15.