While fooling around on Netflix, I thought I would see what would happen if I chose to watch a film from the ‘horror’ genre – one I would never normally choose, as I detest it!
The first movie on the list in this genre was The Last Days on Mars, which I watched a portion of and was relieved to discover was not that scary. Since I chose a film outside my normal realm of preferred viewing (crime dramas, comedies, foreign films and documentaries), how will this affect my future Netflix recommendations, if at all?
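Netflix does not publish the details of its recommender, but systems like this are commonly described in terms of collaborative filtering: scoring titles you have not seen using the ratings of viewers whose tastes overlap with yours. Here is a minimal sketch of that idea in Python – every user, title and rating below is invented purely for illustration:

```python
import math

# Toy user-by-title ratings matrix; all of this data is invented.
ratings = {
    "alice": {"Chef's Table": 5, "Friends": 4, "Transformers": 1},
    "bob":   {"Chef's Table": 4, "Friends": 5, "The Last Days on Mars": 2},
    "me":    {"Chef's Table": 5, "The Last Days on Mars": 3},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[t] * v[t] for t in shared)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def recommend(target, ratings):
    """Score titles the target has not seen, weighted by user similarity."""
    seen = ratings[target]
    scores = {}
    for user, theirs in ratings.items():
        if user == target:
            continue
        sim = cosine(seen, theirs)
        for title, rating in theirs.items():
            if title not in seen:
                scores[title] = scores.get(title, 0.0) + sim * rating
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(recommend("me", ratings))  # 'Friends' comes out on top for "me"
```

Even in this toy version, my single horror rating increases my similarity to another viewer who rated that film, which hints at how one out-of-character choice can ripple into future recommendations.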
The next Netflix algorithm at work became apparent in the notifications section, where Netflix recommended ‘top picks’ for me: Friends, Chef’s Table and Transformers.
Netflix also provided me with recommendations based on my past viewing habits, as seen in the following photo:
After reading Siemens’s (2013) article, I was enlightened about Google’s ‘knowledge graph’ as an example of “articulating and tracing the connectedness of knowledge” (p. 1389). Since I was eating a bowl of raspberries, I tried searching for “amount of calories in 10 raspberries” and got the following results from Google’s knowledge graph:
Google’s knowledge graph provided me with a wealth of information on raspberries, citing Wikipedia and the USDA as its sources. Although the information is useful and can be obtained without going beyond the initial Google search, why does Google draw its source information from only Wikipedia and the USDA? Are there other sources that are not listed? Why does the algorithm work this way – what is happening behind the scenes that I’m not seeing?
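Some of this can be poked at directly: Google exposes a public Knowledge Graph Search API that returns entities as JSON-LD, including, where present, a Wikipedia-sourced snippet with an attribution URL. A minimal sketch of such a query (you would need your own API key from the Google Cloud console; “YOUR_API_KEY” is a placeholder):

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; requires a Google Cloud API key

# Google's public Knowledge Graph Search API endpoint.
params = urllib.parse.urlencode({
    "query": "raspberry",
    "key": API_KEY,
    "limit": 3,
})
url = "https://kgsearch.googleapis.com/v1/entities:search?" + params

with urllib.request.urlopen(url) as response:
    data = json.load(response)

# Each hit is a JSON-LD entity; detailedDescription, where present,
# carries a short article body and an attribution URL (often Wikipedia).
for element in data.get("itemListElement", []):
    result = element["result"]
    detail = result.get("detailedDescription", {})
    print(result.get("name"), "-", result.get("description", ""))
    print("  source:", detail.get("url", "n/a"))
```

Even here, though, the API only shows what the graph contains, not how entities are selected or ranked – the ‘behind the scenes’ part remains opaque.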
Since Google is so pervasive and, I dare say, educators and students alike use it to perform a multitude of daily internet searches, should we question how the information presented to us is gathered, or blindly trust Google as an institution in the search-engine field?
Moreover, I checked the topics on my Google+ profile and found the results interesting (as seen in the photo below):
As it says, “these topics are derived from your activity on Google sites…” It is interesting to note that I had to manually add the topics ‘Anthropology’ and ‘Archaeology’ because I have a casual interest in these areas and felt that Google might cheat me out of potentially fascinating web content if I didn’t add them to my list. I also (embarrassingly) felt a little hurt that Google didn’t automatically recognize that anthropology and archaeology are interests of mine. Am I disappointed in the algorithm’s performance? It seems ridiculous for me to have feelings about this, since I’m talking about a machine that is simply “running complex mathematical formulae”, but despite this, I was affected. Is this what Knox (2015) is talking about in reference to the “co-constitutive relations between humans and nonhumans”?
References
Knox, J. (2015). Algorithmic cultures. Excerpt from Critical education and digital cultures. In M. A. Peters (Ed.), Encyclopedia of educational philosophy and theory. doi:10.1007/978-981-287-532-7_124-1
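Siemens, G. (2013). Learning analytics: The emergence of a discipline. American Behavioral Scientist, 57(10), 1380–1400. doi:10.1177/0002764213498851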