EDC Lifestream Blog Summary

[Image: robot at the finishing line]
Image Source: dailymail.co.uk

Where to start, where to start? This Education and Digital Cultures course has opened so many doors, travelled so many paths and crossed so many lines of conversation, interaction and thought that it almost feels like an injustice to simply summarise our rampant and energetic dive into our world of digital influences, both real and unreal, across a single set of weeks’ activity.
Our foray into the awakenings of AI and the lessons we (think we) should be heeding was a visual candy shop to kick off our EDC experience. Largely dystopian and seldom encouraging, only time will tell if we are indeed ready to play the delicate mandolin that is ever-developing artificial intelligence and biomechanics, and our seemingly inevitable march towards a posthumanist and eventually transhumanist state of being. Writings from the likes of Miller (2011), Hand (2008) and Hayles (1999) will prove to be either insightful readings of future trends or irrelevant commentary to be hurled on the trash pile of past inaccuracy. Only time will tell. And doubly so, we can only hope that whichever future does manifest, it does so in ways that benefit the human race profoundly more than the present does, without us sacrificing too much of our precious human compassion, consideration for our home and our ethical duty to help others. Bayne’s (2015) criticism of TEL resonated on several occasions when we were called to question the use of the digital medium.
The development of the MOOC, for instance, is at least one major stab at applying the benefits of a highly connected world to overcome the barriers to education thrown up by lack of resources, capabilities or institutional advantages granted by place of birth, race, or any other means by which humans can be separated.
The dream of open education, although noble, is not without its challenges, as was so deftly demonstrated in our ethnographic studies of the OER phenomenon. ‘Open’, as we have come to learn, is not as open as we imagine, and participants in these free learning environments face a series of obvious and sometimes not-so-obvious tests on the way to understanding, some of which arise from just being human (self-direction and motivation). EDC, in this case, pushed me to consider many more factors within the MOOC device than I had even begun to consider before. Not only that, it exposed me to innovative ways of visualising and communicating these factors that I believe benefited not just me but my fellow learners as well. And so too did their creative experiences enrich my understanding and achieve the core essence of community-based learning.

Lastly, the foray into the structured but, at the same time, somewhat manipulative world of the algorithm and its cousin, learning analytics – aptly dissected by the likes of Siemens (2013), Knox (2015) and Eynon (2013) to reveal their growing influence in all parts of our lives – indicates that these phenomena must be interrogated at every step for the sake of learners everywhere, lest we be led by the proverbial nose down a path of good intentions that could also discriminate and exclude.
Coming to the end of this course on education and digital culture, with its array of immersive and portentous experiences, I am drawn to the fact that although it is heavily imbued with layers, flows and currents of existing and future technology, it is human connectedness, feeling and perception that are still at the heart of what good teaching is all about. Even as we go about finding ways to improve those elements across time, distance, culture or language, connecting with others to learn, to share and to experience should be at the heart of every single digital endeavour we embark on.

Article: Prefab homes from Cover are designed by computer algorithms

06 Apr 2017

Specializing in backyard studios

If you’re in the market for a prefab dwelling—either as a full-time home or backyard unit—options are aplenty. What L.A.-based startup Cover wants to add to the equation is a tech-driven efficiency that makes the whole design and building process a total breeze for the customer.

As detailed in a new profile on the company over on Co.Design, Cover sees itself as more of a tech company than a prefab builder. Indeed, whereas a typical prefab buying process would begin with choosing one of a few model plans and maybe then consulting with architects to tweak the design for specific needs, Cover turns the whole design process over to computer algorithms. Co.Design explains:

Once customers begin the design process, Cover sends them a survey of about 50 to 100 questions to inform the design. It asks about lifestyle–how many people typically cook a meal and what appliances are must-haves?–and structural needs, like should they optimize one view and block another one?

The company also uses computer modeling to optimize window placement, cross-ventilation, and natural light, making use of zoning, sun-path, and geospatial data. All of these parameters are then sent to a proprietary computer program that spits out hundreds of designs that satisfy the requirements supplied.
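Cover’s actual program is proprietary, but the pipeline described above (survey answers become parameters, and candidate designs are filtered against them) can be sketched roughly as follows. Every name and constraint here is invented for illustration, not taken from Cover:

```python
import itertools

# Hypothetical survey-derived parameters (not Cover's real schema).
requirements = {"bedrooms": 1, "min_area_sqft": 400, "kitchenette": True}

def candidate_designs():
    """Enumerate simple candidate designs over a few design axes."""
    for beds, area, kitchen in itertools.product([0, 1, 2], [300, 450, 600], [True, False]):
        yield {"bedrooms": beds, "area_sqft": area, "kitchenette": kitchen}

def satisfies(design, req):
    """Keep only designs that meet every survey-derived constraint."""
    return (design["bedrooms"] >= req["bedrooms"]
            and design["area_sqft"] >= req["min_area_sqft"]
            and design["kitchenette"] == req["kitchenette"])

matches = [d for d in candidate_designs() if satisfies(d, requirements)]
print(len(matches))  # how many candidate designs satisfy the survey constraints
```

The real system presumably generates designs far more cleverly than brute-force enumeration, but the filter-against-parameters shape of the problem is the same.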

Here are a couple of key things to know about Cover’s prefabs:

  • The company is specializing in the accessory dwelling unit, which is a secondary structure on a property with an existing single-family house. These units can serve as guesthouses, in-law units, offices, yoga studios, and potentially a source of rental income.
  • While the computer will churn out a whole bunch of designs, Cover dwellings generally have a minimal modern look with an insulated steel structure, glass walls, and built-in storage.
  • When you order with Cover, the company takes care of the whole process, from coming up with a design, as described above (which takes three business days and $250), to acquiring necessary permits (two to five months, $20,000), to building and installation (12 weeks, final price contingent on the specific design). Some sample costs offered on the website are as follows: $70,000 for a guest room, $130,000 for a studio with a kitchenette, $160,000 for a one-bedroom unit, and $250,000 for a two-bedroom unit.

Via: Co.Design

Tags: #mscedc

April 06, 2017 at 11:40PM


EDC Week 11 Summary

[Image: email spam]
Image Source: MarketingLand.com

I would absolutely hate to be a celebrity. Can you just imagine the attention, the constant harassment by fans and paparazzi, having to put up with photos of your naked body all over the tabloids just because you decided to ‘let go’ for the summer? No thanks! But probably the worst part of the celebrity gig must be all the fan mail – mountains and mountains of it every day from fans who think you and they share some special bond, some who are total whack-jobs and a fair few, I bet, who want to interest you in a business deal.

These last few weeks of the EDC blogging process have been somewhat of a trench war against an unending barrage of spam, junk mail and totally unwanted commentary. Now, embarking on the clean-up before presentation for assessment, I have received at least three or four spam comments each day trying to sell me everything from Spanish condos to French language lessons and even muscle-gain formula (has this bot been stalking my Facebook page?).

As educators we often don’t even begin to think about the daily grind that most students have to bear in terms of the glut of internet marketing, five-second intros, spam, junk email and its ilk. It’s hard enough to concentrate as it is, but adding another layer of irritating marketing to the picture really chips away at the nerves after a while. Many folks just filter it out, but when you really think about it, this stuff is contributing to an animosity about the web that doesn’t particularly help in the field we are engaged in. Learners, particularly younger ones, can be disengaged at the best of times, so don’t we need to think about how much they are exposed to?

Just as restrictions on cigarette marketing helped to bring down rates of younger smokers, could such a ban assist in creating more engaged learners, even if only by one or two percent?

The onslaught continues, but with my trusty spam reporting button in hand I may yet prevail against the tide of nonsensical spammage!


Article: We Just Created an Artificial Synapse That Can Learn Autonomously

A team of researchers has developed artificial synapses that are capable of learning autonomously and can improve how fast artificial neural networks learn.

Mimicking the Brain

Developments and advances in artificial intelligence (AI) have been due in large part to technologies that mimic how the human brain works. In the world of information technology, such AI systems are called neural networks. These contain algorithms that can be trained, among other things, to imitate how the brain recognizes speech and images. However, running an artificial neural network consumes a lot of time and energy.

Image Credit: Sören Boyn/CNRS/Thales physics joint research unit

Now, researchers from the National Center for Scientific Research (CNRS)/Thales joint research unit and the Universities of Bordeaux, Paris-Sud, and Evry have developed an artificial synapse called a memristor directly on a chip. It paves the way for intelligent systems that require less time and energy to learn, and that can learn autonomously.

In the human brain, synapses work as connections between neurons. The more these synapses are stimulated, the more the connections are reinforced and the more learning improves. The memristor works in a similar fashion. It’s made up of a thin ferroelectric layer (which can be spontaneously polarized) enclosed between two electrodes. Using voltage pulses, its resistance can be adjusted, much like a biological synapse: the synaptic connection is strong when resistance is low, and vice versa. The memristor’s capacity for learning is based on this adjustable resistance.
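The behaviour described above can be caricatured in a few lines of code. This is only an illustrative toy model (the constants and the update rule are invented, not taken from the paper): voltage pulses nudge a resistance up or down, and the synaptic weight is read off as the inverse of that resistance.

```python
class ToySynapse:
    """Toy memristor-like synapse: pulses adjust resistance, weight = 1/resistance."""

    def __init__(self, resistance=10.0):
        self.resistance = resistance  # arbitrary units, floor of 1.0 below

    def pulse(self, voltage):
        """Positive pulses lower resistance (potentiation); negative pulses raise it."""
        self.resistance = max(1.0, self.resistance - voltage)

    @property
    def weight(self):
        return 1.0 / self.resistance

s = ToySynapse()
before = s.weight
for _ in range(5):
    s.pulse(1.0)          # repeated stimulation reinforces the connection
print(s.weight > before)  # True: the synaptic weight has strengthened
```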

Better AI

AI systems have developed considerably in the past couple of years. Neural networks built with learning algorithms are now capable of performing tasks which synthetic systems previously could not do. For instance, intelligent systems can now compose music, play games and beat human players, or do your taxes. Some can even identify suicidal behavior, or differentiate between what is lawful and what isn’t.

This is all thanks to AI’s capacity to learn, the only limitation of which is the amount of time and effort it takes to consume the data that serve as its springboard. With the memristor, this learning process can be greatly improved. Work continues on the memristor, particularly on exploring ways to optimize its function. For starters, the researchers have successfully built a physical model to help predict how it functions. Their work is published in the journal Nature Communications.

Soon, we may have AI systems that can learn as well as our brains can — or even better.

Author Dom Galeon April 5, 2017

Tags: #mscedc
April 06, 2017 at 03:24PM

Article: Unpaywall Is New Tool For Accessing Research Papers For Free


April 5, 2017 by Larry Ferlazzo

As anyone who has tried to pursue even a little bit of academic research can attest, publishers charge an arm and a leg for access to studies if you are not part of an institution that subscribes to their journals. And the authors of those studies don’t even get any of that money!

Last year, Sci-Hub broke through that barrier in one attempt (which may or may not be legal) to create more access – see The Best Commentaries On Sci-Hub, The Tool Providing Access to 50 Million Academic Papers For Free.

Today, another option was unveiled.

Today we’re launching a new tool to help people read research literature, instead of getting stuck behind paywalls. It’s an extension for Chrome and Firefox that links you to free full-text as you browse research articles. Hit a paywall? No problem: click the green tab and read it free!

The extension is called Unpaywall, and it’s powered by an open index of more than ten million legally uploaded, open access resources.

Apparently, many institutions now require their faculty to upload their published papers to their libraries, and that is a primary source for Unpaywall’s index.
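The core idea, checking an open index of legal full-text copies before giving up at a paywall, can be sketched like this. The DOIs and URLs below are made up for illustration; the real extension queries a live web service rather than a local dictionary:

```python
# Hypothetical open-access index keyed by DOI (illustrative data only).
OA_INDEX = {
    "10.1000/example.001": "https://repository.example.edu/papers/001.pdf",
}

def find_free_fulltext(doi):
    """Return a legal open-access URL for this DOI, or None if none is indexed."""
    return OA_INDEX.get(doi)

url = find_free_fulltext("10.1000/example.001")
print(url is not None)  # True: an OA copy was found, the "green tab" would link here
print(find_free_fulltext("10.1000/paywalled.999"))  # None: the reader stays at the paywall
```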

I just tried it and it seems to work fairly well…

Tags: #mscedc
April 06, 2017 at 03:20PM

Article: Impressive Adobe Algorithm Transfers One Photo’s Style Onto Another


Mar 29, 2017


Two pairs of researchers from Cornell University and Adobe have teamed up and developed a “Deep Photo Style Transfer” algorithm that can automatically apply the style (read: color and lighting) of one photo to another. The early results are incredibly impressive and promising.

The software is an expansion of the tech used to transfer painting styles like Monet or Van Gogh onto a photograph, as the app Prisma does. But instead of a painting, this program uses other photographs for reference.

“This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style,” says the rather technical abstract of the Deep Photo Style Transfer paper.

Put more plainly: when you put in two photographs, the neural network-powered program analyzes the color and quality of light in the reference photo, and pastes that photo’s characteristics onto the second. This includes things like weather, season, and time of day—theoretically, a winter’s day can be turned into summer, or a cloudy day into a glorious sunrise.
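The paper’s actual method is a deep neural network, but the basic “match the colour statistics of the reference” idea can be illustrated with a much simpler classic technique: per-channel mean and standard-deviation matching. This sketch is emphatically not the Deep Photo Style Transfer algorithm, just a crude stand-in for the principle:

```python
import statistics

def match_channel(content, reference):
    """Shift and scale one colour channel of the content image so its mean and
    spread match the reference channel (a crude global colour transfer)."""
    c_mean, r_mean = statistics.mean(content), statistics.mean(reference)
    c_std = statistics.pstdev(content) or 1.0
    r_std = statistics.pstdev(reference) or 1.0
    return [(x - c_mean) * (r_std / c_std) + r_mean for x in content]

# Tiny fake single-channel "images": a dark content shot, a bright reference.
content_channel = [10, 20, 30, 40]
reference_channel = [100, 120, 140, 160]

out = match_channel(content_channel, reference_channel)
print(round(statistics.mean(out)))  # 130: the content now matches the reference mean
```

A real implementation would work in a perceptual colour space and, as the paper stresses, would also have to respect scene structure rather than apply one global shift.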

The team’s early examples show the program in action. So this original photo:

Plus this reference photo:

Equals this final photo:

It’s important to note that the software does not alter the structure of the photo in any way, so there’s no risk of distorting the lines, edges or perspective. The entire focus is on mimicking the color and light in order to copy the “look” or “style” of a reference photograph onto a new shot.

Since this is a lot easier said than done, the program has to intelligently compensate for differences between the donor and receiving image. If there is less sky visible in the receiving image, it will detect this difference and not cause the sky to spill over into the rest of the original shot, for example.

The software even attempts to “achieve very local drastic effects,” such as turning on the lights on individual skyscraper windows, all without altering the original photo by moving windows around or distorting edges.

In the future, a perfected version of this technology could make its way into Photoshop as a tool, or run as a separate program or plug-in. Not that you should bank on this tech fixing the photos from your upcoming trip; like any other new technology, there is work to be done.

“The study shows that our algorithm produces the most faithful style transfer results more than 80% of the time,” the paper cautions. So maybe you can’t change Ansel Adams’s Moonrise, Hernandez into a Sunrise, Hernandez, but you get the picture (no pun intended), and it is very promising.

Tags: #mscedc

March 30, 2017 at 12:10AM

Week 10 – Phew!!

Source: https://performancestreet.wordpress.com

Gauging the reaction from my fellow participants, the last few weeks have been a veritable sprint to the finish! And while this was a collective experience and learning event, there was, to my mind, an underlying sense of competitiveness about it.

The Twitter debates and questions in Week 9 were the equivalent of an 800m dash, with users jockeying for position, looking for insightful advantage and generally trying to beat out some of the other participants. I alluded to some of these elements in my previous post.

But this should not come as a surprise. Humans are by their very nature competitive and will continuously seek out advantages, even as part of a harmonious and cooperative societal structure. That tiny bit of advantage gained, or that minor piece of recognition, has a very self-satisfying feel to it and for many is rather addictive.

However, it appears that competitiveness has been under attack in academia for some time, labelled an unhelpful by-product of the quest for performance. It is constantly being managed or curtailed in some way and wrapped up in sickly sweet quips like ‘You’re only competing against yourself’. Why are we fighting this? The ultimate contradiction must surely be gamification, a massive byword in digital education these days.

Knox alludes to this partly in his Community Cultures piece: the emphasis has turned from learning as an individual internalisation to one based primarily on a social construct.

So while this data has been very revealing about us and our community, what else is it saying about us as individuals and about what motivates us – the deeper layers?

Ultimately, learning analytics has been devised to improve the performance of each individual learner, but are we maybe ignoring one of its biggest advantages? Furthermore, if we can develop sophisticated analytical and insightful measures to enhance performance in learning, can’t we also find ways to apply them to the fundamental human instinct of competition, while simultaneously breeding out its most distasteful parts?

Data Anamorphs in the use of LA

The Ambassadors – 1533, Hans Holbein the Younger

In critiquing our EDC Week 9 Twitter discussion I was hoping to draw out some easy-to-read trends and findings. However, on closer inspection, it appears that our little study on the use of data has been anything but easy to understand. Without taking a statistical viewpoint, how well does this exercise really demonstrate the real use of data as a means by which to adjudge performance or even participation? Taking it further: if, as educators, we were to use the data as a form of assessment, could we be certain that we are indeed seeing the full picture? To my mind the mini-study of our activity is somewhat like an anamorph – ‘A distorted or monstrous projection or representation of an image on a plane or curved surface, which, when viewed from a certain point, or as reflected from a curved mirror or through a polyhedron, appears regular and in proportion; a deformation of an image’ (Source: anamorphosis.com).

I offer the following to validate this view:

Volume of Tweets

User phillip_downey’s top count of 70+ tweets put his score well above even the second-placed user on the list. Was this part of an ulterior motive to ensure the highest number, or was there a genuine accompanying development or promotion of learning or capability? Without a demonstrable mechanism to determine whether the latter is the case, the volume achieved does not indicate anything other than a sort of ‘gaming’ of the process. This data, in the hands of the LA-uninitiated, could be very misleading.

Top Words

It is interesting to note that words 9 and 20 (I’m and I’ve) on the top-words list are both contractions of the pronoun ‘I’. Users, it shows, are continuously internalising to understand all that is presented through the online tweet-based discussion. But what is this saying about us as an online community that has been interacting at length these past 9 weeks? Where are the ‘us’, ‘we’ and ‘we’ll’? We are, it could be posited, still islands in the vast connected ocean of the web. Maybe we have become a chain of common, closer islands, but islands we remain. What does this say for the theory of a community of learning?
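The kind of simple counting behind both the volume and top-words observations is easy to reproduce. A minimal sketch (the tweets and usernames below are invented, not our actual Week 9 data) shows how the raw figures fall out, and how little they say by themselves:

```python
from collections import Counter

# Invented sample tweets; usernames and text are illustrative only.
tweets = [
    ("phillip_downey", "i'm sure i've seen this algorithm before"),
    ("phillip_downey", "i'm tweeting again for volume"),
    ("another_user", "we should discuss this as a community"),
]

# Tweet volume per user: says who tweeted most, not who learned most.
tweets_per_user = Counter(user for user, _ in tweets)

# Top words: first-person contractions surface immediately.
top_words = Counter(word for _, text in tweets for word in text.split())

print(tweets_per_user.most_common(1))
print(top_words["i'm"])
```

Even this toy version makes the anamorph point: the counts are accurate, but without context they invite exactly the misreadings described above.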

Sources of Tweets

Tweetdeck was by far the most popular application of choice by which to receive, view and disseminate tweets. Although I have made use of it in the past, I didn’t on this occasion and was limited to the fourth most-used medium, Twitter for Android. Does technical supremacy (a bigger gun?) show how it can provide a medium for greater tweet volumes? (Quick! Someone call CSI Miami to cross-reference Source with Volume of Tweets.) I think this points to a potential danger in the real use of LA: that administrators assume all users are the same in terms of social status, wealth, culture and behaviour. Where is the social study of the data, and what will it reveal? Is LA only good for these people or those kinds of learners? These are the social and educational inequalities described by Eynon (2013). Discrimination, as we have learned, can be automated too.

User Mentions

Let’s be honest, the facilitators hold the centre and are critical to the success of this exercise, as is demonstrated. Can we claim to be a high-functioning learner body with a maturity level to match? Personally, I’m not that confident that we could have pulled off this exercise as well if Jeremy and James hadn’t led with the questions. But, to be fair, that was the brief, so perhaps that’s a bit harsh? What is positive, though, is that this exercise demonstrates to me just how important the modern teacher is and just what an effect they have on guiding the development of thinking and learning on the web.


Hashtags

#mld2017? #immersivetechnologies? #totallybroke? I’m confused – was this part of my discussion stream? Confusing or unaccountable data that I can’t relate to reveals that either I have missed out on a large section of learning or an important experience, or it is totally irrelevant – which is it? Having some inkling of what should be revealed about activity in the data is important, no? Isn’t that the point of LA? ‘Algorithmic cultures described a current phase in which automated computer operations process data in such a way as to significantly shape the contemporary categorising and privileging of knowledge, places and people’ (Knox, 2015).


From the graph provided, it appears one or two tweets from Crafty_AI had some major bang for the buck. How should this be considered in the greater context of our data? What if one insightful comment, an influential user’s action or even a minor collective action could skew an entire reading of LA to the point where administrators or facilitators adjust course on a learning programme in response? Are we even comfortable potentially leaving this to more competent AIs in future who could do the same?

I think this, and all of the above, points to the fact that we don’t really know enough about what we see and create in our own data (Knox’s Abstracting Learning Analytics (2014) – the abstract art angle personified). Just as the anamorph is distorted and misshapen from our current viewpoint, we still, perhaps, need to develop a method of assuming an oblique vision so that its true representation comes into view.

EDC Week 9 Summary

[Image: Author’s own]


For Data decryption, copy the above text and go to:
Paste into Input text box

Use the 'Decrypt' option

Click in Output area for decryption

Article: Google’s new algorithm shrinks JPEG files by 35 percent


David Lumb, 17 Mar 2017

For obvious reasons, Google has a vested interest in reducing the time it takes to load websites and services. One method is reducing the file size of images on the internet, which the company previously pulled off with the WebP format back in 2014, shrinking photos by 10 percent. Its latest development in this vein is Guetzli, an open-source algorithm that encodes JPEGs that are 35 percent smaller than currently produced images.

As Google points out in its blog post, this reduction method is similar to its Zopfli algorithm, which shrinks PNG and gzip files without needing to create a new format. RNN-based image compression like WebP, on the other hand, requires both the client and the ecosystem to change to see gains at internet scale.

If you want to get technical, Guetzli (Swiss German for “cookie”) targets the quantization stage of image compression, wherein it trades visual quality for a smaller file size. Its particular psychovisual model (yes, that’s a thing) “approximates color perception and visual masking in a more thorough and detailed way than what is achievable” in current methods. The only tradeoff: Guetzli takes a little longer to run than compression options like libjpeg. Despite the increased time, Google’s post assures that human raters preferred the images churned out by Guetzli. Per the example below, the uncompressed image is on the left, libjpeg-shrunk in the center and Guetzli-treated on the right.
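The quantization trade-off described above can be illustrated in miniature: coarser quantization steps zero out more transform coefficients (a smaller file) at the cost of reconstruction error (lower visual quality). This toy example is not Guetzli’s psychovisual model, just the generic principle it tunes, and the coefficient values are invented:

```python
def quantize(coeffs, step):
    """Round transform coefficients to multiples of `step` (coarser = smaller file)."""
    return [round(c / step) * step for c in coeffs]

# Fake DCT-like coefficients for one image block, large to small.
coeffs = [310, 48, 22, 9, 4, 2, 1, 1]

for step in (4, 16, 64):
    q = quantize(coeffs, step)
    nonzero = sum(1 for c in q if c != 0)               # proxy for compressed size
    error = sum(abs(a - b) for a, b in zip(coeffs, q))  # proxy for quality loss
    print(step, nonzero, error)
```

As the step grows, the nonzero count drops and the error rises; Guetzli’s contribution is choosing where on that curve to sit using a detailed model of human colour perception.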

Tags: #mscedc
March 17, 2017 at 04:10PM