In the readings and discussion so far this week I have found some interesting information about learning analytics. The basic concepts feel familiar, as they reflect what I do every day in trying to determine student success, predict failure, and decide what I might change in my own lesson planning to influence either outcome. In my readings, I found an interesting exchange between two of our reading's authors, George Siemens and Mike Sharkey. Broadly, the discussion forum was focused on the variable definition of learning analytics and how one's chosen definition could be applied. I have included almost all of the discussion between Sharkey and Siemens, only editing out what I determined (if I am allowed to do so for this exercise) to be irrelevant at this point.
I have included a large part of their discussion because I wanted a record, in one place, of the context of what Sharkey and Siemens were talking about. I found it very applicable to how I and others in my field approach academics and how we define success or failure in the classroom. This is a struggle I deal with throughout each school year as I move with the ebb and flow of students' accomplishments on their assignments and assessments.
The following discussion took place in the Learning Analytics Google Group in August 2010 (the exchange below is conveyed verbatim and has not been edited in terms of grammar, syntax, or emoticon use):
I wanted to add a dimension to the discussion, specifically around
defining success. In the descriptions of learning analytics we talk
about using data to “predict success”. I’ve struggled with that as I
pore over our databases. I’ve come to realize there are different
views/levels of success:
In its simplest form, academic success means getting a good/passing
grade. That works for a 15-week course since you can use the first
few weeks of data to predict the remainder of the course. However, I
work in an environment where courses are 5, 6, or 9 weeks long (we
teach courses one- or two-at-a-time in serial). That prevents me from
using data within a course to predict the outcome for that student.
There’s a second part to this argument about whether good grades =
success. That’s a discussion we need to have over a beer so I’ll pass
for now. 😉
Another academic metric is learning outcomes. Look at assessment
data and use mastery of outcomes as a gauge of success. If the
institution does a good job measuring learning outcomes, this is a
If we can’t measure success within a course, we might look at it
across the student’s program. From an academic standpoint, that means
GPA. That will just lead us to the same discussion about whether or
not grades are a good measure of success. From a practical
standpoint, success may mean “is the student still attending”. Are
they progressing through the program in a timely fashion? This isn’t
a good qualitative measure, but the argument can be made that if the
student is still attending, there’s a better chance they will succeed
in the program (especially when you compare that to students who have
stopped attending and have zero chance of graduating).
“Are they attending” is aligned with engagement. Is the student
actively engaged in the course? We can measure this by attendance
(did they show up) or by some alternate engagement metric (e.g. number
of actions in the course LMS). We can even get more detailed on the
progression metric and look at two dimensions:
– Persistence (when is the last time we heard from the student)
– Density (over the last x weeks, what percent of the time has the
student been engaged)
I have started to model metrics and I haven’t come to any solid
conclusions yet. It really boils down to who you are and how you
define success. Different parts of the institution will have
I hope to chat more with you at the conference in February.
Director of Academic Analytics
University of Phoenix
Hi Mike – thanks for contribution. Last year, I met someone from U of Phoenix (can’t remember how it was!) and they mentioned some of the current – and planned future – use of analytics at UoP. It was quite advanced from what I’ve seen at other institutions. Analytics require explication. Online courses, programs, and institutions are uniquely placed to be early trail-blazers of analytics.
Good question about success. Success has come up a few times already and, as you note, will be different in different situations and institutions. Or learners, for that matter. For some learners, simply passing a course could be defined as success. For others, only top grades would be seen as success.
Your points about persistence and density form part of the research that needs to be done around analytics. What learners characteristics contribute to success (however it is defined)? Which signals or deviation from those characteristics can we observe early enough through analytics to intervene to ensure success? Some great areas of research and exploration!
Mike (and others from the perspective of their institutions) – would you mind sharing a bit more about how you use analytics at UoP? What is working well? How are learners responding? What technology are you using for data collection and analytics? What role does visualization play?
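Sharkey's two engagement dimensions, persistence (time since the student was last heard from) and density (share of recent weeks in which the student was active), map onto simple computations over activity timestamps. The sketch below is my own illustration of those definitions, not anything from the exchange; the use of login dates, a seven-day week bucket, and a four-week window are all assumptions.

```python
from datetime import date

def persistence(activity_dates, today):
    """Days since the student's most recent recorded activity.

    Returns None if the student has no recorded activity at all.
    """
    if not activity_dates:
        return None
    return (today - max(activity_dates)).days

def density(activity_dates, today, weeks=4):
    """Fraction of the last `weeks` weeks with at least one activity.

    Each activity is bucketed into a week by its age in days; a week
    counts as "engaged" if it contains any activity.
    """
    active_weeks = set()
    for d in activity_dates:
        age = (today - d).days
        if 0 <= age < weeks * 7:
            active_weeks.add(age // 7)
    return len(active_weeks) / weeks

# Hypothetical LMS login dates for one student
today = date(2010, 8, 20)
logins = [date(2010, 8, 18), date(2010, 8, 11), date(2010, 7, 28)]
print(persistence(logins, today))  # 2 -> last heard from 2 days ago
print(density(logins, today))      # 0.75 -> active in 3 of the last 4 weeks
```

Even this toy version shows why the two dimensions are complementary: a student with low persistence (active yesterday) can still have low density (silent for the three weeks before), and it is the combination that flags a change in engagement.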
In conclusion, sort of, when Siemens mentioned that the types of analytics being discussed would work well for online courses, it reminded me of the evaluations we completed in the Course Design for Digital Environments course at the University of Edinburgh just last fall. We had to consider various analytical frameworks to create operable and meaningful learning outcomes for the courses we designed. Of course, these outcomes were both dependent on and determinant of the curriculum and activities we included in the course structure. It is easy to see, from my perspective, how difficult it is to create and implement a solid strand of outcomes while still addressing the many facets of learning and teaching that each teacher and student faces every day.