What's been missing in MOOCs
4th September 2018
A step towards decreasing dropout rates.
Learning online is an awesome idea. It's flexible, easy to access, cheap (or free), and offers an enormously wide range of knowledge. Who wouldn't want to learn life-changing lessons from wherever they choose?
When Massive Open Online Courses (MOOCs) first appeared, they made big promises to revolutionize education broadly. But after the initial hype died down, the general consensus is that MOOCs have failed to deliver on that promise.
It's no secret that MOOCs face huge dropout rates and seem to have a hard time improving them. In fact, on average, only 15% of enrolled students end up completing their courses, according to an ongoing scientific study.
To save the MOOC and improve it for future students, it's crucial to form a solid understanding of what is causing dropouts to such a great extent and come up with a working solution.
Since the problem was identified a couple of years back, researchers have been studying this phenomenon to find the reason behind it. Many theories have been proposed, but a common theme is finally starting to materialize. Several independent studies have identified two very decisive factors linked to course completion:
To a large degree, MOOC drop-offs are related to:
1. Instructor-student interaction, and
2. Course content.
The most recent study among these, Hone et al (2016), claims that a staggering 79% of dropouts can be traced to these two factors.
So how are these factors handled in physical colleges? Surely there must be something to learn from that?
Comparing how student-instructor interaction and course content are handled at on-campus colleges versus MOOC providers is hard due to the vast difference in scale. On-campus instructors often keep an ongoing dialogue with students in which feedback and information flow both ways. MOOC professors, on the other hand, often struggle both to give and to receive feedback.
As the instructor-student ratio in a MOOC often exceeds 1:10,000, maintaining a personal relationship is simply not a realistic option. New and more effective means of communication are needed to enable interaction at such a ratio.
Today, most MOOC providers rely on a communication system of peer-feedback between students, discussion forums, and surveys.
Feedback is one of the most important aspects of communication in this setting. Providing students with relevant feedback is in fact probably the biggest challenge in online courses, and therefore also one of the most studied areas in MOOC research.
Feedback flowing in the reverse direction, from students to instructors, is, on the other hand, largely neglected in the research. Online surveys are widely used both on campus and in MOOC settings, and they do a good job of monitoring satisfaction scores. But not much else.
What is missing in these surveys are open-ended comment sections.
Open-ended comments from students to instructors have been identified as one of the most important sources of feedback on course and instructor quality and can often give direct and actionable suggestions on how to improve the experience from the students' perspective.
MOOC providers very rarely collect open-ended feedback, and the reason is simple: there hasn't been a reasonable way of utilizing qualitative feedback at such a scale before.
And that's too bad. Open-ended question sections are where students can really express what is going well in a course and what can be improved at a detailed level, without being restricted by Likert scales (I strongly agree, I agree, and so on). Interestingly enough, in the Hone et al (2016) study, the researchers did use open-ended comment sections (from a sample of students) to dig deeper into the problems and gained some highly useful insights. Just look at this sample of collected comments:
"I successfully completed my course because the content was just right"
"I did not continue with the Coursera course as I was demotivated due to poor instructor's feedback"
"The instructor didn't engage us in discussions"
"Contents are overwhelming and not given in small chunks"
"Course was not easy to scan in terms of content"
This is invaluable feedback that can form the basis for low-effort, high-impact changes in terms of retention.
So, what if you could collect detailed feedback and extract the most impactful topics from any number of students on MOOC-platforms?
Enter Hubert.ai, a chatbot interview facilitator built to replace surveys with conversations and intelligent text analytics.
Hubert engages the students in a short discussion about what they thought of the course and the instructor and is able to understand the context and follow up with more probing questions and intelligent replies.
Once done, Hubert's AI-driven text analytics engine steps in to analyze and categorize responses into topics. That way it's easy to see which areas are discussed most and with what sentiment. Strong and improvable areas are identified and presented clearly to the instructor.
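To make the idea concrete, here is a deliberately simplified sketch of what topic-and-sentiment categorization of open-ended comments can look like. This is a toy illustration with made-up keyword lists, not Hubert's actual engine, which uses far more sophisticated text analytics:

```python
# Toy sketch (NOT Hubert's real pipeline): tag open-ended course comments
# with topics via keyword matching, plus a crude word-list sentiment score.

# Hypothetical topic and sentiment vocabularies for illustration only.
TOPIC_KEYWORDS = {
    "content": {"content", "contents", "material", "chunks"},
    "instructor": {"instructor", "feedback", "discussions"},
}
POSITIVE = {"successfully", "right", "good", "great"}
NEGATIVE = {"poor", "demotivated", "overwhelming", "not"}

def analyze(comment):
    # Normalize to a set of lowercase words, stripping basic punctuation.
    words = {w.strip('.,"\'').lower() for w in comment.split()}
    # A comment belongs to every topic whose keywords it mentions.
    topics = sorted(t for t, kw in TOPIC_KEYWORDS.items() if words & kw)
    # Sentiment = positive word count minus negative word count.
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return topics, sentiment

if __name__ == "__main__":
    for c in [
        "I successfully completed my course because the content was just right",
        "Contents are overwhelming and not given in small chunks",
    ]:
        print(c, "->", analyze(c))
```

Aggregating such per-comment tags across thousands of students is what turns free-text feedback into a ranked list of topics an instructor can act on.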
At Hubert.ai, we've made it our mission to improve interaction and completion rates in online courses by introducing a new way for students to communicate feedback to instructors and make it as easy as possible for these instructors to identify where and how effective improvement can be accomplished.
Hubert is still in beta and completely free to use. Start by going to https://hubert.ai/signup to create your first evaluation, then send the link to your students. Simple as that.
Right now we are searching for more partner platforms that can help us focus development. Reach out to me at firstname.lastname@example.org if you'd like to learn more.