How Conversational Feedback Increases Data Quality: Part 1

30th October 2019
The main reason to consider conversational feedback collection is undoubtedly the increase in data quality. Sure, response rates are higher too, but the gain in data quality easily outweighs the gain in response rates, any day of the week.
When building Hubert.ai, our main focus has always been to step as far away from surveys as possible. Just taste the word 'survey' and you'll know why. It leaves a foul residue in your mouth and the effects of continued exposure to surveys are pretty horrific.
We've used a lot of machine learning magic to train Hubert into mimicking the behavior of a human interview facilitator in a way that moves closer to a qualitative research method. And as you know…
. . . In conversational feedback, response rates aren't as important.
In qualitative research, it's not the response rate that determines validity. Your data is valid when no new opinions keep appearing within a given subject or question. That's because there isn't an infinite number of relevant opinions to be had within a given domain.
When you start seeing your topic groups fill up with additional respondents and not generating additional topics, that's when you've reached saturation and your data collection can be considered done.
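The saturation check described above can be sketched in code. This is only an illustrative sketch, not Hubert's actual implementation: it assumes topics have already been extracted from each response by some upstream analysis, and the window size is an arbitrary choice.

```python
def is_saturated(responses, window=20):
    """Return True if the last `window` responses added no new topic.

    `responses` is a list where each element is the set of topics
    extracted from one respondent's answer (by some upstream model,
    assumed here).
    """
    seen = set()
    last_new = -1  # index of the most recent response that introduced a topic
    for i, topics in enumerate(responses):
        for t in topics:
            if t not in seen:
                seen.add(t)
                last_new = i
    return len(responses) - 1 - last_new >= window

# Early responses introduce new topics; later ones only repeat them.
stream = [{"pay"}, {"management"}, {"pay", "hours"}] + [{"pay"}] * 20
print(is_saturated(stream, window=20))  # True: no new topic in the last 20
```

In practice you would run a check like this continuously during collection and close the study once it reports saturation, rather than aiming for a fixed response count.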
Approaching a more qualitative methodology requires a set of minimum essential skills for our chatbot Hubert.
The first half of this skillset is the collection part, which is described in this post. The second half concerns how we present the data in a way that makes sense using our internally developed text analytics.
We've focused the initial development of Hubert around two basic concepts: real-time context analysis and intelligent follow-up questions.
1. Real-time context analysis
Just as the name suggests, real-time context analysis is what enables Hubert to understand what goes on in a conversation. Just like a human interview facilitator, Hubert needs to continuously interpret incoming data and decide what to do with it. Test out how Hubert's relevance analyzer works.
Maybe the respondent mentions something particularly interesting that warrants a follow-up, a response calls for clarification, or the respondent asks a relevant question that needs answering.
It's an easy concept for humans to grasp, but a fairly complicated problem to solve using a computer. There are a plethora of factors adding to the complexity.
Just to mention a few:
- There is a large number of valid responses to every specific question.
- There is an almost infinite number of invalid responses to every specific question.
- There are many different ways of expressing the same thing.
- Some words have different meanings in different domains.
- The respondent might refer to something mentioned earlier in the conversation.
- And so on.
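To make one of these factors concrete, consider judging whether a free-text response relates to the question asked. A naive lexical approach fails precisely because the same thing can be said in many ways; the sketch below is a deliberately simple illustration of that sub-problem, not how Hubert's analysis works.

```python
def overlap_score(question, response):
    """Jaccard word overlap between question and response (0.0 to 1.0)."""
    q = set(question.lower().split())
    r = set(response.lower().split())
    return len(q & r) / max(len(q | r), 1)

# A perfectly relevant answer with zero word overlap scores 0.0,
# which is exactly why naive matching is not enough:
print(overlap_score("How is your workload?",
                    "I barely have time to eat lunch"))  # 0.0
```

A trained model has to capture the semantic link between "workload" and "barely have time", which is where the machine learning work comes in.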
Countless hours of research and vast quantities of training data have gone into reaching the level Hubert's real-time analysis is at today.
Admittedly, Hubert is still not perfect, but we've come a long way and we keep improving every day to mimic human behavior more closely. After all, the goal with Hubert is to create a feedback-giving experience that is as personal and captivating as possible.
2. Intelligent follow-up questions
Asking sensible follow-up questions is a major reason why we put so much work into the real-time context analysis which serves as a cornerstone in the functionality. In contrast to ordinary query-based bots, Hubert is a question-based bot trained to ask rather than respond.
A follow-up question could, for example, be triggered in the following events:
- The response contains no useful information, or too little of it.
- A vital piece of information is missing from the response.
- The response does not make sense in the context of the question.
- The response is ambiguous.
- The response is too general.
A lot of consideration is directed to making sure all follow-up questions are triggered in the right situation, for the right type of response and in a relevant fashion. Follow-up questions should only be issued when there's a true need for them.
The balancing act between information density and dialog flow is delicate and must be taken into account. No one wants follow-up questions on every little statement, but the data should be as dense as possible in the fundamental areas.
The past two years have gone into perfecting these two essential concepts, which now serve as a baseline for more advanced functionality and support an exceptional dialog flow and qualitative data collection.
Check out the next post covering some more advanced functionality that we've built on top of these concepts.
Hubert.ai is a Swedish start-up company on a mission to free the world from boring surveys. Our chatbot Hubert enables companies to automatically conduct chat interviews at large scale. Using AI, Hubert can ask relevant follow-up questions, ask for clarification, and probe deep into the mind of your target group. To present the results, Hubert features a text analytics engine that can present findings in a very comprehensible way.