The availability of online higher education marks an extraordinary moment in the history of learning. Those who sell what are essentially correspondence courses are understandably driven by quotas: they need to fill virtual seats to maintain or increase revenue. Students want easier – and cheaper – access to degrees and certificates, so the online model works for them, and faculty love the extraordinarily generous “record once/play often” compensation model. In fact, faculty fantasize about – and monetize – teaching only online, where their visits to a real campus are few and far between and they never have to prepare long, consistent, real-time presentations to actual students at a designated time and place. It all “works.” It’s a sea change in higher education, for sure.
But what about quality? Who’s asking the tough questions about effectiveness? Are the online degrees and certificates students earn as valuable as the ones they would earn in an actual classroom? Are we looking at the wrong metrics? Can generative AI tutors help?
Convenience Versus Learning Metrics
Most of the metrics we use to compare online-versus-in-person education are convenience metrics:
- Cost
- Admission Requirements
- Accreditation
- Flexibility
- Time to Degree
- Professional Networking
- Work Experience
- Time on Campus
- Structure
- Availability of Internships
- Demographic Diversity …
What’s interesting about these kinds of metrics is that they do not focus on learning outcomes – how much students actually learn from each experience. Bloom developed a set of learning metrics decades ago:
“Bloom’s Taxonomy of Educational Objectives (1956) is one traditional framework for structuring learning outcomes. Levels of performance for Bloom’s cognitive domain include knowledge, comprehension, application, analysis, synthesis, and evaluation. These categories are arranged in ascending order of cognitive complexity where evaluation represents the highest level.”
Here are the metrics:
“Knowledge (which represents the lowest level of learning)
To know and remember specific facts, terms, concepts, principles or theories
“Comprehension
To understand, interpret, compare, contrast, explain
“Application
To apply knowledge to new situations to solve problems using required knowledge or skills
“Analysis
To identify the organizational structure of something; to identify parts, relationships, and organizing principles
“Synthesis
To create something, to integrate ideas into a solution, to propose an action plan, to formulate a new classification scheme
“Evaluation (which represents the highest level of learning)
To judge the quality of something based on its adequacy, value, logic, or use”
There are lots of other – though similar – learning outcome metrics out there, like:
“Intellectual skills
This type of learning outcome enables the learner to understand rules, concepts, or procedures.
“Cognitive strategy
In this type, the learner uses his or her thinking abilities to make strategies and organize, learn, think, and behave.
“Verbal information
“Motor skills
“Attitude”
Note that learning outcomes focus on what students actually learn and what they can actually do when instruction ends. These metrics have nothing to do with flexibility, cost, time to degree, or any of the other criteria used today to compare in-person versus online education – the criteria over which online providers fight among themselves.
In-Person Versus Online Learning Effectiveness
Colleges, universities and other institutions of higher learning are fighting it out on the convenience metrics field, not on the learning outcomes field. Are they in the right game? Or should they also be fighting about learning outcomes? Maybe, maybe not, depending on the answers they want.
For example, Callister and Love describe a class on negotiations:
“This study asks the question: ‘Can skills-based courses taught online achieve the same outcomes as face-to-face courses in which the instructor and students interacting in real time may have higher levels of interaction, thus potentially facilitating higher levels of skill improvement? If so, what are the critical success factors that influence these outcomes?’ These questions are examined by comparing four classes in negotiations (two face-to-face and two online) taught by the same professor. The courses were designed to be as similar as possible except for their delivery method. Results indicate that face-to-face learners earned higher negotiation outcomes than online learners even when using the same technology.”
Similarly, Faidley describes an accounting course:
“The purpose of this quasi-experimental ex-post-facto study was to compare student outcomes from two Principles of Accounting courses both delivered in two methods of instruction: traditional face-to-face (F2F) and an on-line asynchronous format … the results indicated students performed significantly better in the face-to-face classes than the online sections.”
For CPA performance, Morgan notes:
“Programmatic-level comparisons are made between the certified public accountant (CPA) exam outcomes of two types of accounting programs: online or distance accounting programs and face-to-face or classroom accounting programs. After matching programs from each group on student selectivity at admission, the two types of programs are compared on CPA exam outcomes of graduates. Results show online or distance accounting programs have much lower average CPA pass rates than their matched face-to-face counterparts with equivalent student selection criteria. In addition, average 6-year graduation rates and average propensity to sit for the CPA exam after graduation are much lower in the online or distance accounting programs.”
Employers seem to fall in line with these results:
“This article examines employers’ perceptions of face-to-face versus online master of business administration degrees in new hire and promotion decisions. Results indicated that those making selection decisions view face-to-face degrees more positively than online degrees. This finding is considerably more pronounced for employers making new hire decisions than for those making promotion decisions.”
Note that there are also research results suggesting that online education and training can be just as effective as – and in some cases more effective than – in-person education and training. It often depends upon the domain: some subjects can be taught online just as effectively as in person. The three examples above are drawn from the business domain, but other domains produce more than acceptable learning outcomes online.
Generative AI Tutors to the Rescue
Can virtual tutors help improve the learning outcomes of online students? Listen to how the AIContentfy team describes the way ChatGPT can tutor:
“‘Personalized tutoring with ChatGPT’ refers to the use of the ChatGPT model to provide individualized instruction to students. This can be accomplished by using the model’s natural language processing capabilities to understand the student’s learning needs and tailor the instruction accordingly … one way this can be done is by using ChatGPT to generate personalized practice problems for the student. For example, a student who is struggling with a particular math concept can interact with the model, which will generate practice problems on that topic specifically tailored to the student’s level of understanding.
“Another way ChatGPT can be used for personalized tutoring is by providing students with instant feedback on their work. For example, a student who is writing an essay can use ChatGPT to receive feedback on grammar, sentence structure, and organization.
“Additionally, ChatGPT can be used to provide students with step-by-step explanations of difficult concepts. For example, a student who is struggling to understand a complex scientific theory can ask the model to explain it in simpler terms.”
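To make the tutoring pattern quoted above concrete, here is a minimal sketch of how an online provider might wire two of those capabilities – tailored practice problems and instant feedback – to a large language model API. It assumes the OpenAI Python SDK; the model name, prompts, and function names are illustrative assumptions, not a reference implementation of any provider’s tutor.

```python
# Minimal sketch of a GenAI practice-problem tutor.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# in the environment. Model name, prompts, and helpers are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_practice_problem(topic: str, level: str) -> str:
    """Ask the model for one practice problem tailored to the student's level."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model could be substituted
        messages=[
            {"role": "system",
             "content": "You are a patient tutor. Write one practice problem. Do not include the solution."},
            {"role": "user",
             "content": f"Topic: {topic}. Student level: {level}."},
        ],
    )
    return response.choices[0].message.content


def give_feedback(problem: str, student_answer: str) -> str:
    """Ask the model to critique the student's answer step by step."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a tutor. Point out errors, explain why they are errors, and suggest the next step."},
            {"role": "user",
             "content": f"Problem: {problem}\nStudent answer: {student_answer}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical usage: a business (negotiations) topic at an introductory level.
    problem = generate_practice_problem("negotiation anchoring tactics", "introductory MBA")
    print(problem)
    print(give_feedback(problem, "I would open with my reservation price."))
```

The point of the sketch is not the specific API but the design choice: the tutor sits alongside the recorded course content and adapts to the individual student, which is exactly the kind of real-time interaction the research above credits for in-person courses’ stronger outcomes.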
For those domains where online education is relatively inferior to in-person education (like business), Generative AI (GenAI) tutors should perhaps become part of the offering. While a few students can fashion their own tutors using GenAI tools, integrating tutors directly into online courses would be a significant learning-outcomes differentiator for online education providers. In fact, given the learning outcome performance of many online domains, it’s surprising that online providers have not developed and deployed GenAI tutors – especially in domains like business, and above all in business programs focused on AI education!
For example, Khan Academy has recently developed Khanmigo to help students and teachers:
“The chatbot, Khanmigo, offers individualized guidance to students on math, science and humanities problems; a debate tool with suggested topics like student debt cancellation and AI’s impact on the job market; and a writing tutor that helps the student craft a story, among other features.”
While still new, Khanmigo suggests how GenAI tutors can support education and training.
Conclusion
GenAI tools represent the future of online education – especially in domains where learning outcomes are inferior to those generated by in-person education. Will smart tutors hover around online students to check their work, watch their performance and suggest improvements? Yes.
It’s surprising that providers of online higher education have not developed, integrated and branded GenAI tutors. (The irony, of course, is that online programs offering specializations in AI have not embedded GenAI tutors into their own programs.)
The Only Important Fight
While online providers fight it out on the convenience, cost, flexibility and time-to-degree battlefield, there’s a much more important – though often ignored – fight about learning outcomes that providers should engage in. GenAI tutors can help win the learning outcomes war. If providers can make their case on both convenience and outcomes, they can differentiate themselves from the horde of providers offering only “convenience degrees.”