Leanne Riseley, in “Moving Toward Authentic Learning” (10/7/13), raises a question asked by Marilyn Lombardi in “Authentic Learning for the 21st Century: An Overview” (Educause, May 2007): “Why isn’t authentic learning more common?”
This is a good question because the approach has been around for a while, plenty of time to go viral. But it hasn't, and perhaps its origins provide a clue. It began in the medical field and seems to thrive in similar highly technical settings. One of my writing courses is technical communications, and I've naturally incorporated authentic features into it. In courses with less defined real-world counterparts, such as English and history, incorporating them may be tougher.
I don’t have a quick answer or even a good one, but I’ll take a shot and share a relatively long, twisting, and awkward one that may or may not be in the ballpark.
The theoretical underpinning for authentic learning is transfer. Schools are training grounds, and the assumption is that what students learn in classrooms will transfer to the real world. The obstacle to transfer is the gap between school and reality. Thus, the instructional issue is how to close the gap, and the assumption here is: the smaller the gap, the better the transfer.
From this perspective, on-the-job training, or apprenticeship, offers the smallest gap. At the other end is a classroom in a school that has little in common with the authentic environment, and in between lies a continuum of arrangements progressively removed from the real world.
The question for schools, then, is how to close the gap short of moving into apprenticeships. (It could be argued that apprenticeships aren't fully authentic.) Authentic learning is the compromise. However, "authentic" in this context is a misnomer: the approach is actually a hybrid, a semi-simulation (or semi-reality) that is part pretend and part real.
The real-to-school continuum leaves a lot of wiggle room in between, which translates to difficulty in assigning “authentic” to any strategy. In a sense, nearly all approaches are authentic to some extent. It’s similar to attempts to define “blended” learning. Since it’s difficult to imagine any course that’s not somehow connected to the internet, it’s probably safe to say that if a course isn’t fully online, then it’s blended.
Thus, an activity counts as authentic if students address real-world problems or are exposed to readings or videos by or featuring practitioners in the field. We could argue that such an activity isn't authentic because it's missing real-world conditions, feedback, or collaboration, but the counter could be simulations, rubrics developed by experts in the field, and input from classmates playing the role of practitioners.
If we question the absence of a finished product that’s shared with the public, we might hear that presentations were recorded and shared on YouTube or final reports were published in one of the school’s journals.
The point is that when a term such as "authentic" loses its capacity to discriminate, when it becomes so inclusive that it can be made to apply to almost any strategy, it becomes less useful.
Thus, to answer the question, I’d say “authentic learning” isn’t more common because people don’t know what it really means. On the one hand, nearly all learning is authentic; on the other, all learning, short of full engagement in the field, is not authentic. All that gray stuff, that terra incognita, in between is the problem.
Perhaps a better way to approach authentic learning is to say that it’s an attitude toward teaching that makes the most of the instructional environment to simulate real-world conditions. In this view, “instructional environment” is variable and comprises a wide range of factors.
OK, that’s my shot. I’d like to hear yours.