|So, what are we measuring? Is it really learning?|
When considering David Merrill's idea that educational research needs to be more tightly aligned with the scientific method (Reiser & Dempsey, 2007), I feel this would be a good idea, but only to the extent that it is possible. In my opinion, empirically based, randomized controlled experimentation is at its most rigorous when it follows, to the letter, the rules of scientific experimental design. Moreover, the scientific method is at its best when the dependent variable can be measured accurately. The approach grew out of scientists' ability to measure physical entities objectively and precisely. When the social sciences use this experimental method to study learning phenomena, however, results are confounded because the dependent variable is, at best, a fuzzy measurement of the phenomenon in question. How does learning occur? There is no general consensus on this question, and there are no quantitative measures that can reliably and validly capture the moment of learning. In some cases, then, using quantitative, empirically driven experimental designs to understand a learning phenomenon can be falsely satisfying and even misleading. This is not to say that experimental design principles have no place in educational research, but their contextual limitations should be acknowledged and research methodologies adjusted accordingly. We are simply not at the point in the evolution of our field where we can be confident that numbers accurately measure learning phenomena.
Learning is a messy process, highly contextual and driven by a constellation of socio-political factors. To explore this messiness, we do not have the luxury of settling on any one research method. Educational researchers should come to terms with this idea and be willing to take on the herculean task of knowing when and where to apply the full range of research methods available to us, so that our research is relevant to solving the problem at hand. Given this challenge, it is also worth considering collaborative research, in which people with expertise in one research approach or another can lend their talents to the corresponding aspects of a given research initiative.
|Might we leverage gaming environments like this one (Minecraft) to meet both learning goals and educational research goals?|
If we can agree that quantitative measures alone are insufficient to capture learning phenomena, then we are tasked with striking a balance between highly disciplined, rigorous experimental research and pure exploration, free of the confines (and the validity and reliability measures) of experimental designs best suited to measurable, physical phenomena. This will require creativity and an open mind when deciding how best to understand how learning occurs. Although there are (and should be) a variety of approaches to educational research, I am personally drawn to game-like environments as one of the best ways of learning, and certainly as one of the best ways of measuring how learning occurs (Reeves, 2011). A game environment, especially a computer-based one, affords the application of a wide range of both quantitative and qualitative research approaches. Perhaps for the first time, we will be able to analyze quantitative data that closely aligns with the learning phenomenon being explored. If these numbers are triangulated with qualitative measures, we may come close to having the best of both worlds. Still, it is important to bear in mind that learning is a messy business, and we should guard against letting numbers alone drive conclusions about learning phenomena.
|Teachers and educational researchers working together is a win for the teacher, the researcher, and the learner.|
As an aside, I was happy to see the mention of design tools being created that incorporate principles of Instructional Design, so that everyday users can benefit from I.D. principles. When I graduated with a master's degree in Instructional Design & Development, I posted on this blog about such a tool. My point in that post was that, although I had spent much time and effort to thoroughly understand Instructional Systems Design (ISD), I am always open to automation that might make my skill set obsolete. When disruptive obsolescence like this happens, it is important for practitioners in our field (perhaps in all fields) to evolve along with technological progress. Still, software such as this can only assist the user, and there is a danger that dependency on it will remove the thinking processes from the designer. Worse still, such software may severely limit the user's potential growth as a designer by imposing limiting structures on the creative process. It therefore seems wise for educational researchers to test automated tools like these in order to revise and improve their effectiveness, as well as to illuminate cautions and concerns for those who use them.
Reeves, T. C. (2011). Can educational research be both rigorous and relevant? Educational Designer, 1(4).
Reiser, R. A., & Dempsey, J. V. (Eds.). (2007). The future of instructional design (Point/Counterpoint). In Trends and issues in instructional design and technology (2nd ed., pp. 235–351). Upper Saddle River, NJ: Merrill/Prentice Hall.