
Is the Promise of Open Educational Resources Fulfilled? We Need to Get Sherlockian

In an age of misleading social media and “fake news”, I took solace in the fact that science has a peer review process that separates the wheat from the chaff. It was reassuring to think that authors had sifted and winnowed through reams of research to present key findings. I realize I was letting myself off the hook too easily. It is now clear to me that we need to read even peer-reviewed work with the keen eye of a Sherlock Holmes, attentive to detail and nuance. Reading with a critical eye and between the lines is essential, especially when millions of dollars and student learning are at stake.

Take the case of Open Educational Resources (OERs): free textbooks and course materials. As enrollment pressures and funding shortfalls continue to shape higher education decision making, many schools are switching to OERs. Clearly, free is cheaper than the alternatives. Clearly, more students, especially those from lower socioeconomic backgrounds, will be better able to afford a textbook and even education in general. But are OER textbooks as good as traditional, albeit costly, resources? It is not elementary, dear Watsons. It is too early to tell, though there is certainly promise.

Unfortunately, existing research on OERs needs a close reading. One of the first reviews of OER efficacy tests included 16 studies (Hilton, 2016). The abstract stated that “…students generally achieve the same learning outcomes when OER are utilized”. If you stopped there, you would be remiss. By the time you get to the results section, you note that only nine of the sixteen articles that made it into the review analyzed student learning outcomes; the rest focused only on self-reported perceptions of the material. That’s not all.

All nine studies had major confounds, such as method of instruction (e.g., OER sections taught online or blended while traditional texts were used in a face-to-face class). Some studies switched exams between comparisons and some changed course design (e.g., moved to a flipped model). Most studies acknowledged that the type of textbook was not the only factor that changed. Only one of the 16 studies minimized confounds by using a common pretest and common exams, and it showed no statistically significant differences with the use of an OER for chemistry (Allen, Guzman-Alvarez, Molinaro, & Larsen, 2015).

Astoundingly to the Sherlock Holmesian critical reader in me, many studies did not conduct statistical tests on differences (Hilton, 2016). One study reported that students using OER performed “slightly better”, but these differences were marginal and not statistically significant. If not statistically significant, “slightly better” does not count. Another study first reports that OER users had “higher scores on the department final exam” but then notes that the exams used were different. Again, that does not count. The bigger surprise? The authors report that no analyses were performed to determine whether the results were statistically significant. Strange.

Perhaps the biggest issue is that many large-scale studies do not use comparable exams. In one dissertation included in the Hilton (2016) review, Robinson (2015) compared classes across institutions, with results running the gamut: students did worse with OERs in some classes and showed no differences in others. Likewise, Fischer, Hilton, Robinson, and Wiley (2015) examined 15 classes with varying results (OER users did better in 4, worse in 1, and were not different in 9), but again without comparable exams.

In the one national multisite study using a common exam across classes, OER users did worse than traditional book users (Gurung, 2017), with the digital format of OERs being a major issue. One study conducted at a single institution used the same exams for OERs and commercial textbooks. While the authors were also the non-blind instructors, which does not rule out researcher bias, students using the two types of books performed similarly, and OER users did better on one of the three exams compared (Jhangiani, Dastur, LeGrand, & Penner, 2018).

This issue of benchmarking is most salient in a recent examination of OERs. This summer, I was excited to see a study examining eight undergraduate courses spanning four disciplines (Colvard, Watson, & Park, 2018). The sample size was large (21,822 students), and the finding that “OER improves end-of-course grades…” caught my interest. The researchers examined final grades (and other variables) in courses that switched from traditional textbooks to OERs between 2010 and 2016. Students using OERs had higher average scores.

Time to read between the lines. I noted that grades were aggregated over the eight courses and four disciplines. Were the exams comparable? The article did not say. When contacted, the lead author stated that the data could not be shared. This seemed to go against open science practices, so I followed up and was told that the information was proprietary and that he would check into getting permission. I am still waiting.

Data aside, the big question concerns the benchmarks used. Did the instructors change their exams to fit the book (OER or standard) they were using? The author could not speak to the question. Given that they were evaluating overall performance (the course grade) rather than individual performance on each exam, he had nothing more to add. The research methodologist in me cringes. This seems to be a major confound, and research designs need to account for it.

These methodological shortcomings across multiple studies force us to be more critical readers. OERs hold the promise of making education more attainable, but not enough of the research is well controlled. Little research focuses on what the instructor can do or does. It is likely that an effective instructor can help students learn equally well from any material, and studies where the instructor is held constant bear this out (Colvard et al., 2018; Jhangiani et al., 2018). Even though students say they prefer some books over others, differential liking does not translate into differential learning. In a national study of different book users, student scores on a common benchmark (i.e., the same quiz) did not vary (Gurung, Daniel, & Landrum, 2012). The lack of significant differences between OERs and traditional books may reflect the same notion: an instructor passionate about OER can probably ensure that students learn as much as, or more than, they would from a traditional book, a confound rarely examined.

There is promise in the use of OERs. Is it fulfilled? We do not know yet. We have to be tighter in how we assess the efficacy of such materials and more critical as readers of the research on them. We have to be aware that many administrators may read only the abstracts and wrongly conclude that it is time to mandate switches or command curricular change. I urge caution and hope to see more robust studies on this topic. In general, we can no longer take published research at face value.

 

References

Allen, G., Guzman-Alvarez, A., Molinaro, M., & Larsen, D. (2015). Assessing the impact and efficacy of the open-access ChemWiki textbook project. EDUCAUSE Learning Initiative Brief, January 2015. https://net.educause.edu/ir/library/pdf/elib1501.pdf

Colvard, N. B., Watson, C. E., & Park, H. (2018). The impact of open educational resources on various student success metrics. The International Journal of Teaching and Learning in Higher Education, 30, 262-276.

Fischer, L., Hilton, J., III, Robinson, T. J., & Wiley, D. A. (2015). A multi-institutional study of the impact of open textbook adoption on the learning outcomes of post-secondary students. Journal of Computing in Higher Education, 27(3), 159–172.

Gurung, R. A. R. (2017). Predicting learning: Comparing an open educational resource and standard textbooks. Scholarship of Teaching and Learning in Psychology, 3(3), 233-248. doi:10.1037/stl0000092

Gurung, R. A. R., Daniel, D. B., & Landrum, R. E. (2012). A multi-site study of learning: A focus on metacognition and study behaviors. Teaching of Psychology, 39, 170-175. doi:10.1177/0098628312450428

Hilton, J. (2016). Open educational resources and college textbook choices: A review of research on efficacy and perceptions. Educational Technology Research and Development, 64(4), 573-590.

Hilton, J., & Laman, C. (2012). One college’s use of an open psychology textbook. Open Learning, 27(3), 265-272.

Jhangiani, R. S., Dastur, F. N., LeGrand, R., & Penner, K. (2018). As good or better than commercial textbooks: Students’ perceptions and outcomes from using open digital and open print textbooks. The Canadian Journal for the Scholarship of Teaching and Learning, 9(1).

Robinson, T. J. (2015). Open textbooks: The effects of open educational resource adoption on measures of postsecondary student success (Doctoral dissertation).

 
