Training Effectiveness Framework

Research Assessing the Effectiveness of ICT-Mediated Learning

Posted on Dec 7, 2013 in Blog, Training Effectiveness Framework

This entry is part 3 of 5 in the series Instructional Effectiveness

An analysis of the extensive body of research compiled by Russell (1999) to assess the effectiveness of ICT-mediated learning leads to the conclusion that there is no significant difference in performance measures between learning with and without technology. A meta-analysis of over 500 studies conducted by Kulik (1994), as cited by Baalen and Moratis (2001), indicated that students receiving computer-based instruction tend to learn more in less time. Baalen and Moratis (2001) identified some interesting trends from these studies: “The preference of students for face-to-face instruction reported in the 1950s and 1960s can perhaps be attributed to their unfamiliarity to the technology. Recent research tends to show a developing preference for distance learning among post-secondary learners.” Earlier studies were designed to demonstrate that technology would not have a negative impact on learners’ performance; the goal was to prove a non-significant difference. In contrast, more recent studies have attempted to determine whether technology-based learning is more effective than face-to-face instruction. Although most of these studies report no significant difference in outcome measures, many others report achievement equal or superior to that of traditional classroom instruction.

Earlier attempts to use technology for learning were restricted to drill-and-practice and tutorial programs. With today’s enabling technologies, ICT-mediated learning engages learners in authentic learning tasks that allow them to use the technologies to communicate, collaborate, analyze data and access information sources. Although research on these innovative applications of ICTs in education is not extensive, some studies have demonstrated positive learning outcomes in support of ICTs. After reviewing the literature and research on distance education, Merisotis and Phipps (1999) concluded: “It may not be prudent to accept these findings at face value. Several problems with the conclusions reached through these studies are apparent. The most significant problem is that the overall quality of original research is questionable and thereby renders many of the findings inconclusive” (p. 3). Some of the shortcomings identified are: much of the research failed to control for extraneous variables; most studies failed to use randomly selected subjects; instruments of questionable validity and reliability were used; and many studies failed to control for reactive effects.

Brennan, McFadden and Law (2001) also concluded that “the gaps between the often rhetorical claims of ‘effectiveness’ and the reality of well-researched studies are not often bridged” (p. 64). A more recent systematic attempt to shed light on the effectiveness of e-learning was conducted by the US Department of Education in 2010. The department carried out a meta-analysis of 50 e-learning studies involving older learners and reached the general conclusion that “students in online conditions performed modestly better, on average, than those learning the same material through traditional face-to-face instruction” (US Department of Education, 2010, p. xiv). Few rigorous research studies assessing the effectiveness of e-learning for youth were found.

Many studies comparing ICT-mediated learning to traditional face-to-face instruction are also of limited relevance and value for two main reasons. First, it is impossible to establish a benchmark for making a meaningful comparison. Second, years of educational research comparing methods of instruction have failed to inform practice. Aptitude-by-Treatment Interaction research indicates that an instructional treatment interacts with a learner’s characteristics to produce differential learning gains. Snow (1976, p. 292) argued: “No matter how you try to make an instructional treatment better for someone you will make (it) worse for someone else.” Similarly, according to Messick (1976, p. 266), “No matter how you try to make an instructional treatment better in regard to one outcome, you will make (it) worse to some other outcomes.” Clearly, there is a need to develop a conceptual framework to guide research in ICT-mediated learning, and there is also an urgent need to impose more rigor on research in this area.
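To make the interaction argument concrete, the short Python sketch below uses invented numbers (a hypothetical illustration, not data from Snow, Messick or any study cited here). It shows a crossover interaction: each treatment benefits one type of learner and disadvantages the other, so the group averages are identical and a simple method-versus-method comparison reports “no significant difference” even though the choice of method matters for every individual learner.

```python
# Hypothetical illustration of an Aptitude-by-Treatment Interaction (numbers invented).
# Each method helps one type of learner and hinders the other, so the treatment means
# are identical and an average-based comparison shows "no significant difference",
# even though the choice of method matters greatly for individual learners.

scores = {
    # (learner aptitude profile, instructional treatment): mean achievement score
    ("high prior knowledge", "self-paced ICT module"): 85,
    ("high prior knowledge", "face-to-face lecture"): 70,
    ("low prior knowledge", "self-paced ICT module"): 60,
    ("low prior knowledge", "face-to-face lecture"): 75,
}

for treatment in ("self-paced ICT module", "face-to-face lecture"):
    group = [v for (profile, t), v in scores.items() if t == treatment]
    print(f"{treatment}: mean = {sum(group) / len(group):.1f}")

# Both treatments average 72.5, masking the crossover interaction.
```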

After conducting a thorough review of research on online delivery of education and training, Brennan, McFadden and Law (2001, p. 65) concluded that there are many tensions in the literature regarding the effectiveness of online teaching and learning. In an attempt to explain these tensions, Baalen and Moratis (2001) argued that assessing the effectiveness and efficiency of ICT-mediated learning using empirical research results provides only a very narrow perspective on the true value of learning technologies. They suggested that the effectiveness and efficiency of ICT-mediated learning are “emergent”. By this they meant that it is only through experimentation and experience that the true value of learning technologies can be realized.

ICT-mediated learning appears to hold great promise for achieving the goals of education for all, such as reducing poverty and promoting social inclusion. However, the integration of ICTs in education requires considerable investment in time and resources. Consequently, when planning to integrate ICTs in education and training, policy-makers should be able to draw on evidence-based information for making sound decisions. In spite of the critical importance of sound research to guide policy and practice, there appears to be a lack of valid and reliable evidence-based information in the field of learning technology. Many studies conducted during the past 70 years have failed to establish a significant difference in effectiveness between learning technology and traditional methods. While these findings tend to suggest that learning technology does not considerably improve learning, the fundamental question that remains unanswered is: “were the researchers assessing the effectiveness of ICTs or were they simply assessing the effectiveness of instructional treatments that were initially less than perfect? If the instructional treatment is weak or flawed it may lead the researcher to make either: (1) a type 1 error, that is, rejecting the null hypothesis when it is true; or (2) a type 2 error, that is, failing to reject the null hypothesis when it is false; and lead the researcher to reach false conclusions” (Chinien & Boutin, 2005).
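The statistical point in the passage above can also be illustrated with a small simulation. The Python sketch below is a hypothetical illustration; the sample sizes, effect sizes and function names are assumptions, not drawn from Chinien and Boutin or any cited study. When the true effect of an instructional treatment is weak, most simulated comparison studies fail to reject the null hypothesis (a type 2 error), so a finding of “no significant difference” may say as much about a weak treatment and low statistical power as about the technology itself.

```python
# A minimal power simulation (hypothetical illustration, not from the cited studies).
# It shows how a weak instructional treatment (small true effect), combined with a
# modest sample size, often fails to reject the null hypothesis -- a type 2 error --
# which can masquerade as "no significant difference" between delivery modes.

import math
import random
import statistics


def two_sided_p_value(a, b):
    """Approximate two-sided p-value for a difference in means (Welch-style
    standard error with a normal approximation; adequate for this sketch)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.fmean(a) - statistics.fmean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))


def rejection_rate(effect_size, n_per_group=30, trials=2000, alpha=0.05, seed=42):
    """Proportion of simulated studies that reject the null hypothesis."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treatment = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        if two_sided_p_value(control, treatment) < alpha:
            rejections += 1
    return rejections / trials


if __name__ == "__main__":
    # d = 0.0: no true effect; d = 0.2: weak treatment; d = 0.8: strong treatment.
    for d in (0.0, 0.2, 0.8):
        print(f"effect size d = {d:.1f}: rejection rate = {rejection_rate(d):.2f}")
```

Under these assumptions, a weak treatment is detected in only a small fraction of the simulated studies, whereas a strong one is detected in most of them, which is precisely why a flawed or weak instructional treatment can lead researchers to false conclusions about the technology that delivers it.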
