Blog

Research Assessing the Effectiveness of ICT-Mediated Learning

Posted on Dec 7, 2013 in Blog, Training Effectiveness Framework

This entry is part 3 of 5 in the series Instructional Effectiveness

Russell's (1999) extensive review of research assessing the effectiveness of ICT-mediated learning leads to the conclusion that there is no significant difference in performance measures between learning with and without technology. A meta-analysis of over 500 studies conducted by Kulik (1994), as cited by Baalen and Moratis (2001), indicated that students receiving computer-based instruction tend to learn more in less time. Baalen and Moratis (2001) identified some interesting trends from these studies: “The preference of students for face-to-face instruction reported in the 1950s and 1960s can perhaps be attributed to their unfamiliarity to the technology. Recent research tends to show a developing preference for distance learning among post-secondary learners.” Earlier studies were designed to demonstrate that technology would not have a negative impact on learners’ performance; the goal was to prove the non-significant difference. In contrast, more recent studies have attempted to determine whether technology-based learning is more effective than face-to-face instruction. Although most of these studies report no significant difference in outcome measures, many others report equal or superior achievement compared with traditional classroom instruction.
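For readers unfamiliar with how such meta-analyses aggregate results: each study’s outcome is typically converted to a standardized effect size before averaging. A standard formulation (general statistical background, not a formula given by the authors cited above) is:

\[
d = \frac{\bar{X}_{\mathrm{treatment}} - \bar{X}_{\mathrm{control}}}{s_{\mathrm{pooled}}}
\]

where \(\bar{X}\) is each group’s mean outcome and \(s_{\mathrm{pooled}}\) is the pooled standard deviation; a “no significant difference” finding corresponds to an average \(d\) near zero.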

Earlier attempts to use technology for learning were restricted to drill-and-practice and tutorial programs. With today’s enabling technology, ICT-mediated learning engages learners in authentic learning tasks that allow them to use the technologies to communicate, collaborate, analyze data and access information sources. Although research on these innovative applications of ICTs in education is not extensive, some studies have demonstrated positive learning outcomes in support of ICTs. After reviewing the literature and research on distance education, Merisotis and Phipps (1999) concluded: “It may not be prudent to accept these findings at face value. Several problems with the conclusions reached through these studies are apparent. The most significant problem is that the overall quality of original research is questionable and thereby renders many of the findings inconclusive” (p. 3). Some of the shortcomings identified are: much of the research failed to control for extraneous variables; most studies failed to use randomly selected subjects; instruments of questionable validity and reliability were used; and many studies failed to control for reactive effects.

Brennan, McFadden and Law (2001) also concluded that “the gaps between the often rhetorical claims of ‘effectiveness’ and the reality of well-researched studies are not often bridged” (p. 64). A more recent systematic attempt to shed light on the effectiveness of e-learning was conducted by the US Department of Education in 2010. The department conducted a meta-analysis of 50 e-learning studies involving older learners and reached the general conclusion that “students in online conditions performed modestly better, on average, than those learning the same material through traditional face-to-face instruction” (US Department of Education, 2010, p. xiv). Few rigorous research studies assessing the effectiveness of e-learning for youth were found.

Many studies comparing ICT-mediated learning to traditional face-to-face instruction are also of limited relevance and value for two main reasons. First, it is impossible to establish a benchmark for making a meaningful comparison. Second, years of educational research spent comparing methods of instruction have failed to inform practice. Aptitude-Treatment Interaction research indicates that an instructional treatment interacts with the learner’s characteristics to produce differential learning gains. Snow (1976) argued: “No matter how you try to make an instructional treatment better for someone you will make (it) worse for someone else” (p. 292). Additionally, according to Messick (1976), “No matter how you try to make an instructional treatment better in regard to one outcome, you will make (it) worse to some other outcomes” (p. 266). Clearly, there is a need to develop a conceptual framework to guide research in ICT-mediated learning, and there is also an urgent need to impose more rigor on research in this area.

After conducting a thorough review of research on online delivery of education and training, Brennan, McFadden and Law (2001, p. 65) concluded that there are many tensions in the literature regarding the effectiveness of online teaching and learning. In an attempt to explain these tensions, Baalen and Moratis (2001) argued that assessing the effectiveness and efficiency of ICT-mediated learning using empirical research results provides only a very narrow perspective on the true value of learning technologies. They suggested that the effectiveness and efficiency of ICT-mediated learning are “emergent”: it is only through experimentation and experience that the true value of learning technologies can be realized.

ICT-mediated learning appears to hold great promise for achieving the goals of education for all, such as reducing poverty and promoting social inclusion. However, the integration of ICTs in education requires considerable investment in time and resources. Consequently, when planning to integrate ICTs in education and training, policy-makers should be able to draw on evidence-based information to make sound decisions. In spite of the critical importance of sound research to guide policy and practice, there appears to be a lack of valid and reliable evidence-based information in the field of learning technology. Many studies conducted during the past 70 years have failed to establish a significant difference in effectiveness between learning technology and traditional methods. While these findings tend to suggest that learning technology does not considerably improve learning, the fundamental question that remains unanswered is: “were the researchers assessing the effectiveness of ICTs, or were they simply assessing the effectiveness of instructional treatments that were initially less than perfect? If the instructional treatment is weak or flawed it may lead the researcher to make either: (1) a Type I error, that is, rejecting the null hypothesis when it is true; or (2) a Type II error, that is, failing to reject the null hypothesis when it is false; and lead the researcher to reach false conclusions” (Chinien & Boutin, 2005).
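Stated formally, the two error types invoked above follow the standard hypothesis-testing definitions (the symbols \(\alpha\) and \(\beta\) are conventional and not used in the original text):

\[
\alpha = P(\text{reject } H_0 \mid H_0 \text{ true}), \qquad
\beta = P(\text{fail to reject } H_0 \mid H_0 \text{ false})
\]

In this context, a weak or flawed instructional treatment shrinks the true difference a study is trying to detect, making a Type II error more likely and potentially leading researchers to conclude, wrongly, that the technology itself is ineffective.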

Learn More

A Framework to Examine the Effectiveness of Computerized Training

Posted on Dec 7, 2013 in Blog

This entry is part 1 of 5 in the series Instructional Effectiveness

Two key elements must be taken into account when considering the effectiveness of technology-mediated training: instructional effectiveness and instructional efficiency. These are two elusive terms for which no precise definitions can readily be found in the literature. The difficulty in defining them is probably due to the number of factors extraneous to the material itself that confound measurement of the quality of instruction.

In previous studies, the efficiency and effectiveness of an instructional product have been used as dependent variables. Nathenson and Henderson (1980) note that research has had a very narrow focus with regard to the effectiveness of instructional materials: in many studies, effectiveness has been viewed only in terms of learning gains on post-tests. The authors argued that although improved student performance is an important element, it should not be the only indicator of instructional material effectiveness. Chinien (1990) suggests that instructional material effectiveness should be viewed within a framework that encapsulates three major elements: achievement, study time, and students’ attitude toward the material.

Achievement 

Several studies (see Chinien & Boutin, 1994) have demonstrated that the quality of instructional material can significantly improve students’ achievement on post-tests. Two indicators of instructional material effectiveness are used with respect to achievement. The first relates to the ability of the material to help a predetermined percentage of students reach a designated level of mastery on post-tests. The second is learning gain, usually expressed as the difference between post-test and pre-test scores (Romiszowski, 1986).
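Written out, the two achievement indicators look like this (symbols chosen here for illustration, not drawn from Romiszowski):

\[
G = S_{\mathrm{post}} - S_{\mathrm{pre}}
\]

where \(S_{\mathrm{post}}\) and \(S_{\mathrm{pre}}\) are a learner’s post-test and pre-test scores. The mastery criterion, by contrast, is a threshold condition, for example requiring that at least 80% of learners score 80% or higher on the post-test (the familiar “80/80” convention in instructional materials development).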

Study Time

The amount of time that students spend interacting with an instructional product is another critical element of instructional material effectiveness. Nathenson and Henderson (1980) cite many research studies that reported achievement gains at the expense of increased study time. These authors quote Faw and Waller (1976) to emphasize the critical relationship between study time and the achievement component of instructional material effectiveness: “[Since] the single most important determinant of how much is learned is probably total study time…it is hardly surprising that the manipulation which tend to extend the period of time spent in study…are in general accompanied by superior levels of learning.” There are also some studies demonstrating improved student performance on post-tests while keeping study time constant. Study time is also commonly referred to as a measure of efficiency (Davis, Alexander, & Yelon, 1974; Futrell & Geisert, 1984).
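One simple way to operationalize efficiency in this sense (an illustrative formulation, not one proposed in the sources cited above) is learning gain per unit of study time:

\[
E = \frac{G}{T} = \frac{S_{\mathrm{post}} - S_{\mathrm{pre}}}{T}
\]

where \(T\) is total study time. Under this measure, two materials producing the same gain differ in efficiency if one demands more study time, which is exactly the trade-off Nathenson and Henderson highlight.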

Attitude 

A third dimension of instructional material effectiveness is the student’s attitude toward the material. Studies conducted by Abedor (1972), Stolovitch (1975), and Wager (1980) indicate that effective instructional materials generate more positive student attitudes. On the other hand, Berthelot (1978) and Chinien (1990) found no significant differences in students’ attitudes related to the quality of instructional material. Romiszowski (1986) cautioned that novelty effects may confound measures of students’ attitudes: novelty may not only inspire negative attitudes that diminish over time, but may also generate excessive praise and enthusiasm that likewise disappear. Although research on time-on-task indicates a positive correlation between achievement and time engaged in learning tasks, time is not generally used as an independent variable in research on distance education.

The effectiveness of instructional material can thus be conceptualized within a framework of three major elements: student achievement, study time, and student attitude. All three elements are important and need to be considered collectively when assessing instructional material. Nesbit and his colleagues have developed a useful instrument for evaluating e-learning objects, which can be adapted and used for Neurogenesis. This instrument comprises nine key elements, described in the table below (Nesbit, Belfer, & Leacock, n.d.); an illustrative code sketch of the rubric follows the table.

Table 11. Effectiveness of E-learning Objects

Elements | Description of elements
Content quality | Veracity, accuracy, balanced presentation of ideas and appropriate level of detail
Learning goal alignment | Alignment among learning goals, activities, assessments, and learner characteristics
Feedback adaptation | Adaptive content or feedback driven by differential learner input or learner modeling
Motivation | Ability to motivate and interest an identified population of learners
Presentation design | Design of visual and auditory information for enhanced learning and efficient mental processing
Interaction usability | Ease of navigation, predictability of the user interface, and the quality of the interface help features
Accessibility | Design of controls and presentation formats to accommodate disabled and mobile learners
Reusability | Ability to use in varying learning contexts and with learners from different backgrounds
Standards compliance | Adherence to international standards and specifications
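To make the instrument concrete, here is a minimal sketch of how the nine elements could be represented and aggregated in code. The element names follow the table above; the class name, the 1–5 rating scale, and the unweighted averaging rule are illustrative assumptions, not part of Nesbit et al.’s instrument.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import Dict

# The nine elements from Table 11, in snake_case for use as keys.
ELEMENTS = (
    "content_quality", "learning_goal_alignment", "feedback_adaptation",
    "motivation", "presentation_design", "interaction_usability",
    "accessibility", "reusability", "standards_compliance",
)

@dataclass
class LearningObjectReview:
    """Hypothetical review record: each element rated on a 1-5 scale."""
    ratings: Dict[str, int] = field(default_factory=dict)

    def rate(self, element: str, score: int) -> None:
        # Reject unknown elements and out-of-range scores.
        if element not in ELEMENTS:
            raise ValueError(f"Unknown element: {element}")
        if not 1 <= score <= 5:
            raise ValueError("Score must be between 1 and 5")
        self.ratings[element] = score

    def overall(self) -> float:
        # Simple unweighted mean; a real adaptation might weight elements.
        if not self.ratings:
            raise ValueError("No ratings recorded")
        return mean(self.ratings.values())

# Usage example.
review = LearningObjectReview()
review.rate("content_quality", 4)
review.rate("interaction_usability", 3)
print(f"Mean of rated elements: {review.overall():.2f}")
```

An adaptation for a specific context would mainly involve deciding which elements to weight more heavily and who the raters are; the instrument itself only defines the dimensions.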


Learn More

Gaming and Brain Training

Posted on Dec 7, 2013 in Blog

This entry is part 3 of 3 in the series Maximizing Instructional Effectiveness Through Gamification

Bavelier, Green, Pouget and Schrater (2012) reviewed empirical research on the relationship between playing action video games and brain plasticity and learning. Table 7 summarizes these findings and provides a better understanding of the effects of playing video games on different aspects of players’ cognitive functions.

Table 7. Research on the effect on cognition of playing video games 

Research references | Aspects of cognition
(Green & Bavelier, 2006)
  • Videogame Players (VGPs) are better at multitasking than Non-Videogame Players (N-VGPs).
(Andrews, Murphy, & Vanchevsky, 2006); (Boot, Kramer, Simons, Fabiani, & Gratton, 2008a); (Cain, Landau, & Shimamura, 2012); (Colzato, 2010); (Green, Sugarman, Medford, Klobusicky, & Bavelier, 2012); (Karle, Watter, & Shedden, 2010); (Strobach, Frensch, & Schubert, 2012)
  • VGPs have better task switching abilities than N-VGPs.
(Anderson, Kludt, & Bavelier, 2011); (Boot, Kramer, Simons, Fabiani, & Gratton, 2008)
  • VGPs have better short-term memory than N-VGPs.
(Greenfield, 2009); (McClurg & Chaille, 1987); (Subrahmanyam & Greenfield, 1994)
  • VGPs display better spatial cognition than N-VGPs.
(Feng, Spence, & Pratt, 2007)
  • VG playing enhances mental rotation abilities.
  • VG playing “can eliminate gender difference in spatial attention and simultaneously decrease the gender disparity in mental rotation ability, a higher-level process in spatial cognition…after only 10 hours of training…with women benefiting more than men” (Feng et al., 2007, p. 850).
(Okagaki & Frensch, 1994)
  • Video game playing improves mental rotation time and spatial visualization time in both males and females.
  • “[R]eliable and consistent differences between males and females were only obtained on complex mental rotation tasks” (Okagaki & Frensch, 1994, p. 33).
(Subrahmanyam & Greenfield, 1994)
  • “video game practice was more effective for children who started out with relatively poor spatial skills” (Subrahmanyam & Greenfield, 1994, p. 13).
  • “video games may be useful in equalizing individual differences in spatial skill performance, including those associated with gender” (Subrahmanyam & Greenfield, 1994, p. 13).
(Anderson, Kludt, & Bavelier, 2011a); (Chisholm, Hickey, Theeuwes, & Kingstone, 2010); (Colzato, 2010); (Karle et al., 2010)
  • VGPs are better than N-VGPs on some aspects of executive function.
(Bavelier, Green, Pouget, & Schrater, 2012)
  • VGPs employ executive strategies to reduce the effects of distraction better than N-VGPs, especially in highly complex environments.
  • The extent of the suppression of irrelevant information is directly proportional to the speed of their responses.
  • VGPs seem to focus better on the task.
(Green & Bavelier, 2006)
  • Video game playing enhances visual acuity.
(Bavelier, Green, Pouget, & Schrater, 2012a); (Bavelier, Achtman, Mani, & Föcker, 2011)
  • VGPs have greater flexibility of resource allocation than N-VGPs.
  • Resource allocation in VGPs is faster and more automatic.
  • “Video game play leads to not only enhanced resources, but also a more intelligent allocation of these resources given the goals at hand. This is one of the ways action game play may result in learning to learn” (Bavelier, Green, Pouget, & Schrater, 2012a, p. 408).
(Green, Pouget, & Bavelier, 2010)
  • VG playing enhances learning to learn.
  • VG playing enhances transfer of learning.
  • VG playing “enhances performance in a wide variety of tasks” (Green, Pouget & Bavelier, 2010, p. 1573).
  • “VGPs perform better than N-VGPs do on tasks neither group had previously experienced and that are, as noted earlier, quite different in nature from action game play” (Bavelier, Green, Pouget, & Schrater, 2012a, p. 399).

(Adapted from Bavelier, Green, Pouget & Schrater, 2012)

Research findings provide substantial indication that playing video games enhances a number of different aspects of players’ cognitive functions. Video game players have better task-switching and multitasking abilities than non-players; they outperform non-players on tasks requiring mental rotation and spatial cognition; they employ executive strategies to reduce the effects of distraction and to suppress irrelevant information, especially in highly complex environments; and they display enhanced resources, greater flexibility in resource allocation, and wiser use of those resources. These enhanced cognitive functions appear to contribute to the development of the learning-to-learn ability and to promote transfer of learning.

A recent review of research findings on the effect of playing video games on the cognitive abilities of older adults revealed that “Not only do video games improve specific cognitive domains for older adults, evidence suggests they can affect global cognitive functioning as well,” even though most of these video games were not originally designed to improve cognitive skills (Kueider et al., 2012). Kueider et al.’s findings provide further evidence that video game playing enhances many of the same aspects of cognitive function as those identified in Bavelier et al.’s review cited above. Table 8 summarizes Kueider et al.’s findings.

Table 8. Research on the effects of playing video games on cognition of older adults (50 to 86 years of age).

Research references | Intervention | Duration | Significant findings on aspects of cognition
(Goldstein et al., 1997) | SuperTetris | 5 weeks: at least 300 min/week; playing time varied: 25.5–36.5 hrs | Improved reaction time.
(Ackerman, Kanfer, & Calderwood, 2010) | Wii Big Brain Academy | 4 weeks: 5 times/week for 60 min | Improved on task-specific fluid, crystallized and perceptual speed measures.
(Basak, Boot, Voss, & Kramer, 2008) | Rise of Nations | 4–5 weeks: 3 times/week for 90 min | Improved memory, executive function, and visuo-spatial abilities.
(Belchior & Mann, 2007) | UFOV or Medal of Honor | 2 weeks: 2–3 times/week for 90 min | Improved useful field of view and processing speed; no difference between the Medal of Honor and Tetris groups.
(Clark, Lanphear, & Riddick, 1987) | Pac Man or Donkey Kong | 7 weeks: 120 min/week | Improved reaction time.
(Waters, 1986) | Atari Crystal Castles | 8 weeks: 2 times/week for 60 min | Improved psychomotor speed and global cognition.
(Dustman, Emmerson, Steinhaus, Shearer, & Dustman, 1992) | Breakout, Galaxian, Frogger, Kaboom, Ms. Pacman, Pengo, and Qix | 11 weeks: 3 times/week for 60 min | Improved reaction time and executive function.
(Torres, 2011) | QBeez, Super Granny 3, ZooKeeper, Penguin Push, Bricks, Pingyn, memory games | 8 weeks: 1 time/week | Showed less cognitive decline.

Adapted from (Kueider, Parisi, Gross, & Rebok, 2012b, p. 7)

In summary, research findings seem to indicate that playing video games enhances the global cognition of older adults and slows their cognitive decline. Findings also suggest that older adults who play video games improve their reaction time and processing speed, as well as other cognitive functions such as memory, executive function, and visuo-spatial abilities.

A number of companies have developed video game training programs designed to improve the cognitive abilities of older adults (Green & Bavelier, 2008). These training programs require users to perform tasks “that are highly similar in content and structure with tests used on psychological assessment scales” (Green & Bavelier, 2008, p. 7). According to Green and Bavelier, these types of training “have shown clear improvements in abilities specific to those trained as well as maintenance of those gains from 3 months to 5 years” (p. 7); however, they added that there is a lack of empirical evidence on the transfer of those skills to real-life situations. Bavelier et al. (2012) came to similar conclusions: “changes in knowledge produce benefits only to the extent to which new tasks share structure with action video games. No benefits are expected in tasks that share no such structure” (p. 410), even though it is well recognized that “some mechanisms of learning appear to be shared across domains” (Green & Bavelier, 2008, p. 8).

The difference between video game playing and cognitive training is that in video game playing the user is usually engaged in more than one cognitive function at once, whereas cognitive training programs are mostly designed to train users in one specific cognitive domain at a time (Green & Bavelier, 2008). The separation of domains for training “leads to faster learning during the acquisition phase, yet it can be detrimental during the retention phase, leading to less robust retention and to lesser transfer across tasks” (Green & Bavelier, 2008, p. 7). On the other hand, these authors point out that “variability in learning experience will result in less extensive learning during the acquisition phase but larger transfer to new tasks during retention tests” (Green & Bavelier, 2008, p. 8). In the same way, “tasks that require very low-level representations will show less generalization of learning than those that rely on higher levels of representation” (Green & Bavelier, 2008, p. 8). It is important to keep in mind that Green and Bavelier cautioned that this theory still needs to be tested.

Kueider et al. (2012) reviewed research findings (a) on the effect of cognitive training on the cognition of older adults (Table 9) and (b) on the effect of training using neuropsychological software on the cognition of older adults (Table 10). It is interesting to note that research on these two types of training arrives at similar findings, and that these findings are in line with those on the effect of playing video games on the cognition of older adults.

Table 9. The effect of cognitive training on the cognition of older adults 

Research references | Intervention | Duration | Significant findings on aspects of cognition
(Bherer et al., 2005) | Dual task training: variable or fixed priority | 3 weeks: 2 times/week for 60 min | Both groups (variable or fixed) improved reaction time; no difference between groups.
(Bherer et al., 2008) | Dual task training: variable or fixed priority | 3 weeks: 2 times/week for 60 min | Reaction time decreased and task accuracy improved in both groups (variable or fixed); no difference between groups.
(Buschkuehl et al., 2008) | Working memory training | 12 weeks: 2 times/week for 45 min | Improved on all working memory and reaction time measures and several non-trained memory measures.
(Dahlin, Neely, Larsson, Bäckman, & Nyberg, 2008) | Executive function training | 5 weeks: 3 times/week for 45 min | Improved training-specific executive function measures.
(Edwards et al., 2002) | Processing speed training | 2 weeks: 2 times/week for 60 min | Improved processing speed; control group improved verbal fluency/executive function.
(Edwards et al., 2007) | Processing speed training | 5 weeks: 2 times/week for 60 min | Improved processing speed.
(Hinman, 2002) | Biodex Balance System | 4 weeks: 3 times/week for 20 min | Reaction time did not improve.
(Klusmann et al., n.d.) | Complex cognitive tasks | 24 weeks: 3 times/week for 90 min | Improved memory and executive function.
(Mozolic, Long, Morgan, Rawley-Payne, & Laurienti, 2011) | Selective visual and auditory attention training | 8 weeks: 1 time/week for 60 min | Improved executive function, working memory and reaction time.
(Roenker, Cissell, Ball, Wadley, & Edwards, 2003) | Processing speed training | 2 weeks for a total of 4.5 hours | Intervention group's useful field of view (UFOV) performance equal to the no-contact group at post-test; improved reaction time.
(Slegers, van Boxtel, & Jolles, 2009) | Training: introduced and practiced with computers; intervention: equipped with a computer and internet access, received no specific instructions | Training: 2 weeks, three 4-hour sessions; intervention: 52 weeks | Improved memory and executive functions; no-training groups showed decreased performance.
(Vance et al., 2007) | Processing speed training; discussed how speed of processing relates to everyday activities | 3–68 weeks (M = 12 weeks): ten 60-min sessions | Improved processing speed and attention; improved visuo-spatial abilities, psychomotor speed and memory.
(Bisson, Contant, Sveistrup, & Lajoie, 2007) | Virtual reality group & computer-based biofeedback training | 10 weeks: 2 times/week for 30 min | Both groups improved functional balance and mobility and decreased reaction time with training.
(Cassavaugh & Kramer, 2009) | Attention, visuo-spatial working memory, and manual control tasks | Eight 90-min sessions | Improved executive function, attention, processing speed, and increased accuracy.
(Finkel & Yesavage, 1989) | Computer Assisted Instruction using the method of loci mnemonic; amount of training not specified | Total of 14 hours at 2 hrs/day | Improved memory.
(Jennings, Webster, Kleykamp, & Dagenbach, 2005) | Repetition lag memory training: recollection or recognition practice | 3 weeks: 2 times/week for 60 min | Recollection group improved memory, psychomotor speed, and executive function accuracy.
(Lajoie, 2004) | Balance training | 8 weeks: 2 times/week for 60 min | Improved reaction time.
(Li et al., 2008) | Spatial working memory training | 12 weeks: 45 daily sessions for 15 min | Improved spatial working memory and executive function.
(Lustig & Flegal, 2008) | Memory-training procedures (1) under specific strategy instructions designed to encourage semantic, integrative encoding, or (2) in a condition that encouraged time and attention to encoding but allowed participants to choose their own strategy | 3 weeks: eight 110-min sessions | Both groups improved on a training-specific memory measure and executive function; integrated-sentences group improved on a non-trained memory measure.
(Ralls, 1997) | Logical reasoning and spatial ability training: basic computer course | Training: three 120-min sessions; computer course: 6 weeks, once a week for 90 min | Improved spatial orientation.
(Wadley et al., 2006) | Lab or home-based processing speed training | 5 weeks: 2 times/week for 60 min | Both groups improved processing speed; no difference between groups.
Adapted from (Kueider et al., 2012b, pp. 4–5)

Table 10. The effect of training using neuropsychological software on cognition of older adults. 

Research references | Intervention | Duration | Significant findings on aspects of cognition
(Blackford, 1989) | Einstein Memory Trainer: focused on names and faces, method of loci, peg word, important dates and phone numbers; or classroom instruction with the Einstein Memory manual | 8 weeks: twice/week | Classroom group improved more than computer group on visuo-spatial abilities; computer group improved more than no-contact controls; classroom group improved more than no-contact controls on a delayed measure of visuo-spatial ability.
(Bottiroli & Cavallini, 2009) | NeuroPsychological Training | 3 weeks: once/week for 120 min | Improved training-specific memory measures; improved on transfer memory measures.
(Eckroth-Bucher & Siberski, 2009) | Sound Smart and Captain’s Log programs, paper and pencil-based activities | 6 weeks: twice/week for 45 min | Non-impaired group improved logical memory.
(Mahncke et al., 2006) | Memory, sensation, motor control, and cognition tasks | 8–10 weeks: 5 times/week for 60 min | Improved on task-specific measures; gains generalized to non-trained measures of memory.
(Peretz et al., 2011) | CogniFit Personal Coach | 12 weeks: 3 times/week for 20–30 min | Improved focused and sustained attention, memory recognition, and mental flexibility; improved memory recall, visuo-spatial learning/working memory, and executive function; participants with lower baseline scores benefited the most.
(Rasmusson, Rebok, Bylsma, & Brandt, 1999) | Colorado Neuropsychological Test memory tasks | 9 weeks: once/week for 90 min | Improved memory; performance decreased on prospective memory compared to the control group.
(Rebok, Rasmusson, & Brandt, 1996) | Colorado Neuropsychological Test memory tasks | 9 weeks: once/week for 90 min | Improved on implicit and explicit memory.
(Smith et al., 2009) | Posit Science Brain Fitness Program | 8–10 weeks: 4–5 times/week for 60 min | Improved auditory memory/attention, memory, and processing speed.
(Berry et al., 2010) | Lab or home-based Posit Science Sweep Seeker visual training | 3–5 weeks: 3–5 times/week for 40 min | Improved on trained and untrained perceptual tasks.
Adapted from (Kueider et al., 2012b, p. 6)


Learn More

Best Practices and Lessons Learned from Gaming in Education

Posted on Dec 7, 2013 in Blog

This entry is part 2 of 3 in the series Maximizing Instructional Effectiveness Through Gamification

Following a review of gaming in education, Larson McClarty et al. (2012) indicated that well-designed digital games are based on a number of learning principles. For example, they usually provide continuous and repeated practice for improvement with limited consequences for failing; they provide “the opportunity to think, understand, prepare and execute actions” (p. 8) in simulated environments; and they incorporate clear goals and provide immediate constructive feedback, which are important characteristics of formative evaluation. From the evidence gathered, the authors found only modest evidence that the skills developed while playing can be transferred to real-life situations. However, they pointed out that “Skills may be easier to transfer outside of games than specific content” (p. 9).

During the game, well-designed systems gather information on the actions of the players to identify their competencies. The game can then be adapted to the players’ strengths and weaknesses, allowing them to move from simple to complex levels: “Other scaffolding can be achieved through the use of graphics, such as navigation maps, which can lower a player’s cognitive load while playing” (Larson McClarty et al., p. 10). Digital games offer a unique learning progression, allowing ample practice time to suit players’ needs, since “digital games inherently force the player to master a concept in order to advance” (Larson McClarty et al., p. 11). Digital games usually provide players with support and choices, which promote active learning and player engagement; according to Larson McClarty et al., “The most common error in online education activities is a failure to provide the learner with an appropriate level of agency. Agency refers to the learner’s ability to interact with the material and feelings of belongness and socio-emotional support in the situation” (p. 11).
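As a concrete illustration of the adaptivity described above, here is a minimal sketch of a difficulty-adjustment loop. The class name, the 0–1 skill scale, the moving-average update rule, and the thresholds are hypothetical assumptions for illustration, not taken from any specific game or from Larson McClarty et al.

```python
from dataclasses import dataclass

@dataclass
class PlayerModel:
    """Hypothetical running estimate of a player's competency (0.0-1.0)."""
    skill: float = 0.5

    def record_attempt(self, success: bool, weight: float = 0.1) -> None:
        # Exponential moving average: recent attempts count more.
        target = 1.0 if success else 0.0
        self.skill += weight * (target - self.skill)

def next_difficulty(player: PlayerModel, step: float = 0.05) -> float:
    """Pick a difficulty slightly above the current skill estimate, so the
    task stays challenging but doable (cf. the zone of proximal development
    discussed in the text)."""
    return min(1.0, player.skill + step)

# Usage: the estimate drifts upward as the player succeeds,
# and the game keeps the challenge just ahead of ability.
player = PlayerModel()
for success in [True, True, False, True, True]:
    player.record_attempt(success)
    print(f"skill={player.skill:.2f} -> next difficulty={next_difficulty(player):.2f}")
```

The design choice here mirrors the point made in the next paragraph: the increment is kept small, so the player must consolidate current skills before the challenge rises.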

Larson McClarty et al.’s review of gaming in education revealed that well-designed games provide attainable challenges so that players have to stretch to reach their maximum potential, with some guidance and support. Digital games create “a state of pleasant frustration–challenging but doable–[which] is an ideal state for learning … similar to Vygotsky’s zone of proximal development” (p. 12). In the same way, Green and Bavelier posited that since motivation level is closely linked to an individual’s belief in his or her ability to succeed, one of the “principles for learning rules” used in video games is to provide only a small increase in difficulty level, so that players have to master new skills and techniques before they are allowed to move on to the next level; without this principle, mastery of the game would be impossible (Green & Bavelier, 2008).

Green and Bavelier’s analysis of research findings on learning transfer using video games indicates that “when a task was started at a difficult level…learning was slow and specific for the trained orientation and location…When the task was made easier…learning progressed quickly and transferred to novel orientations” (p. 9). This finding supports Vygotsky’s theory, which stipulates that “motivation is highest and learning is most efficient when tasks are made just slightly more difficult than can be matched by the individual’s current ability,” and provides further evidence for the Yerkes-Dodson law (Yerkes & Dodson, 1908), which predicts that “learning is a U-shaped function of arousal level” (Green & Bavelier, 2008, p. 10): a low level of arousal in skill learning leads to minimal learning, while the high level of arousal found in video games is likely to lead to a higher level of learning.

Gamers have ample opportunities to take risks with limited negative consequences and to learn from their mistakes, since they receive immediate feedback, which in turn enhances their motivation and their engagement to advance to more complex levels. Larson McClarty et al.’s analysis also revealed that when games are used in education they should be coupled with effective pedagogy “in order for the lessons learned in computer games to transfer to other contexts” (p. 13). Their research indicates that “Games support many of the components of flow such as clear goals, direct and immediate feedback, balance between ability level and challenge, and sense of control. These components can increase student engagement, and student engagement is strongly associated with student achievement” (Larson McClarty et al., 2012, p. 14).

The added value of the digital gaming environment is that it will “sustain engagement and motivation across time, particularly with more challenging learning tasks” (Larson McClarty et al., p. 13). This finding becomes even more important given that lack of engagement and motivation appears to be a root cause of school dropout in Canada, as shown by recent data from the Canadian Education Association (2009) on the intellectual engagement of Canadian youth in the school system. Intellectual engagement in that study was operationalized as “A serious emotional and cognitive investment in learning, using higher order thinking skills (such as analysis and evaluation) to increase understanding, solve complex problems, or construct new knowledge” (Canadian Education Association, 2009, p. 7). The report states that a disproportionate percentage of Canadian students (37%) are not intellectually engaged in the study of important subjects such as mathematics and language arts. Additionally, the findings suggested that “intellectual engagement decreases steadily and significantly from Grade 6 to Grade 12. The longer students remain in school, the less likely they are to be intellectually engaged” (Canadian Education Association, 2009, p. 31). Larson McClarty et al. (2012) indicated that the gaming experience of youth shapes their expectations of learning environments. It is therefore not surprising that a great number of youth find school boring and that 70% of school dropouts said they were not motivated or inspired to work hard (Larson McClarty et al., 2012, p. 13). These authors report similar findings from studies performed in Europe and in Scotland.

Larson McClarty et al. (2012) summarize the 21st-century skills that video games foster. According to their review, video games “capture the players’ attention and engage them in complex thinking and problem solving” (p. 16). Their review also identified a number of skills that video games can measure: “collaboration, innovation, production, and design” (p. 16). Advocates of the use of video games indicate that, rather than simply reading about hypothetical situations, games give players ample opportunities to experience a variety of situations, to think systematically, to analyse relationships, and to develop decision-making and problem-solving skills; they also foster “collaboration, problem solving, and procedural thinking” (p. 16). Drawing on McFarlane, Sparrowhawk, and Heald (2002), Larson McClarty et al. posited that the neglect of video games in the classroom may be due to the fact that the skills video games develop “are not currently tested or explicitly valued in educational systems” (p. 17).

Learn More

Benefits of Gamification

Posted on Dec 7, 2013 in Blog

This entry is part 1 of 3 in the series Maximizing Instructional Effectiveness Through Gamification

Technology is impacting what, where, when and how we learn. Easy access to videos, electronic and interactive games, laptops, tablets and cell phones plays an important role in formal and informal learning. The use of gaming for learning and assessment has increased in recent years, and research is starting to investigate its benefits and potential drawbacks. Claims have been made that digital games can “teach and reinforce skills important for future jobs such as collaboration, problem-solving, and communication” (Larson McClarty, Orr, Frey, Dolan, Vassileva, & McVay, 2012, p. 4). According to the Federation of American Scientists, the Entertainment Software Association and the National Science Foundation, employers are seeking “many of the skills required for success in games such as thinking, planning, learning, and technical skills” (Larson McClarty et al., 2012, p. 4). Given the apparent importance of gaming in learning, this section of the review provides a brief definition of games and analyzes research evidence on the use of digital games in learning.

Games are usually described using typical key words such as “enjoyment, fun, rules, systems, challenge, goals, interaction,” and are usually set in opposition to “work.” Salen and Zimmerman (2004) define a game as a “system in which players engage in artificial conflict, defined by rules, that results in a quantifiable outcome” (p. 80, as cited in Larson McClarty et al., 2012, p. 5). According to Larson McClarty et al. (2012), digital games can be described in the same terms with the added technology dimension. These authors examined the empirical evidence behind five claims that have been made regarding the benefits of digital games for learning:

  1. Games are built on sound learning principles;
  2. Games provide personalized learning opportunities;
  3. Games provide more engagement for the learner;
  4. Games teach 21st century skills; and
  5. Games provide an environment for authentic and relevant assessment (Larson McClarty et al., 2012, p. 7).

Their findings indicated that most studies were based on data gathered from student and teacher surveys and that results “consist of descriptive analysis of the impact games have on students’ attitude towards the subject being taught and their motivation to attend and engage in class” (p. 21). These authors argued that only a few studies investigated the relationship between games and academic performance, and that “their results are mixed because of the difference in definitions and methodologies” (p. 22). Larson McClarty et al. (2012) stress the need to identify an agreed-upon set of features and to create definitions and models for the different game attributes so that a more coherent research approach can be taken to measure the efficacy of games. They added that unless research can produce clear evidence of the impact of the use of games for learning, games will continue to be considered solely as motivational devices.

Summarizing the benefits of games in learning, Larson McClarty et al. indicated that “what is most unique about digital games – as opposed to any other learning innovation – is the combination of motivation, engagement, adaptivity, simulation, collaboration, and data collection that can’t be achieved at scale any other way” (p. 22). They concluded that games seem to increase higher-order thinking skills and that “in general, the research supports that digital games can facilitate learning” (p. 23).


Learn More

Critics of Brain Training

Posted on Dec 7, 2013 in Blog

This entry is part 17 of 17 in the series Best Practices in Brain Training

In spite of the growing interest in and demand for brain training, there appears to be a paucity of rigorous, scientific, evidence-based information regarding the effectiveness of these programs. Of all the brain-training providers, Posit Science appears to have the best documented empirical evidence to support the benefits claimed for its products. Co-founder and Chief Scientific Officer Michael Merzenich, known for three decades of pioneering work in brain plasticity research, heads the company’s scientific team. Some attempts being made to investigate this largely unchecked industry are briefly described below.

Thomas and Baker (2012) conducted a critical review of 20 studies that tested for training-related structural plasticity in the adult human brain using MRI methods. Only one of these studies provided strong evidence that a brain-training task had specific effects on particular regions of the brain. They cautioned that “limitation of the experimental design, statistical methods, and methodological artifacts may underlie many of the reported effects, seriously undermining the evidence for training-dependent changes in adult humans” (Thomas & Baker, 2012, p. 225).

A team of British researchers conducted a study to investigate whether regular brain training results in significant improvement in cognitive function. The participants consisted of 11,430 adults aged 18–60 who took part in a six-week online training program specifically designed to improve reasoning, memory, planning, visuo-spatial skills and attention. Results indicated that although the study group showed improvement on the trained cognitive tasks, there was no evidence that these skills transferred to other tasks, even cognitively closely related ones. The conclusion reached was that “the widely held belief that commercially available computerized brain-training programs improve general cognitive function in the wider population in our opinion lacks empirical support” (Owen et al., 2010, p. 1). This study has itself been criticized for allocating insufficient training time to participants and for poor quality control (Harrell, 2010).

In 2008, the United Kingdom-based consumer organization Which? asked the manufacturers of the popular brain training programs Brain Training, Test and Improve Your Memory, Mindfit, Lumosity and MindSpa to describe the benefits of using their programs and to provide scientific evidence to support their claims. Which? then asked three experts to review the stated benefits and claims made by these companies. The experts concluded that, in general, there was no empirical evidence to support the claims regarding the effectiveness of these programs. Additionally, in most cases the research design was flawed, the methodology lacked rigor, and the results were not vetted through peer review. A more detailed description of the claims made by the companies and the experts’ deliberations is provided below. A similar conclusion about brain training programs was reached by expert video game blogger Rachel Ponce: “But for all of the many video game developers who have jumped on this recent brain-game bandwagon, none have been able to show that their particular games offer real, scientifically validated cognitive effects” (Ponce, 2012, p. 1). Ponce conducted a thorough review of Lumosity and concluded that the benefits advocated and the claims made are largely unsubstantiated.

Table 6. Reviews of Brain Training Programs

Brain training program | Price | Company claims | Which? expert reviews
Brain Training (Which?, 2009, p. 1) | £110 | The Nintendo exercises increase blood flow to the brain and can help stimulate memory. | Research has not been published in recognized scientific journals. Increased brain activity in terms of blood flow does not provide evidence that the brain is being trained or altered.
Test and Improve Your Memory (Which?, 2009, p. 2) | £9.99 | The training can improve thinking ability, prevent brain aging, and hone memory, language, concentration, visual/spatial skills and executive function. | There is no evidence that the training helped to improve general mental ability.
Mindfit | £88 | Training can improve short-term memory, spatial memory, visual perception, scanning, divided attention, shifting, awareness, hand-eye coordination, time estimation, planning and inhibition. | Research has not been published in recognized scientific journals. However, a rigorous unpublished study tends to support the claims.
Lumosity | £4.99 a month | The training can improve memory, attention, processing speed and cognitive control. | The research methodology is flawed and the evidence does not support the claims regarding the effectiveness of the training.
MindSpa | £175 | The program can help children and adults to generate alpha brainwave activity to improve cognitive performance. | The research methodology is flawed and the evidence does not support the claims regarding the effectiveness of the training.

Source: Adapted from (Which?, 2009).

Research shows that the beneficial effects of brain training programs can be narrow in scope and limited to the specific training tasks. Hertzog and colleagues noted: “if enrichment effects were operative at the level of broad cognitive abilities then it would make sense, from applied perspective, to train individuals on a test that is known to be a reliable and valid indicator of the ability in question, regardless of whether performance on the test, taken by itself, is of direct practical utility” (Hertzog et al., 2008, p. 9). Citing Salthouse (2006), the researchers also noted that “one can only evaluate the benefit of mental exercise program interventions with very long term follow-up studies that show sustained and continuing benefits; one should not overgeneralize from positive results in short-term interventions” (p. 46).

Hertzog et al. deplored the fact that several firms are aggressively marketing brain training products that have not been empirically validated and reported according to scientific standards of good practice. They strongly argued for the establishment of standards for brain training programs: “The majority of software programs marketed as enhancing cognition or brain function lack supporting empirical evidence for training and transfer effects. Clearly, there is a need to introduce standards of good practice in this area. Software developers should be urged to report the reliability and validity of the trained tasks, the magnitude of training effects, the scope and maintenance of transfer to untrained tasks, and the population to which effects are likely to generalize” (p. 48).

Learn More