Instructional Effectiveness

A Framework to Examine the Effectiveness of Computerized Training

Posted on Dec 7, 2013 in Blog

This entry is part 1 of 5 in the series Instructional Effectiveness

Two key elements must be taken into account when considering the effectiveness of technology-mediated training: instructional effectiveness and instructional efficiency. These are elusive terms for which no precise definitions can readily be found in the literature. The difficulty in defining them is probably due to the number of factors extraneous to the material itself that confound any measurement of the quality of instruction.

In previous studies the efficiency and effectiveness of an instructional product have been used as dependent variables. Nathenson and Henderson (1980) note that research has had a very narrow focus with regard to the effectiveness of instructional materials: in many studies effectiveness has been viewed only in terms of learning gains on post-tests. The authors argued that although improved student performance is an important element, it should not be the only indicator of instructional material effectiveness. Chinien (1990) suggests that instructional material effectiveness should be viewed within a framework that encapsulates three major elements: achievement, study time, and the students’ attitude toward the material.


Achievement

Several studies (see Chinien & Boutin, 1994) have demonstrated that the quality of instructional material can significantly improve students’ achievement on post-tests. Two indicators of instructional material effectiveness are used with respect to achievement. The first relates to the ability of the material to help a predetermined percentage of students reach a designated level of mastery on post-tests. The second is the gain in learning, usually expressed as the difference between post-test and pre-test scores (learning gain = post-test score − pre-test score; Romiszowski, 1986).
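The two achievement indicators can be expressed as a short computation. The sketch below is illustrative only: the scores, the 80% mastery level, and the 80% criterion percentage are hypothetical values chosen for the example, not figures from the studies cited.

```python
# Illustrative sketch of the two achievement indicators described above.
# All scores and thresholds are hypothetical.

pre_scores = [45, 60, 55, 70, 50]    # pre-test scores (percent)
post_scores = [80, 85, 75, 90, 65]   # post-test scores (percent)

# Indicator 2: learning gain = post-test score - pre-test score (Romiszowski, 1986)
gains = [post - pre for pre, post in zip(pre_scores, post_scores)]

# Indicator 1: did a predetermined percentage of students (here 80%)
# reach a designated mastery level (here 80%) on the post-test?
mastery_level = 80
criterion = 0.80
mastered = sum(1 for s in post_scores if s >= mastery_level)
criterion_met = mastered / len(post_scores) >= criterion

print(gains)          # [35, 25, 20, 20, 15]
print(criterion_met)  # only 3 of 5 students reached mastery, so False
```

In this hypothetical cohort every student gained, yet the material would still fail the mastery criterion, which is why the two indicators are reported separately.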

Study Time

The amount of time that students spend interacting with an instructional product is another critical element of instructional material effectiveness. Nathenson and Henderson (1980) cite many research studies that reported improved achievement at the expense of increased study time. These authors quote Faw and Waller (1976) to emphasize the critical relationship between study time and the achievement component of instructional material effectiveness: “[Since] the single most important determinant of how much is learned is probably total study time…it is hardly surprising that the manipulations which tend to extend the period of time spent in study…are in general accompanied by superior levels of learning.” There are also some studies demonstrating improved student performance on post-tests while keeping study time constant. Study time is also commonly referred to as a measure of efficiency (Davis, Alexander, & Yelon, 1974; Futrell & Geisert, 1984).


Student Attitude

A third dimension of instructional material effectiveness is the student’s attitude toward the material. Studies conducted by Abedor (1972), Stolovitch (1975), and Wager (1980) indicate that effective instructional materials generate more positive student attitudes. On the other hand, Berthelot (1978) and Chinien (1990) found no significant differences in students’ attitudes related to the quality of instructional material. Romiszowski (1986) cautioned that novelty effects may confound measures of students’ attitudes: novelty may not only inspire negative attitudes that diminish over time, but may also generate excessive praise and enthusiasm that likewise disappear. Although research on time-on-task indicates a positive correlation between achievement and time engaged in learning tasks, time is not generally used as an independent variable in research on distance education.

The effectiveness of instructional material can thus be conceptualized within a framework of three major elements: student achievement, study time, and student attitude. All three elements are important and need to be considered collectively when assessing instructional material. Nesbit and his colleagues have developed a useful instrument for evaluating e-learning objects, which can be adapted and used for Neurogenesis. This instrument comprises nine key elements, as described below (Nesbit, Belfer, & Leacock, n.d.):

Table 11. Effectiveness of E-learning Objects

  • Content quality: veracity, accuracy, balanced presentation of ideas, and appropriate level of detail
  • Learning goal alignment: alignment among learning goals, activities, assessments, and learner characteristics
  • Feedback adaptation: adaptive content or feedback driven by differential learner input or learner modeling
  • Motivation: ability to motivate and interest an identified population of learners
  • Presentation design: design of visual and auditory information for enhanced learning and efficient mental processing
  • Interaction usability: ease of navigation, predictability of the user interface, and the quality of the interface help features
  • Accessibility: design of controls and presentation formats to accommodate disabled and mobile learners
  • Reusability: ability to use in varying learning contexts and with learners from different backgrounds
  • Standards compliance: adherence to international standards and specifications
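The nine elements can be applied as a simple scoring rubric. The sketch below assumes a 1–5 rating scale and invented ratings; the element names come from the table above, but everything else is a hypothetical illustration, not part of Nesbit and colleagues’ instrument.

```python
# Hypothetical sketch: applying the nine elements above as a 1-5 rating
# rubric. The element names come from the table; the ratings are invented.

ELEMENTS = [
    "Content quality", "Learning goal alignment", "Feedback adaptation",
    "Motivation", "Presentation design", "Interaction usability",
    "Accessibility", "Reusability", "Standards compliance",
]

def overall_score(ratings):
    """Average the 1-5 ratings, ignoring elements marked None (not rated)."""
    rated = [r for r in ratings.values() if r is not None]
    return sum(rated) / len(rated)

ratings = {e: 4 for e in ELEMENTS}
ratings["Feedback adaptation"] = None   # reviewer could not judge this element
ratings["Accessibility"] = 3

print(round(overall_score(ratings), 2))  # 3.88
```

Allowing an element to be left unrated matters in practice: a reviewer who cannot exercise the adaptive feedback of a learning object should not drag its score down with a guess.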



Elements of Program Quality

Posted on Dec 7, 2013 in Blog, Quality Standards for E-Learning

This entry is part 2 of 5 in the series Instructional Effectiveness

Proponents of quality in technology-mediated learning, or e-learning, do not agree on what constitutes quality. Given the proliferation of e-learning, some attempts are being made to develop quality standards for the use of technology in teaching and learning. The purposes of e-learning quality standards are to:

  • Maximize the validity, integrity and portability of e-learning;
  • Ensure that resource development follows internationally accepted specifications, and that the technologies and applications used to build and deliver the resources ensure the most consistent operation and the widest possible use and reuse of those resources;
  • Facilitate interoperability of learning resources and systems; and
  • Remove barriers to e-learning (AFLF, 2013, p. 4).

E-learning quality standards also ensure that the learner will acquire content skills and knowledge that are relevant and transferable to real world situations (Baker, 2002).

The program design must be underpinned by a sound learning theory in order to ensure the effectiveness and efficiency of the instructional product. A well-designed product also has the following positive outcomes for learners; it:

  • Captures the interest of learners and motivates them to learn the material;
  • Gives learners a sense of ownership of their learning;
  • Ensures that learners are cognitively stimulated and engaged in the learning process;
  • Gives learners ample opportunity to practice the skills being learned;
  • Scaffolds and supports learning; and
  • Encourages the deployment of metacognition (AFLF, 2010, p. 4).

In defining the conditions of learning, the education theorist Robert Gagné proposed nine events of instruction, which activate the information-processing stages that support effective learning. Gagné’s (1965) nine events are commonly used as a basic framework for developing instructional materials. These events are:

  • Gaining attention;
  • Informing the learner of the objective;
  • Stimulating recall of prerequisite learned capabilities;
  • Presenting the stimulus material;
  • Providing learning guidance;
  • Eliciting performance;
  • Providing feedback about performance correctness;
  • Assessing the performance; and
  • Enhancing retention and transfer. 
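One practical use of the nine events is as an audit checklist for a lesson storyboard. The sketch below is a hypothetical illustration: the lesson screens and their event tags are invented, and Gagné’s framework itself prescribes no such data structure.

```python
# Illustrative sketch: auditing a lesson storyboard against Gagné's nine
# events of instruction. The screens below are hypothetical examples.

GAGNE_EVENTS = [
    "gain attention", "inform objective", "stimulate recall",
    "present stimulus", "provide guidance", "elicit performance",
    "provide feedback", "assess performance", "enhance retention",
]

# Each screen of a hypothetical e-learning module, tagged with the
# events it implements.
lesson_screens = [
    {"title": "Opening video", "events": {"gain attention"}},
    {"title": "Objectives page", "events": {"inform objective"}},
    {"title": "Worked example", "events": {"present stimulus", "provide guidance"}},
    {"title": "Practice quiz", "events": {"elicit performance", "provide feedback"}},
]

covered = set().union(*(s["events"] for s in lesson_screens))
missing = [e for e in GAGNE_EVENTS if e not in covered]
print(missing)  # events the storyboard still needs to address
```

Run against this storyboard, the audit flags the absence of recall activation, formal assessment, and retention/transfer activities, exactly the kind of gap the framework is meant to expose before development begins.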

The use of systematic instructional design and development processes to develop instructional materials for augmenting analytical skills is important. A good program design includes the following steps:

  • Analyze learning needs;
  • Design instructional materials;
  • Develop instructional materials;
  • Evaluate instructional materials; and
  • Assess the learnability, effectiveness and efficiency of the instructional materials (AFLF, 2010).

Research Assessing the Effectiveness of ICT-Mediated Learning

Posted on Dec 7, 2013 in Blog, Training Effectiveness Framework

This entry is part 3 of 5 in the series Instructional Effectiveness

An analysis of the extensive body of research compiled by Russell (1999) to assess the effectiveness of ICT-mediated learning leads to the conclusion that there is no significant difference in performance measures between learning with and without technology. A meta-analysis of over 500 studies conducted by Kulik (1994), as cited by Baalen and Moratis (2001), indicated that students receiving computer-based instruction tend to learn more in less time. Baalen and Moratis (2001) identified some interesting trends from these studies: “The preference of students for face-to-face instruction reported in the 1950s and 1960s can perhaps be attributed to their unfamiliarity to the technology. Recent research tends to show a developing preference for distance learning among post-secondary learners.” Earlier studies were designed to demonstrate that technology would not have a negative impact on learners’ performance; the goal was to prove a non-significant difference. In contrast, more recent studies have attempted to determine whether technology-based learning is more effective than face-to-face instruction. Although most of these studies report no significant difference in outcome measures, many others report achievement equal or superior to that of traditional classroom instruction.

Earlier attempts to use technology for learning were restricted to drill-and-practice and tutorial programs. With today’s enabling technology, ICT-mediated learning engages learners in authentic learning tasks that allow them to use the technologies to communicate, collaborate, analyze data and access information sources. Although research on these innovative applications of ICTs in education is not extensive, some studies have demonstrated positive learning outcomes in support of ICTs. After reviewing the literature and research on distance education, Merisotis and Phipps (1999) concluded: “It may not be prudent to accept these findings at face value. Several problems with the conclusions reached through these studies are apparent. The most significant problem is that the overall quality of original research is questionable and thereby renders many of the findings inconclusive” (p. 3). Some of the shortcomings identified are: much of the research failed to control for extraneous variables; most studies failed to use randomly selected subjects; instruments of questionable validity and reliability were used; and many studies failed to control for reactive effects.

Brennan, McFadden and Law (2001) similarly concluded that “the gaps between the often rhetorical claims of ‘effectiveness’ and the reality of well-researched studies are not often bridged” (p. 64). A more recent systematic attempt to shed light on the effectiveness of e-learning was conducted by the US Department of Education in 2010. The department conducted a meta-analysis of 50 e-learning studies involving older learners and reached the general conclusion that “students in online conditions performed modestly better, on average, than those learning the same material through traditional face-to-face instruction” (US Department of Education, 2010, p. xiv). Few rigorous research studies assessing the effectiveness of e-learning for youth were found.

Many studies comparing ICT-mediated learning to traditional face-to-face instruction are also of limited relevance and value, for two main reasons. First, it is impossible to establish a benchmark for making a meaningful comparison. Second, several years of educational research spent comparing methods of instruction have failed to inform practice. Aptitude-by-Treatment Interaction research indicates that an instructional treatment interacts with the learner’s characteristics to produce differential learning gains. Snow (1976) argued: “No matter how you try to make an instructional treatment better for someone you will make (it) worse for someone else” (p. 292). Similarly, according to Messick (1976), “No matter how you try to make an instructional treatment better in regard to one outcome, you will make (it) worse to some other outcomes” (p. 266). Clearly, there is a need to develop a conceptual framework to guide research in ICT-mediated learning, and there is also an urgent need to impose more rigor on research in this area.

After conducting a thorough review of research on online delivery of education and training, Brennan, McFadden and Law (2001, p. 65) concluded that there are many tensions in the literature regarding the effectiveness of online teaching and learning. In an attempt to explain these tensions, Baalen and Moratis (2001) argued that assessing the effectiveness and efficiency of ICT-mediated learning using empirical research results provides only a very narrow perspective on the true value of learning technologies. They suggested that the effectiveness and efficiency of ICT-mediated learning are “emergent”: it is only through experimentation and experience that the true value of learning technologies can be realized.

ICT-mediated learning appears to hold great promise for achieving the goals of education for all, such as reducing poverty and promoting social inclusion. However, the integration of ICTs in education requires considerable investment of time and resources. Consequently, when planning to integrate ICT in education and training, policy-makers should be able to draw on evidence-based information to make sound decisions. In spite of the critical importance of sound research to guide policy and practice, there appears to be a lack of valid and reliable evidence-based information in the field of learning technology. Many studies conducted during the past 70 years have failed to establish a significant difference in effectiveness between learning technology and traditional methods. While these findings tend to suggest that learning technology does not considerably improve learning, the fundamental question that remains unanswered is: “were the researchers assessing the effectiveness of ICTs or were they simply assessing the effectiveness of instructional treatments that were initially less than perfect? If the instructional treatment is weak or flawed it may lead the researcher to make either: (1) a type 1 error, that is, rejecting the null hypothesis when it is true; or (2) a type 2 error, that is, failing to reject the null hypothesis when it is false; and lead the researcher to reach false conclusions” (Chinien & Boutin, 2005).
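The type 2 error concern can be made concrete with a small simulation: if the instructional product under test produces only a weak true effect, a conventional two-group comparison will usually miss it. This sketch is not from the source; the effect sizes, sample size, and trial counts are hypothetical, and the rough critical value of 2.0 stands in for an exact t threshold.

```python
import random
import statistics

# Illustrative sketch (not from the source): simulate how a weak
# instructional treatment inflates type 2 error in a two-group comparison.
# Effect sizes and sample sizes are hypothetical placeholders.

def simulate(effect, n=30, trials=2000, seed=42):
    """Return the fraction of trials whose t statistic reaches significance."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        control = [rng.gauss(0.0, 1.0) for _ in range(n)]
        treated = [rng.gauss(effect, 1.0) for _ in range(n)]
        # Welch-style t statistic for the difference in means
        se = (statistics.variance(control) / n
              + statistics.variance(treated) / n) ** 0.5
        t = (statistics.mean(treated) - statistics.mean(control)) / se
        if abs(t) > 2.0:  # rough two-tailed critical value at the 5% level
            hits += 1
    return hits / trials

weak = simulate(effect=0.2)    # flawed product: small true gain
strong = simulate(effect=0.8)  # sound product: large true gain
# A weak treatment is detected far less often, so "no significant
# difference" may reflect the product's flaws, not the technology.
```

Under these hypothetical settings the weak treatment reaches significance in only a small fraction of trials, which is precisely the scenario Chinien and Boutin warn about: a null finding that says more about the instructional product than about ICTs.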


Developmental Testing of ICT-Mediated Learning Materials

Posted on Dec 7, 2013 in Blog, Quality Standards for E-Learning

This entry is part 4 of 5 in the series Instructional Effectiveness

While a general consensus is emerging regarding the need to integrate ICTs in teaching and learning, there is little empirical evidence to support the decision-making process. In fact, over 350 research projects conducted during the past 70 years have failed to establish a significant difference in effectiveness between ICT and traditional methods (Baalen and Moratis, 2001). While these findings tend to suggest that ICTs do not considerably improve teaching and learning, the fundamental question that remains unanswered is: Were the researchers assessing the effectiveness of ICTs or were they simply assessing the effectiveness of instructional products that were less than perfect?

In spite of considerable progress made in the development of instructional materials through the adoption of systematic instructional design, practitioners still have difficulty producing efficient and effective instructional materials, because our knowledge of human learning is still limited. Many of the critical assumptions made during the design and development of instructional products are based on learning theories that are weak, and the final product is therefore less than perfect (Dick & Carey, 1990; Gagné & Briggs, 1979). Conscious of this inherent difficulty, and recognizing that the design process is not foolproof, instructional developers have included a formative evaluation component in their models (Geis, Weston, & Burt, 1984). The purpose of formative evaluation is to provide instructional developers with an opportunity to identify and correct errors and problems within a set of instructional materials while they are still in a developmental stage (Baker & Alkin, 1984). Formative evaluation is defined as the “evaluation of educational programmes while they are still in some stage of development” (Baker & Alkin, 1984, p. 230). It is “the empirical validation of many of the theoretical constructs, which are included in earlier components of the instructional design model. If the theory is weak the product is less than properly effective. Since our present theories and practices are imperfect, we need empirical data as a basis for improving the product” (Dick, 1977, p. 312).

Formative evaluation of instructional material is an essential activity in the design and development of instruction, because there is no comprehensive theory of learning to guide practice (Nathenson & Henderson, 1980). Formative evaluation attempts to appraise such programs in order to inform program developers how to ameliorate deficiencies in their instruction. The heart of the formative evaluator’s strategy is to gather empirical evidence regarding the efficacy of various components of the instructional sequence, and then to consider the evidence in order to isolate deficits and suggest modifications (Popham, 1975). Early attempts at trying out and revising instructional materials date back to the 1920s, with educational films and radio (Cambre, 1981). Formative evaluation activities address two broad questions. The first relates to the content and technical quality of the material; the second pertains to its learnability. Content and technical quality are addressed through expert verification and revision, while students are generally believed to be the most qualified source of feedback data for assessing learnability (Nathenson and Henderson, 1980).

Expert Evaluation and Revision

The use of expert opinion in assessing the worth of an instructional product is probably the oldest evaluation strategy used in education. Expert opinion is an important evaluation tool because it is quick, it is cost-effective, and it tends to enhance the credibility of an instructional product. Additionally, expert opinion can be used to modify a product before it is used by students. Several types of experts are commonly used in the evaluation process, namely: content, language, target population, media, format, and delivery system experts:

  • The content expert will ensure that the content is relevant, accurate and up-to-date.
  • The language expert will ensure that the language is appropriate for the target population.
  • The target population expert will ensure that the material is appropriate for the designated group that will be using it. If the target population is adult learners, then the expert will ascertain that the material being evaluated is in agreement with the basic principles, philosophies, assumptions, and established theories in adult education.
  • The media expert will assess the particular characteristics of the learning technology in order to determine its appropriateness for addressing the learning needs of the target population. The media expert will also examine the cost-effectiveness of the proposed materials. Typical cost considerations include capital costs, installation/renovation costs, time costs, support personnel, training, maintenance, the cost of alternatives, and shared costs. The expert can also assess the societal costs of not implementing a technology-based product.
  • The format expert will determine if the material has been packaged to maximize its effectiveness and efficiency.
  • The delivery expert will ascertain that the material meets standards established by best practices. The effectiveness of instructional material depends to a large extent on how well instructional developers have been able to support internal learning processes with external events.

Learner Verification and Revision (LVR)

Learner Verification and Revision (LVR) consists of a three-stage approach (Dick and Carey, 1985). These stages are: one-to-one evaluation, small group evaluation, and field test.

One-to-One Evaluation

The one-to-one evaluation occurs in the early phase of development (Dick and Carey, 1985). It serves to “identify and remove the most obvious errors in the instruction, as well as to obtain the initial student’s reaction to the content” (p. 199). At least three students representative of the target population should be selected for this process: one with above-average ability, one with average ability, and one with below-average ability. In a one-to-one evaluation the student is exposed to the instructional materials as well as to all pre-tests, post-tests and embedded tests within the material. The one-to-one evaluation is an interactive process between student and evaluator. Data are collected through observation, interviews, embedded tests, post-tests, and an attitude questionnaire. The data can be used either for making on-the-spot revisions for minor problems or for delayed revisions for more complex ones. The one-to-one evaluation can enable the developer to uncover gross misconceptions in information processing; once uncovered, the material can easily be modified to address them.

Small Group Evaluation

The second stage of formative evaluation is conducted with a group of eight to twenty students representative of the target population (Dick and Carey, 1985). The small group evaluation has two main purposes: to validate modifications made to the material following the one-to-one evaluation, and to ascertain whether the student can use the material without the help of the evaluator. The term “small group” refers only to the number of students involved in the evaluation process; it does not imply that all students should be assembled in one location and evaluated at once. In a small group evaluation, the students are provided with all instructional materials and tests and are instructed to study the material at their own pace. The evaluator intercedes only if a major problem prevents a student from proceeding without help. After interacting with the materials and tests, the students are given an attitude questionnaire in order to obtain their reactions. Data gathered during the small group evaluation are used to further refine the instructional material.

Field Test

The field test or summative developmental evaluation is designed to verify the effectiveness of previous verifications and revisions performed during earlier phases of evaluation. The field testing also helps to ascertain if the instructional material will function smoothly, and whether it will be accepted by students, teachers, and administrators in the intended setting (Dick and Carey, 1985). The focus of the evaluation is on the merit of the instructional product in terms of achievement, attitude and study time.

Risk Assessment

In spite of the importance of formative evaluation, most instructional products in current use have not been systematically evaluated. The costs and time required are the two main deterrents to including formative evaluation in the instructional development process. A risk assessment can help to weigh the time and cost constraints against the consequences of making an inappropriate decision when adopting a technology-based learning product. Although most experts recommend a three-stage formative evaluation process, there is some empirical evidence in the literature (Wager, 1980b; Kandaswamy, 1976) suggesting that the small group evaluation can be eliminated without significantly affecting the overall effectiveness of the revised product.
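Such a risk assessment amounts to comparing the cost of an evaluation stage with the expected cost of skipping it. The sketch below is a hypothetical expected-value illustration; every figure and probability is an invented placeholder, not data from the literature cited.

```python
# Hypothetical sketch of the risk assessment described above: weigh the
# cost of an extra evaluation stage against the expected cost of adopting
# a flawed product. Every figure below is an invented placeholder.

def expected_loss(p_flaw_undetected, cost_of_failure):
    """Expected cost of shipping without the evaluation stage."""
    return p_flaw_undetected * cost_of_failure

small_group_cost = 5_000        # running a small-group evaluation
p_flaw_undetected = 0.30        # chance a serious flaw slips through without it
cost_of_failure = 50_000        # rework, lost study time, reputational cost

risk = expected_loss(p_flaw_undetected, cost_of_failure)
# If the expected loss exceeds the evaluation cost, keep the stage.
keep_small_group = risk > small_group_cost
print(keep_small_group)  # expected loss of 15,000 exceeds 5,000, so True
```

The same arithmetic run with a low failure probability or a cheap failure would justify dropping the small group stage, which is consistent with the evidence that it can sometimes be eliminated without harm.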

Although the importance of formative evaluation is well evidenced in the literature, it remains an underdeveloped and underconceptualized field of inquiry. There is a paucity of empirical foundations and rationales to support the guidelines and recommendations for the process. Research efforts are needed to improve and validate the formative evaluation methodologies in current use, so as to give more credibility to the formative evaluation process.


Canadian Guidelines for E-learning Quality Assurance

Posted on Dec 7, 2013 in Blog, Quality Standards for E-Learning

This entry is part 5 of 5 in the series Instructional Effectiveness

Baker (2002) has developed some guidelines for assessing the quality of e-learning in Canada. These quality guidelines are generic and are therefore broadly applicable to any area and level of education. Following is a brief summary of these guidelines adapted to the specific needs of this project.

Learner management:

Instructional product/service information for potential learners is: 

  • Clear;
  • Current;
  • Accurate;
  • Comprehensive;
  • Complete; and
  • Readily available.

Advertising, recruiting and admissions information includes: 

  • Pre-requisites and entry requirements;
  • The program overview;
  • Specific delivery format;
  • Course level and credit points;
  • Course length and degree requirements;
  • Types of assignments and grading methods;
  • Learning assessment procedures and evaluation criteria; and
  • All applicable fees, if any. 

Registration procedures include: 

  • A clear statement of expectations of learners;
  • Intake and placement procedures that provide for individualized programs, assessment, and recognition of prior learning; and
  • An orientation procedure. 

Management of learners’ records: 

  • Documents learners’ en-route and final achievement;
  • Ensures confidentiality of records; and
  • Gives learners access to their records. 

Technological support for the delivery and management of learning is: 

  • Navigable;
  • Reliable;
  • Sensitive to bandwidth constraints of students;
  • Compliant with current technology and ICT standards;
  • Appropriate to the subject matter content and skills;
  • Appropriate to the intended learning outcomes;
  • Appropriate to the characteristics and circumstances of the learner;
  • Easily updateable and frequently updated;
  • Designed to promote active learning;
  • Designed to support prior learning;
  • Designed to support collaborative learning and social networking;
  • Designed to support flexible learning;
  • Designed to include assistive devices for persons with disabilities; and
  • Designed to assist learners to use the technology system for accessing the learning program. 

Learning assessment is:

  • Authentic;
  • Competency-based;
  • Valid and reliable;
  • Frequent and timely; and
  • Accompanied by immediate feedback to learners. 

Instructional materials are: 

  • Designed and developed by experts;
  • Learner friendly;
  • Interesting in content;
  • Appealing to learners;
  • Well-organized;
  • Free of cultural, racial, class, age, and gender bias;
  • Accessible to those with disabilities;
  • Free from errors; and
  • Customizable and adaptable to learner needs and abilities. 

Learning content is: 

  • Credible with sources identified;
  • Accurate;
  • Relevant; and
  • Culturally sensitive. 

Learning package includes: 

  • Course description;
  • Learning objectives;
  • Assessment and completion requirements;
  • Learning resources;
  • Course activities and assignments;
  • Quizzes and examinations; and
  • Access to answers for questions/quizzes. 

Appropriate and necessary personnel include: 

  • Qualified support staff with teaching experience and relevant work experience and/or current knowledge in the field;
  • Appropriate skills for teaching online; and
  • Process support persons. 

Continuous improvements based on routine reviews and evaluation of:

  • Learner support services;
  • Course content and objectives;
  • Learning materials;
  • Instructional design;
  • Student learning and student achievement;
  • Policies and management practices;
  • Operational procedures; and
  • Learner satisfaction.