Quality Standards for E-Learning

Developmental Testing of ICT-Mediated Learning Materials


This entry is part 4 of 5 in the series Instructional Effectiveness

While a general consensus is emerging regarding the need to integrate ICTs in teaching and learning, there is little empirical evidence to support the decision-making process. In fact, over 350 research projects conducted during the past 70 years have failed to establish a significant difference in effectiveness between ICT and traditional methods (Baalen and Moratis, 2001). While these findings tend to suggest that ICTs do not considerably improve teaching and learning, the fundamental question that remains unanswered is: Were the researchers assessing the effectiveness of ICTs or were they simply assessing the effectiveness of instructional products that were less than perfect?

In spite of considerable progress made in the development of instructional materials through the adoption of systematic instructional design, practitioners still have difficulty producing efficient and effective instructional materials because our knowledge of human learning remains limited. Many of the critical assumptions made during the design and development of instructional products rest on learning theories that are weak, and the final product is therefore less than perfect (Dick & Carey, 1990; Gagné & Briggs, 1979). Conscious of this inherent difficulty, and recognizing that the design process is not foolproof, instructional developers have included a formative evaluation component in their models (Geis, Weston, & Burt, 1984). The purpose of formative evaluation is to provide instructional developers with an opportunity to identify and correct errors and problems within a set of instructional materials while they are still in a developmental stage (Baker & Alkin, 1984). Formative evaluation is defined as the “evaluation of educational programmes while they are still in some stage of development” (Baker & Alkin, 1984, p. 230). It is also described as “the empirical validation of many of the theoretical constructs, which are included in earlier components of the instructional design model. If the theory is weak the product is less than properly effective. Since our present theories and practices are imperfect, we need empirical data as a basis for improving the product” (Dick, 1977, p. 312).

Formative evaluation of instructional material is an essential activity in the design and development of instruction because there is no comprehensive theory of learning to guide practice (Nathenson & Henderson, 1980). Formative evaluation appraises programs while they are being developed in order to show developers how to remedy deficiencies in their instruction. The heart of the formative evaluator’s strategy is to gather empirical evidence regarding the efficacy of various components of the instructional sequence and then to consider that evidence in order to isolate deficits and suggest modifications (Popham, 1975). The earliest attempts at trying out and revising instructional materials date back to the 1920s, with educational films and radio (Cambre, 1981). Formative evaluation activities address two broad questions. The first relates to the content and technical quality of the material, and the second pertains to its learnability. Content and technical quality are addressed through expert verification and revision, while students themselves are generally considered the most qualified source of feedback data for assessing learnability (Nathenson & Henderson, 1980).

Expert Evaluation and Revision

The use of expert opinion in assessing the worth of an instructional product is probably the oldest evaluation strategy used in education. Expert opinion is an important evaluation tool because it is quick, it is cost-effective, and it tends to enhance the credibility of an instructional product. Additionally, expert opinion can be used to modify a product before it is used by students. Several types of experts are commonly used in the evaluation process, namely content, language, target population, media, format, and delivery system experts (see the sketch after this list):

  • The content expert will ensure that the content is relevant, accurate and up-to-date.
  • The language expert will ensure that the language is appropriate for the target population.
  • The target population expert will ensure that the material is appropriate for the designated group that will be using it. If the target population is adult learners, then the expert will ascertain that the material being evaluated is in agreement with the basic principles, philosophies, assumptions, and established theories in adult education.
  • The media expert will focus on the cost-effectiveness of the proposed materials. Typical cost considerations include capital costs, installation/renovation costs, time costs, support personnel, training, maintenance, the cost of alternatives, and shared costs. The expert can also assess the societal costs of not implementing a technology-based product.
  • The media expert will also assess the particular characteristics of the learning technology in order to determine its appropriateness for addressing the learning needs of the target population.
  • The format expert will determine if the material has been packaged to maximize its effectiveness and efficiency.
  • The delivery expert will ascertain that the material meets standards established by best practices. The effectiveness of instructional material depends to a large extent on how well instructional developers have been able to support internal learning processes with external events.
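
A review cycle like this is easier to manage when each expert’s sign-off is recorded explicitly. The Python sketch below is a minimal illustration only, assuming hypothetical role labels and an ExpertReview record that are not part of any published instrument.

    from dataclasses import dataclass, field

    # Expert roles described in the list above (labels are illustrative).
    EXPERT_ROLES = [
        "content", "language", "target population",
        "media", "format", "delivery system",
    ]

    @dataclass
    class ExpertReview:
        role: str
        approved: bool = False
        comments: list[str] = field(default_factory=list)

    def outstanding_reviews(reviews: dict[str, ExpertReview]) -> list[str]:
        """List the expert roles that have not yet approved the materials."""
        return [role for role in EXPERT_ROLES
                if role not in reviews or not reviews[role].approved]

    # Example: two reviews recorded so far; the rest are still outstanding.
    reviews = {
        "content": ExpertReview("content", approved=True),
        "language": ExpertReview("language", comments=["Reading level too high."]),
    }
    print(outstanding_reviews(reviews))
    # ['language', 'target population', 'media', 'format', 'delivery system']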

Learner Verification and Revision (LVR)

Learner Verification and Revision (LVR) consists of a three-stage approach (Dick and Carey, 1985). These stages are: one-to-one evaluation, small group evaluation, and field test.

One-to-One Evaluation

The one-to-one evaluation occurs in the earlier phase of development (Dick and Carey, 1985). It serves to “identify and remove the most obvious errors in the instruction, as well as to obtain the initial student’s reaction to the content” (p. 199). At least three students representative of the target population should be selected for this process: one with above average ability, another with average ability, and a third with below average ability. In a one-to-one evaluation the student is exposed to the instructional materials as well as to all pre-tests, post-tests, and embedded tests within the material. The one-to-one evaluation is an interactive process between student and evaluator. Data are collected through observation, interview, embedded tests, post-tests, and an attitude questionnaire. The data can be used either for making on-the-spot revisions for minor problems or for delayed revisions for more complex ones. The one-to-one evaluation can enable the developer to uncover gross misconceptions in information processing; once these are uncovered, the material can be easily modified to address the problems.
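
One practical way to handle the two kinds of revision mentioned above is to log each problem with its data source and whether it was fixed during the session or deferred. The sketch below is a hedged illustration; the Finding record and its field names are assumptions, not part of the Dick and Carey model.

    from dataclasses import dataclass

    @dataclass
    class Finding:
        learner: str          # e.g. "above average", "average", "below average"
        source: str           # observation, interview, embedded test, post-test, questionnaire
        description: str
        fixed_on_spot: bool   # minor problems revised immediately; complex ones deferred

    findings = [
        Finding("below average", "observation",
                "Instructions for exercise 2 were misread", fixed_on_spot=True),
        Finding("average", "post-test",
                "Item 5 tests content not covered in the unit", fixed_on_spot=False),
    ]

    # Deferred items become the delayed-revision list for the next draft of the materials.
    for item in (f for f in findings if not f.fixed_on_spot):
        print(f"Revise later: {item.description} (source: {item.source})")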

Small Group Evaluation

The second stage of formative evaluation is conducted with a group of eight to twenty students representative of the target population (Dick and Carey, 1985). The small group evaluation has two main purposes: to validate modifications made to the material following the one-to-one evaluation, and to ascertain whether students can use the material without the help of the evaluator. The term “small group” refers only to the number of students involved in the evaluation process; it does not imply that all students should be assembled in one location and evaluated at once. In a small group evaluation, the students are provided with all instructional materials and tests and are instructed to study the material at their own pace. The evaluator intervenes only if a major problem prevents a student from proceeding without help. After interacting with the materials and tests, the students are given an attitude questionnaire in order to obtain their reactions. Data gathered during the small group evaluation are used to further refine the instructional material.

Field Test

The field test, or summative developmental evaluation, is designed to verify the effectiveness of the revisions made during the earlier phases of evaluation. Field testing also helps to ascertain whether the instructional material will function smoothly, and whether it will be accepted by students, teachers, and administrators in the intended setting (Dick and Carey, 1985). The focus of the evaluation is on the merit of the instructional product in terms of achievement, attitude, and study time.
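
Because the field test judges the product on achievement, attitude and study time, the raw data reduce to a few simple aggregates. The sketch below is illustrative only; the sample records and the 80% mastery criterion are assumed values, not figures from the literature.

    from statistics import mean

    # Hypothetical field-test records: post-test score (%), attitude rating (1-5), study time (minutes).
    records = [
        {"score": 84, "attitude": 4, "minutes": 52},
        {"score": 71, "attitude": 3, "minutes": 64},
        {"score": 90, "attitude": 5, "minutes": 47},
    ]

    MASTERY_CRITERION = 80  # assumed criterion for illustration
    mastery_rate = sum(r["score"] >= MASTERY_CRITERION for r in records) / len(records)

    print(f"Mean achievement: {mean(r['score'] for r in records):.1f}%")
    print(f"Learners reaching criterion: {mastery_rate:.0%}")
    print(f"Mean attitude: {mean(r['attitude'] for r in records):.1f}/5")
    print(f"Mean study time: {mean(r['minutes'] for r in records):.0f} min")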

Risk Assessment

In spite of the importance of formative evaluation, most instructional products in current use have never been systematically evaluated. The cost and time required are the two main deterrents to including formative evaluation in the instructional development process. A risk assessment can help to weigh the time and cost constraints against the consequences of making an inappropriate decision when adopting a technology-based learning product. Although most experts recommend a three-stage formative evaluation process, there is some empirical evidence in the literature (Wager, 1980b; Kandaswamy, 1976) suggesting that the small group evaluation can be eliminated without significantly affecting the overall effectiveness of the revised product.
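
The trade-off described above can be framed as a rough expected-cost comparison: what the evaluation costs up front versus the probability-weighted cost of adopting a flawed product without it. All figures and probabilities in the sketch below are placeholders, not empirical estimates.

    def expected_cost(evaluation_cost: float,
                      failure_probability: float,
                      failure_cost: float) -> float:
        """Up-front spending plus the probability-weighted cost of an ineffective product."""
        return evaluation_cost + failure_probability * failure_cost

    FAILURE_COST = 200_000  # assumed cost of deploying an ineffective course

    with_evaluation = expected_cost(15_000, failure_probability=0.05, failure_cost=FAILURE_COST)
    without_evaluation = expected_cost(0, failure_probability=0.30, failure_cost=FAILURE_COST)

    print(f"With formative evaluation:    {with_evaluation:>9,.0f}")
    print(f"Without formative evaluation: {without_evaluation:>9,.0f}")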

Although the importance of formative evaluation is well documented in the literature, the field is still underdeveloped and underconceptualized. There is a paucity of empirical foundations or rationales to support the guidelines and recommendations for the process. Research is needed to improve and validate the formative evaluation methodologies in current use, so as to lend more credibility to the formative evaluation process.


Canadian Guidelines for E-learning Quality Assurance


This entry is part 5 of 5 in the series Instructional Effectiveness

Baker (2002) has developed guidelines for assessing the quality of e-learning in Canada. These quality guidelines are generic and are therefore broadly applicable to any area and level of education. The following is a brief summary of these guidelines, adapted to the specific needs of this project; a simple audit sketch follows the lists below.

Learner management:

Instructional product/service information for potential learners is: 

  • Clear;
  • Current;
  • Accurate;
  • Comprehensive;
  • Complete; and
  • Readily available.

Advertising, recruiting and admissions information includes: 

  • Pre-requisites and entry requirements;
  • The program overview;
  • Specific delivery format;
  • Course level and credit points;
  • Course length and degree requirements;
  • Types of assignments and grading methods;
  • Learning assessment procedures and evaluation criteria; and
  • All applicable fees, if any. 

Registration procedures include: 

  • A clear statement of expectations of learners;
  • Intake and placement procedures that provide for individualized programming, assessment, and recognition of prior learning; and
  • An orientation procedure. 

Management of learners’ records:

  • Documents learners’ en-route and final achievement;
  • Ensures confidentiality of records; and
  • Gives learners access to their records.

Technological support for the delivery and management of learning is: 

  • Navigable;
  • Reliable;
  • Sensitive to bandwidth constraints of students;
  • Compliant with current technology and ICT standards;
  • Appropriate to the subject matter content and skills;
  • Appropriate to the intended learning outcomes;
  • Appropriate to the characteristics and circumstances of the learner;
  • Easily updateable and frequently updated;
  • Designed to promote active learning;
  • Designed to support prior learning;
  • Designed to support collaborative learning and social networking;
  • Designed to support flexible learning;
  • Designed to include assistive devices for persons with disabilities; and
  • Designed to assist learners to use the technology system for accessing the learning program. 

Learning assessment is:

  • Authentic;
  • Competency-based;
  • Valid and reliable;
  • Frequent and timely; and
  • Accompanied by immediate feedback to learners.

Instructional materials are: 

  • Designed and developed by experts;
  • Learner friendly;
  • Interesting in content;
  • Appealing to learners;
  • Well-organized;
  • Free of cultural, racial, class, age, and gender bias;
  • Accessible to those with disabilities;
  • Free from errors; and
  • Customizable and adaptable to learner needs and abilities. 

Learning content is: 

  • Credible with sources identified;
  • Accurate;
  • Relevant; and
  • Culturally sensitive.

Learning package includes: 

  • Course description;
  • Learning objectives;
  • Assessment and completion requirements;
  • Learning resources;
  • Course activities and assignments;
  • Quizzes and examinations; and
  • Access to answers for questions/quizzes. 

Appropriate and necessary personnel include: 

  • Qualified support staff with teaching experience and relevant work experience and/or current knowledge in the field;
  • Appropriate skills for teaching online; and
  • Process support persons. 

Continuous improvements based on routine reviews and evaluation of:

  • Learner support services;
  • Course content and objectives;
  • Learning materials;
  • Instructional design;
  • Student learning and student achievement;
  • Policies and management practices;
  • Operational procedures; and
  • Learner satisfaction.
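
For internal audits, the guideline categories summarised above can be stored in a simple structure and checked against a given course. The sketch below is a minimal illustration; the category keys, criteria wording, and sample self-assessment are condensed assumptions, not Baker’s (2002) full guidelines.

    # A condensed, illustrative subset of the guideline categories summarised above.
    GUIDELINES = {
        "learner information": ["clear", "current", "accurate", "readily available"],
        "technological support": ["navigable", "reliable", "bandwidth sensitive"],
        "learning assessment": ["authentic", "valid and reliable", "timely feedback"],
        "instructional materials": ["learner friendly", "free of bias", "accessible"],
    }

    def coverage_report(met: dict[str, set[str]]) -> None:
        """Print, per category, how many criteria a course currently satisfies."""
        for category, criteria in GUIDELINES.items():
            satisfied = met.get(category, set())
            missing = [c for c in criteria if c not in satisfied]
            status = f"{len(criteria) - len(missing)}/{len(criteria)} met"
            if missing:
                status += f" (missing: {', '.join(missing)})"
            print(f"{category}: {status}")

    # Hypothetical self-assessment of one course.
    coverage_report({
        "learner information": {"clear", "accurate"},
        "learning assessment": {"authentic", "timely feedback"},
    })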

Elements of Program Quality


This entry is part 2 of 5 in the series Instructional Effectiveness

Proponents of quality in technology-mediated learning or e-learning do not agree on what constitutes quality. Given the proliferation of e-learning, some attempts are being made to develop quality standards for the use of technology for teaching and learning. The purposes of e-learning quality standards are to:

  • Maximize the validity, integrity and portability of e-learning;
  • Ensure that resource development follows internationally accepted specifications and that the technologies and applications used to build and deliver the resources ensure the most consistent operation and the widest possible use and reuse of those resources; and
  • Facilitate interoperability of learning resources and systems, and remove barriers to e-learning (AFLF, 2013, p. 4).

E-learning quality standards also ensure that the learner will acquire content skills and knowledge that are relevant and transferable to real world situations (Baker, 2002).

The program design must be underpinned by a sound learning theory in order to ensure the effectiveness and efficiency of the instructional product. A well-designed product also:

  • Captures the interest of learners and motivates them to learn the material;
  • Gives learners a sense of ownership in their learning;
  • Ensures that learners are cognitively stimulated and engaged in the learning process;
  • Gives learners ample opportunity to practice the skills being learned;
  • Scaffolds and supports learning; and
  • Encourages the deployment of metacognition (AFLF, 2010, p. 4).

In defining the conditions of learning, the education theorist Robert Gagné proposed nine events of instruction, which activate the processes of information processing that support effective learning. Gagné’s (1965) nine events of instruction are commonly used as a basic framework for developing instructional materials (a short lesson-planning sketch follows the list below). These events are:

  • Gaining attention;
  • Informing the learner of the objective;
  • Stimulating recall of prerequisite learned capabilities;
  • Presenting the stimulus material;
  • Providing learning guidance;
  • Eliciting performance;
  • Providing feedback about performance correctness;
  • Assessing the performance; and
  • Enhancing retention and transfer. 
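
Used as a design framework, the nine events can serve as an ordered checklist to which planned activities are attached. The sketch below is illustrative; the sample activities are assumptions, not prescriptions from Gagné.

    # Gagné's nine events of instruction, in order (as listed above).
    GAGNE_EVENTS = [
        "Gaining attention",
        "Informing the learner of the objective",
        "Stimulating recall of prerequisite learned capabilities",
        "Presenting the stimulus material",
        "Providing learning guidance",
        "Eliciting performance",
        "Providing feedback about performance correctness",
        "Assessing the performance",
        "Enhancing retention and transfer",
    ]

    # Hypothetical activities for a short online unit, keyed by event.
    plan = {
        "Gaining attention": "Open with a two-minute scenario video",
        "Eliciting performance": "Interactive drag-and-drop exercise",
        "Assessing the performance": "Ten-item mastery quiz",
    }

    # Walk the events in order and flag any that still lack a planned activity.
    for i, event in enumerate(GAGNE_EVENTS, start=1):
        print(f"{i}. {event}: {plan.get(event, '-- no activity planned yet --')}")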

The use of systematic instructional design and development processes is important when developing instructional materials for augmenting analytical skills. A good program design includes the following steps:

  • Analyze learning needs;
  • Design instructional materials;
  • Develop instructional materials;
  • Evaluate instructional materials; and
  • Assess the learnability, effectiveness and efficiency of the instructional materials (AFLF, 2010).