- A Framework to Examine the Effectiveness of Computerized Training
- Elements of Program Quality
- Research Assessing the Effectiveness of ICT-Mediated Learning
- Developmental Testing of ICT-Mediated Learning Materials
- Canadian Guidelines for E-learning Quality Assurance
While a general consensus is emerging regarding the need to integrate ICTs in teaching and learning, there is little empirical evidence to support the decision-making process. In fact, over 350 research projects conducted during the past 70 years have failed to establish a significant difference in effectiveness between ICT and traditional methods (Baalen and Moratis, 2001). While these findings tend to suggest that ICTs do not considerably improve teaching and learning, the fundamental question that remains unanswered is: Were the researchers assessing the effectiveness of ICTs or were they simply assessing the effectiveness of instructional products that were less than perfect?
In spite of considerable progress made in the development of instructional materials through the adoption of systematic instructional design, practitioners still have difficulty producing efficient and effective instructional materials because our knowledge of human learning is still limited. Many of the critical assumptions made during the design and development of instructional products rest on learning theories that are weak, so the final product is less than perfect (Dick & Carey, 1990; Gagné & Briggs, 1979). Conscious of this inherent difficulty, and recognizing that the design process is not foolproof, instructional developers have included a formative evaluation component in their models (Geis, Weston, & Burt, 1984). The purpose of formative evaluation is to provide instructional developers with an opportunity to identify and correct errors and problems within a set of instructional materials while they are still in a developmental stage (Baker & Alkin, 1984). Formative evaluation is defined as the “evaluation of educational programmes while they are still in some stage of development” (Baker & Alkin, 1984, p. 230). Formative evaluation is: “the empirical validation of many of the theoretical constructs, which are included in earlier components of the instructional design model. If the theory is weak the product is less than properly effective. Since our present theories and practices are imperfect, we need empirical data as a basis for improving the product” (Dick, 1977, p. 312).
Formative evaluation of instructional material is an essential activity in the design and development of instruction, because there is no comprehensive theory of learning to guide practice (Nathenson & Henderson, 1980). Formative evaluation attempts to appraise such programs in order to inform program developers how to remedy deficiencies in their instruction. The heart of the formative evaluator’s strategy is to gather empirical evidence regarding the efficacy of various components of the instructional sequence and then weigh that evidence in order to isolate deficits and suggest modifications (Popham, 1975). The earliest attempts at trying out and revising instructional materials date back to the 1920s, with educational films and radio (Cambre, 1981). Formative evaluation activities address two broad questions. The first relates to the content and the technical quality of the material, and the second pertains to its learnability. Content and technical quality are addressed through expert verification and revision. It is generally believed that students themselves are the most qualified to provide the feedback data needed to assess learnability (Nathenson & Henderson, 1980).
Expert Evaluation and Revision
The use of expert opinion in assessing the worth of an instructional product is probably the oldest evaluation strategy used in education. Expert opinion is an important evaluation tool because it is quick, it is cost-effective, and it tends to enhance the credibility of an instructional product. Additionally, expert opinion can be used to modify a product before it is used by students. Several types of experts are commonly used in the evaluation process, namely: content, language, target population, cost, media, format, and delivery system experts:
- The content expert will ensure that the content is relevant, accurate and up-to-date.
- The language expert will ensure that the language is appropriate for the target population.
- The target population expert will ensure that the material is appropriate for the designated group that will be using it. If the target population is adult learners, then the expert will ascertain that the material being evaluated is in agreement with the basic principles, philosophies, assumptions, and established theories in adult education.
- The cost expert will focus on the cost-effectiveness of the proposed materials. Typical cost considerations include: capital costs, installation/renovation costs, time costs, support personnel, training, maintenance, cost of alternatives, as well as shared costs. The expert can also assess the societal costs of not implementing a technology-based product.
- The media expert will assess the particular characteristics of the learning technology in order to determine its appropriateness for addressing the learning needs of the target population.
- The format expert will determine if the material has been packaged to maximize its effectiveness and efficiency.
- The delivery expert will ascertain that the material meets standards established by best practices. The effectiveness of instructional material depends to a large extent on how well instructional developers have been able to support internal learning processes with external events.
Learner Verification and Revision (LVR)
Learner Verification and Revision (LVR) consists of a three-stage approach (Dick and Carey, 1985). These stages are: one-to-one evaluation, small group evaluation, and field test.
The one-to-one evaluation occurs in the early phase of development (Dick and Carey, 1985). It serves to “identify and remove the most obvious errors in the instruction, as well as to obtain the initial student’s reaction to the content” (p. 199). At least three students representative of the target population should be selected for this process: one with above average ability, another with average ability, and a third with below average ability. In a one-to-one evaluation the student is exposed to the instructional materials as well as to all pre-tests, post-tests and embedded tests within the material. The one-to-one evaluation is an interactive process between student and evaluator. Data are collected through observation, interview, embedded tests, post-tests, and an attitude questionnaire. The data can be used either for making on-the-spot revisions for minor problems or delayed revisions for more complex ones. The one-to-one evaluation can enable the developer to uncover gross misconceptions in information processing. Once these misconceptions are uncovered, the material can be easily modified to address the problems.
Small Group Evaluation
The second stage of formative evaluation is conducted with a group of eight to twenty students representative of the target population (Dick and Carey, 1985). The small group evaluation has two main purposes: to validate modifications made to the material following the one-to-one evaluation, and to ascertain whether the student can use the material without the help of the evaluator. The term “small group” refers only to the number of students involved in the evaluation process; it does not imply that all students should be assembled in one location and evaluated all at once. In a small group evaluation, the students are provided with all instructional materials and tests and are instructed to study the material at their own pace. The evaluator intercedes only if a major problem occurs that prevents the student from proceeding without help. After interacting with the materials and tests, the students are given an attitude questionnaire in order to obtain their reactions. Data gathered during the small group evaluation are used to further refine the instructional material.
Field Test
The field test, or summative developmental evaluation, is designed to verify the effectiveness of the verifications and revisions performed during the earlier phases of evaluation. Field testing also helps to ascertain whether the instructional material will function smoothly, and whether it will be accepted by students, teachers, and administrators in the intended setting (Dick and Carey, 1985). The focus of the evaluation is on the merit of the instructional product in terms of achievement, attitude, and study time.
In spite of the importance of formative evaluation, few instructional products in current use have been systematically evaluated. The costs and time required are two main deterrents to including formative evaluation in the instructional development process. A risk assessment can help to weigh the time and cost constraints against the consequences of making an inappropriate decision when adopting a technology-based learning product. Although most experts recommend a three-stage formative evaluation process, there is some empirical evidence in the literature (Wager, 1980b; Kandaswamy, 1976) suggesting that small group evaluation can be eliminated without significantly affecting the overall effectiveness of the revised product.
Although the importance of formative evaluation is well documented in the literature, the field remains an underdeveloped, underconceptualized area of inquiry. There is a paucity of empirical foundations or rationales to support the guidelines and recommendations for the process. Research efforts are needed to improve and validate the formative evaluation methodologies in current use, so as to give more credibility to the formative evaluation process.