Effects of constructed response contingencies in web-based programmed instruction on graphing compared to cued-text presentation of the same information

Material Information

Title:
Effects of constructed response contingencies in web-based programmed instruction on graphing compared to cued-text presentation of the same information
Physical Description:
Book
Language:
English
Creator:
Canton, Reinaldo L
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla.
Publication Date:
2005
Subjects

Subjects / Keywords:
Instructional technology
Teaching methods
World wide web
Instructional design
Academic behavior
Learner control
Computer-based instruction
Dissertations, Academic -- Educational Leadership -- Doctoral -- USF   ( lcsh )
Genre:
government publication (state, provincial, territorial, dependent)   ( marcgt )
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Abstract:
ABSTRACT: Web-based lessons teaching graph construction techniques were presented via the internet to 144 undergraduate and graduate college students. One group experienced program-controlled tutorials requiring them to construct answers in a defined sequence. A second group experienced identical lesson material in the form of typographically cued text presentations. The programmed instruction students performed significantly better than the cued-text group on an immediate computer-based posttest assessing comprehension of the graphing lesson material. The cued-text group performed better on an applied graphing assignment. The experiment did not account for individuals' internet study habits or the metacognitive approaches to learning employed by the study participants.
Thesis:
Thesis (Ph.D.)--University of South Florida, 2005.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Reinaldo L. Canton.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 117 pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001670389
oclc - 62330015
usfldc doi - E14-SFE0001259
usfldc handle - e14.1259
System ID:
SFS0025580:00001




Full Text


Effects of Constructed Response Contingencies in Web-Based Programmed Instruction on Graphing Compared to Cued-Text Presentation of the Same Information

by

Reinaldo L. Canton

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy
Department of Curriculum and Instruction
College of Education
University of South Florida

Co-Major Professor: James A. White, Ph.D.
Co-Major Professor: William A. Kealy, Ph.D.
Darrel E. Bostow, Ph.D.
Stanley B. Supinski, Ph.D.

Date of Approval: April 14, 2005

Keywords: Instructional Technology, Teaching Methods, World Wide Web, Instructional Design, Academic Behavior, Learner Control, Computer-Based Instruction

Copyright 2005, Reinaldo L. Canton


Dedication

This dissertation is dedicated to my children, Randy, Robby, Raegan, Risa and Ryan Canton, to my parents, Ildara and Reinaldo, and to my wife, Julie. My children, you are the breath in my lungs and the beat in my heart. Everything I do, I do for you. Mom and Dad, you made me who I am. You taught me that I could achieve that which I choose to strive for and to strive for the yearnings of my heart. Julie's unfailing support and belief in me through the many stages of the dissertation process sustained me over the years and enabled me to reach this goal. She has been patient with me in my moments of self-doubt and hesitation. She has spent countless hours alone during the past two years enduring the final stages of this process, often only watching from a distance as I ruminated over each word, each comma, and the wording of each phrase. Hers has been the kind of support that any dissertating student would wish for and more. Thank you, Julie, for everything. You are the light of my life. Today I emerge from this cocoon to celebrate and enter into a new phase of our relationship . . . without the dissertation.


Acknowledgements

The author wishes to thank those people whose invaluable assistance contributed to the completion of this study. Appreciation goes out to his fellow graduate students, not only those who participated in the study, but particularly Mike Cohen, Darrel Davis and "Gummi" Heimisson, who personally supported this undertaking, putting in their personal time to help gather the experimental data. Thanks are especially extended to his major professors, Dr. James White and Dr. William Kealy, for their advice and inspiration. Gratitude is extended also to Dr. Stanley Supinski, a mentor, advisor and friend. Finally, this dissertation would not have been possible without the expert guidance of the author's esteemed advisor, Dr. Darrel Bostow. Not only was he readily available, as he so generously is for all of his students, but he always responded quickly and honestly to the myriad of issues that the author presented in the process of this endeavor. Although not a man of many words, his oral and written comments were always extremely perceptive, helpful, and appropriate. Of course, despite all the assistance provided by Dr. Bostow and others, the author alone remains responsible for the content of the following, including any errors or omissions that may unwittingly remain.


TABLE OF CONTENTS

List of Tables
List of Figures
Abstract
Introduction
Literature Review
    The Experimental Analysis of Behavior
    Contingency of Reinforcement
    Programmed Instruction
    Feedback
    Learner Control
    Textual Learning
    Typographic Cueing
    Computer- and Web-based Instruction
    Theoretical Assumptions and Their Link to Specific Experimental Variables
    Reasoning for the Present Study: A Continuing Line of Research
Method
    Participants
    Apparatus
    Treatment Conditions
    Procedure
    Experimental Design and Data Analysis
Results
Discussion
    Limitations of This Study
    Implications of This Study
    Summary
References


Appendices
    Appendix 1. Screen Capture: Programmed Instruction
    Appendix 2. Screen Capture: Cued Text
    Appendix 3. Posttest Questions
    Appendix 4. Applied Graphing Assignment
    Appendix 5. Rubric for Applied Graphing Assignment
    Appendix 6. Expected Output: Applied Graphing Assignment
    Appendix 7. Post-Tutorial Questionnaire
    Appendix 8. Follow-up Online Questionnaire
    Appendix 9. Narrative Comments: Question #6
    Appendix 10. Sample PERL Code for PI Treatment
    Appendix 11. Sample HTML for Cued Text Treatment
    Appendix 12. Creating Computer Programmed Instruction
    Appendix 13. Treatment Assignment Notification
    Appendix 14. Reliability Calculations Templates
About the Author


LIST OF TABLES

Table 1. Description of the Independent Variable
Table 2. Text as a Teaching Medium
Table 3. Summary of the Experimental Conditions
Table 4. ANOVA and Means: Computer Posttest
Table 5. ANOVA and Means: Applied Graphing Task


LIST OF FIGURES

Figure 1. Distribution of Computer Posttest Scores (PI Group)
Figure 2. Box Plots of Computer Posttest Means
Figure 3. Distribution of Computer Posttest Scores (Cued Text Group)
Figure 4. Box Plots of Applied Graphing Task Means
Figure 5. Distribution of Applied Task Scores (PI Group)
Figure 6. Distribution of Applied Task Scores (Cued Text Group)
Figure 7. Questionnaire Responses by Treatment
Figure 8. Follow-up Questionnaire Responses


EFFECTS OF CONSTRUCTED RESPONSE CONTINGENCIES IN WEB-BASED PROGRAMMED INSTRUCTION ON GRAPHING COMPARED TO CUED-TEXT PRESENTATION OF THE SAME INFORMATION

Reinaldo L. Canton

ABSTRACT

Web-based lessons teaching graph construction techniques were presented via the internet to 144 undergraduate and graduate college students. One group experienced program-controlled tutorials requiring them to construct answers in a defined sequence. A second group experienced identical lesson material in the form of typographically cued text presentations. The programmed instruction students performed significantly better than the cued-text group on an immediate computer-based posttest assessing comprehension of the graphing lesson material. The cued-text group performed better on an applied graphing assignment. The experiment did not account for individuals' internet study habits or the metacognitive approaches to learning employed by the study participants. Responses on post-tutorial questionnaires revealed that many students copied screens and took notes, studying these materials immediately prior to the computer posttest and applied task, which were accomplished under controlled lab conditions.


INTRODUCTION

"I believe that consciousness is essentially motor or impulsive; that conscious states tend to project themselves in action." This excerpt from philosopher and educational theorist John Dewey's "My Pedagogic Creed" (Dewey, 1897) was later expounded upon in what could arguably be his most important work in the field of educational theory (Dewey, 1916). In "Democracy and Education," his assertion was straightforward: students learn by doing. Empirical support for this assertion, in the context of active response during instruction, has been afforded by substantial and mounting research in education and behavior. Using both group-comparison and single-participant experimental approaches, researchers have come to the same conclusion: learning is enhanced when the frequency with which students actively respond during instruction is increased (Bostow, Kritch, & Tompkins, 1995; Cronbach & Snow, 1977; Gropper, 1987; Kritch & Bostow, 1998; Kritch, Bostow, & Dedrick, 1995; Lunts, 2002; Rabinowitz & Craik, 1986; Rickards & August, 1975; Skinner, 1950, 1968, 1969, 1972; Thomas & Bostow, 1991; Tudor, 1995; Tudor & Bostow, 1991; Williams, 1996). In programmed instruction, this active response allows the learner to control the advancement of the tutorial, incrementally progressing through the lesson material and sequentially building up to the desired terminal behavior. "Learner control," in this behaviorist perspective, is defined in terms of reinforced response to a discriminative stimulus. This perspective holds that a student will learn as a result of being positively reinforced for having exhibited a specific observable behavior in a particular contingent situation (Skinner, 1969).

Education in general, and the cited research in particular, has gone through an evolutionary progression. Programmed instruction grew from verbal and paper-based programs of study to teaching machines that provided automated instruction and facilitated learning by providing for immediate reinforcement, individual pace setting, and active responding. The emergence of technology in the last century and its continued advancement have broadened the perspectives of educational research. Studies using computer-based methods for delivering programmed instruction (Bostow, Kritch, & Tompkins, 1995; Kritch & Bostow, 1998; Kritch, Bostow & Dedrick, 1995) have validated the significance of technology and its application in educational research and methods. A more recent influx in the field is the growing availability of high-speed, internet-based distance learning.

Despite these studies and the ostensible value of active learner response during instruction, much of what currently passes for computer- and web-based instruction does not use the basic contingency-response-feedback sequence. A learner can survey most web-based learning landscapes at his or her discretion, "clicking" hyperlinks here or there as desired, and advance to new material based upon his or her own criteria. Rather than progressing through a programmed course of material that focuses the learner's attention on the desired behaviors, the student is allowed to follow his own interests, potentially skipping material that may seem uninteresting, to advance without complete understanding, and so on (Butson, 2003). Part of the reason for this could be that evaluation of a learner's performance on a website is more difficult than in the traditional classroom environment. In the classroom, a teacher can observe a student's responses and facial expressions and provide more personalized instruction. This close student-teacher environment is a challenge to replicate in a web-delivered course, and it is easier for instructional web designers to build instructional material that is static and browse-able rather than material that provides feedback, as well as adjusted stimuli, based on learner response.

Perhaps a more critical reason, however, for a passive presentation of lesson material may relate to the creator's philosophy of instruction. The role and importance of program-delivered instruction and correction is possibly not well understood or, of possibly greater concern, even discounted. It is argued, on one hand, that the student must construct his own knowledge, while others maintain that control and guidance of the student in sequential, programmed steps of active response bring about more complete skills and capabilities. To date, these lines of reasoning have been tested and compared using paper-based lessons, teaching machines and, more recently, computer-based methods of instruction. The advent of personal computing and the exponential growth of educational technology have generated many questions as to how the computer can supplement, improve, or perhaps replace established teaching methodologies. The internet is becoming a large part of the educator's toolbox. Web-based offerings in many academic disciplines are redefining the educational landscape, and readily available high-speed access to the World Wide Web is shaping the field of distance learning.

In 1998, Kritch and Bostow studied the degree to which constructed-response interaction affected learning outcomes in computer-based programmed instruction. This study evaluated the importance of learner activity in computer programmed instruction. Four groups of undergraduate students experienced computer-delivered instructional programs, with varying degrees of interaction, which taught the use of a computer authoring language. Results revealed a clear superiority in both posttest and application performance for those students who experienced the high density of active and meaningful participation. Performance of the passive group was the poorest. The present systematic replication was developed, in part, to substantiate the reliability and generality of the Kritch & Bostow (1998) findings. Contributing to mounting empirical data, this study extends the line of research in the field of "constructed-response interaction" in computer-based programmed instruction. The present research, however, identified some potential deficiencies in Kritch & Bostow (1998) that helped to direct its development as a systematic replication. This study addresses the following questions:

• Are the results generalizable to different types of curriculum material?
• Did Kritch & Bostow (1998) account for the possibility of cueing in their high-density active group, compared to the text-based passive group?
• Would the technology available today, in terms of web-based instruction, have any effect on the results found by Kritch & Bostow (1998)?

To address the issue of generalization of the results, the present study changed the subject matter content and type of the lesson material. Kritch & Bostow (1998) presented a lesson in computer programming. The level of abstraction of the material presented was analyzed by applying Bloom's (1964) Taxonomy. While the outcome measures used by Kritch & Bostow (1998) tested the actual utility of the program produced by the participant students, the logical, sequential, analytical skills needed for computer language programming are identified in the "analysis" category of Bloom's (1964) Cognitive Domain. At this level, the learner is able to assess lesson material in its component parts so that its organizational structure may be understood. This skill may include the identification of the parts, analysis of the relationship between parts, and recognition of the organizational principles involved. By contrast, the lessons presented in the present study taught proper techniques for presenting data by way of graphing. Achievement of the terminal objectives was measured by the final product in the form of a hand-drawn graph, and by the results of a computer-administered test. While levels of analysis and recognition were still in play for these lessons, incorporating aspects of comprehension from Bloom's Cognitive Domain, the particular spatial and manual skills requisite in drawing a graph from given data can be attributed to the third and fourth categories, "precision" and "articulation," of the Psychomotor Domain. At this level, skill has been attained. Proficiency is indicated by quick, smooth, accurate performance, requiring a minimum of energy. The overt response is complex and performed without hesitation. In some cases the skills might be so well developed that the individual can modify movement patterns to fit special requirements or to meet a problem situation (Bloom, 1964). The present study varied the type and category of the lesson material presented using the active and passive treatments. This was intended to expand upon Kritch & Bostow (1998), thereby generalizing the results to more varied academic disciplines.

In previous research, the comparison between active response and passive reading harbored a basic flaw. Participants who actively responded to instructional frames by "filling the blank" may have been inadvertently "cued" to the critical material in the lesson. The passive readers, however, had no point of reference or clue as to the critical material in their lessons. Answers to the posttest questions might have been more easily recalled by students who had previously "filled the blank" than by those who were not "cued" to the crucial material in the lesson. In the present study, this issue of "cueing" was dealt with by a slight adaptation of the text-based, passive treatment condition. This adaptation entailed the identification of the key words and phrases in the text-based materials by means of italicized text. The Publication Manual of the American Psychological Association describes the appropriate use of italics to emphasize "a new, technical or key term or label." Thus, to overcome the possibly confounding variable of "cueing" in Kritch & Bostow (1998), the present study afforded the text-based passive learning group "cues" by the italicized emphasis of the key words and phrases in the material, as sketched below.
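
As an illustration of this adaptation, the following minimal Perl sketch shows one way key terms in an HTML lesson page could be wrapped in italic tags. It is a hypothetical reconstruction for this discussion, not the study's actual cued-text markup (see Appendix 11); the term list and lesson sentence are invented.

    #!/usr/bin/perl
    # Hypothetical sketch: italicize key terms in an HTML lesson page.
    # The terms and lesson text are examples, not the study's materials.
    use strict;
    use warnings;

    my @key_terms = ('frequency', 'cumulative record', 'ordinate');

    sub cue_terms {
        my ($html, @terms) = @_;
        for my $term (@terms) {
            # Wrap each occurrence of the term in <i>...</i> tags,
            # preserving the original capitalization via $1.
            $html =~ s/(\Q$term\E)/<i>$1<\/i>/gi;
        }
        return $html;
    }

    my $lesson = 'Plot the response frequency on the ordinate of the graph.';
    print cue_terms($lesson, @key_terms), "\n";
    # Prints: Plot the response <i>frequency</i> on the <i>ordinate</i> ...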

The question of delivery method derived from Kritch & Bostow (1998) led the present study to bring the lesson presentation up to date. The Internet is the biggest, most powerful computer network in the world. It includes 1.3 million computers used by millions of people in over fifty countries. As connections to the Internet have increased and the availability of high-speed service has grown, educators have more possibilities to overcome time and distance to reach students. Distance learning is the "new frontier" of education. The present study focused and modernized the question of constructed response and its effect on learning by presenting the lessons using the World Wide Web as the medium of delivery. Two web-based tutorials, one using programmed instruction and the other using text- and graphics-based web pages, were employed to deliver identical lesson content teaching the methods of measuring and graphically recording active human behavior. For this study, programmed instruction is defined as the use of technology to deliver educational course material in sequentially arranged contingencies of reinforcement. This process, using computer- and web-based apparatus, enhances the paper-based teaching machines of the late fifties and early sixties. After completing the online lessons, the participants' performance was assessed by directly observed, overt responses. The expected terminal performances for the tutorials in this instruction were 1) the appropriate selection from a variety of optional methods and visual arrays, 2) the formatting of data recording sheets appropriate to the behavior and setting, and 3) accurate selection of the proper recording method.
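
To make the contingency arrangement concrete, the sketch below outlines a web-based programmed instruction frame loop as a Perl CGI script, the language used for the PI treatment (see Appendix 10). It is a minimal hypothetical reconstruction, not the actual treatment code: the frame text, accepted answers, and parameter names are invented, and the real tutorial's sequencing and answer checking may differ.

    #!/usr/bin/perl
    # Hypothetical sketch of a constructed-response frame loop.
    # Each frame presents a stimulus, requires a typed (constructed)
    # response, and advances only after a correct answer: the
    # contingency-response-feedback sequence described above.
    use strict;
    use warnings;
    use CGI;

    my @frames = (
        { text   => 'The horizontal axis of a graph is called the _____.',
          answer => 'abscissa' },
        { text   => 'The vertical axis of a graph is called the _____.',
          answer => 'ordinate' },
    );

    my $q        = CGI->new;
    my $n        = $q->param('frame') // 0;         # current frame index
    my $response = lc($q->param('response') // '');
    $response =~ s/^\s+|\s+$//g;                    # trim whitespace

    my $feedback = '';
    if ($response ne '') {
        if ($response eq $frames[$n]{answer}) {
            $feedback = 'Correct.';
            $n++;                                   # advance only when correct
        } else {
            $feedback = 'Try again.';               # re-present the same frame
        }
    }

    print $q->header('text/html');
    if ($n < @frames) {
        print "<p>$feedback</p><p>$frames[$n]{text}</p>",
              "<form method='post'>",
              "<input type='hidden' name='frame' value='$n'>",
              "<input type='text' name='response'> ",
              "<input type='submit' value='Answer'></form>";
    } else {
        print "<p>$feedback You have completed the tutorial.</p>";
    }

Because the program, not the learner, holds the frame index, lesson advancement stays under program control, which is the "active / program controlled" cell of Table 1 below.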

"Instructional Method" was the independent variable for this study. This variable had two levels: "active" Programmed Instruction and "passive" Cued-Text and Graphics. Inherent in each of the two methods of web-based presentation are distinct levels of learner participation and control of lesson advancement. For the present experiment, Programmed Instruction represents "active" learner participation and "program advanced" lesson material. Learner participation in the Cued-Text and Graphics presentation is distinguished by "passive" reading of the lesson material and "learner advanced" lesson materials. Table 1 describes the relationship between the conditions, as well as the learner participation and lesson control assumptions in the independent variable.

Instructional Method (Web Delivered)   Learner Participation   Lesson Advancement
Programmed Instruction                 ACTIVE                  PROGRAM CONTROLLED
Cued-Text & Graphics                   PASSIVE                 LEARNER CONTROLLED

Table 1. Description of the Independent Variable (Instructional Method)

To evaluate the relation between instructional method and performance, two dependent variables were identified in the present study. Both dependent variables were assessment results. The first was a computer-based posttest that measured the student's retention of the lesson material, and the other was a learned skill application that appraised the student's ability to utilize the skill sets learned by actually assessing a set of data and presenting it graphically as taught by the lesson.

The present research expounds upon theories of learning stemming from an experimental science. To make use of the rapidly growing field of web-based distance learning, the focus was to identify and validate a crucial component of interactive computer-programmed instruction. The study centered on a fundamental research question: In two types of web-based tutorials, distinguished by the existence of constructed response contingencies, is there a significant difference in performance outcome, based on learner participation and the control of lesson advancement? Specifically, "Will teaching method be related to graded outcome on a computer-based test?" and "Will teaching method be related to outcome on the graded results of an applied task?"
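
The dissertation later evaluates these questions with analysis of variance (Tables 4 and 5). As background, the sketch below computes the one-way ANOVA F statistic for two independent groups; the score arrays are invented placeholders, not the study's data.

    #!/usr/bin/perl
    # Minimal one-way ANOVA for two groups (F = MS-between / MS-within).
    # The scores below are illustrative placeholders, not the study's data.
    use strict;
    use warnings;
    use List::Util qw(sum);

    sub mean { sum(@_) / @_ }

    sub anova_f {
        my @groups = @_;                            # list of array refs
        my @all    = map { @$_ } @groups;
        my $grand  = mean(@all);
        my ($ssb, $ssw) = (0, 0);
        for my $g (@groups) {
            my $m = mean(@$g);
            $ssb += @$g * ($m - $grand) ** 2;        # between-group SS
            $ssw += sum(map { ($_ - $m) ** 2 } @$g); # within-group SS
        }
        my $df_b = @groups - 1;
        my $df_w = @all - @groups;
        return ($ssb / $df_b) / ($ssw / $df_w);
    }

    my @pi_scores   = (88, 92, 79, 95, 84);   # placeholder posttest scores
    my @text_scores = (75, 81, 70, 86, 78);
    printf "F(1,%d) = %.2f\n", @pi_scores + @text_scores - 2,
           anova_f(\@pi_scores, \@text_scores);

With only two groups, this F is equivalent to the square of the two-sample t statistic, so either analysis answers the "significant difference" question posed above.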

LITERATURE REVIEW

Science renders knowledge public through the application of experimental investigation, both quantitative and qualitative. In the field of educational research, this investigative study manifests itself as historical, qualitative, descriptive, correlational, causal-comparative or experimental research. The scientific community self-regulates and provides for internal checks and balances by making use of processes such as peer review, cooperative research, journal publication and such appraisal mechanisms as meta-analysis and systematic replication. A systematic replication repeats or duplicates a previous experiment, varying a number of conditions, such as task, setting or other parameters of the basic procedure. In systematic replication, the same hypothesis or hypotheses are tested again, using different participants and specific differences in methods. Obtaining similar results in the replicated study provides evidence of the generality of the original findings, by the principle of converging evidence (Durso & Mellgren, 1989; Kerlinger, 1986).

In the present research, Kritch and Bostow (1998) is systematically replicated with variations in 1) the method of lesson delivery (lab computers vs. web-based presentation), 2) the type of learning involved in the lesson content (the logical, sequential analysis skills needed for computer language programming vs. the spatial and manual skills requisite in drawing a graph from given data) and 3) the identification to the participants in both groups of key lesson concepts (overt, constructed response vs. passive, italicized cued-text). It should be mentioned that, while not a specific modification of the previous study, the general technological background, in particular the computer literacy, of the participants in this study is conceivably higher. Computers and technology represent a paradigm shift in academic media, and today's students are increasingly more exposed to technology than students of only a few years past. The present research is logically related to, and imparted a different perspective into, the experimental conditions undertaken in Kritch and Bostow (1998).

The Experimental Analysis of Behavior

The approaches employed in the present research stem from lessons learned in an experimental approach to learning. They are based in what has been called "the experimental analysis of behavior" (EAB), a phrase coined by B. F. Skinner (1969, 1972) to address a specific category of the natural sciences. This category refers to the functional interactions between directly measurable behaviors and specific historical and immediate environments. The EAB presupposes that the formation and behavior of organisms are a result of natural selection, i.e., evolutionary processes (Skinner, 1969). According to the behavioral perspective, learning is identified as a permanent change in behavior due to experience or practice. The focus of this approach is on how overt behavior is affected by the learning environment (Huitt & Hummel, 1998). Predictable interactions between the behavior of living organisms and environmental variables are referred to as "functional relations." Johnston and Pennypacker (1980) describe a "functional relation" as variation in responding that is a direct function of variation in a specific aspect of the environment. This is not to imply a "cause and effect" association, but rather to demonstrate how observed environmental and behavioral events occur collectively in distinct ways under specific conditions. In the experimental analysis of behavior, "behavior" is defined as "any directly measurable thing an organism does" (Sulzer-Azaroff & Mayer, 1991). And, as Skinner (1969) characterized it, it is a measurable change in the status of an organism. For precise measurement, behavior must be identified objectively as an observable occurrence, open to thorough scientific analysis (Cooper et al., 1987). Likewise stated, behavior is not a mere "expression" of other processes, but rather a unit of measurement: "An emphasis upon the occurrence of a repeatable unit distinguishes an experimental analysis of behavior from historical or anecdotal accounts" (Skinner, 1969).

The Contingency of Reinforcement

From the point of view of the EAB, the "contingency of reinforcement" is held to be the core of the process through which most practical behavior develops (Skinner, 1968). There are three variables that compose a contingency of reinforcement under which learning takes place: 1) an occasion upon which behavior occurs, 2) the behavior itself, and 3) the consequences of the behavior (Skinner, 1968). The term "contingency" was initially understood as something similar to "contiguity," where events closely precede, follow, or coincide with one another. However, an if/then, behavior/consequence dependency is not necessary for the consequence to have a strengthening effect upon the behavior. All that is necessary is contiguous occurrence (Skinner, 1969). In the process of operant reinforcement, precursor or concurrent stimuli attain the capacity to increase the likelihood of the behavior's occurrence in the future. Laboratory research suggests that learning does not occur by merely watching or even performing, as Aristotle asserted; operant behavior is modified only when significant consequences are involved (Skinner, 1938). Simple execution does not determine behavior and make it more likely to occur again; "practice" on its own does not "make perfect." The most apparent implication obtained from the operant laboratory is this: to strengthen behavior, i.e., to increase the probability of its future occurrence, the behavior must be both emitted and then reinforced (Skinner, 1969).
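
The three-term contingency just described can be restated compactly as data in the same language used for the tutorial sketches above. This is an illustration of the occasion-behavior-consequence structure as it maps onto a single instructional frame; the frame content is hypothetical.

    #!/usr/bin/perl
    # Illustrative only: one instructional contingency of reinforcement
    # expressed as data. Occasion = the frame presented, behavior = the
    # constructed response emitted, consequence = the program's reaction.
    use strict;
    use warnings;

    my %contingency = (
        occasion    => 'Frame: "Behavior must be both emitted and then _____."',
        behavior    => 'Learner types "reinforced"',
        consequence => 'Program confirms the answer and presents the next frame',
    );

    print "$_: $contingency{$_}\n" for qw(occasion behavior consequence);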

To recapitulate, the experimental analysis of behavior presupposes that the basic building block of most behavior is the "contingency of reinforcement." It is the key "learning unit" of the process of instruction (Skinner, 1968). The term "reflex" has never been a satisfactory means of expression to account for most behavior. Practical everyday behavior (which could arguably be called the motivation of nearly all instruction) is operant behavior, not respondent. The functional relations of operant behavior are those central to the process of instruction. Therefore, to skillfully develop behavior, the teacher must be able to correctly identify and arrange reinforcement contingencies (Skinner, 1968). To be appropriately referred to as a "contingency," a situation must consist of the environment, behavior, and a strengthening consequence. Instructional technologies can be at their most powerful when they present carefully arranged, sequential contingencies of reinforcement. Contingencies are deemed "programmed" when they are arranged in a tight, well-planned sequence. During this sequence, behavior is gradually strengthened and brought under the control of stimuli through the process of differential reinforcement of successive approximations. This organization of sequential contingencies is called "programmed instruction" (Skinner, 1968, 1969).

Educational practices have been greatly shaped by increased knowledge about operant conditioning. All learners exhibit behavior. Educators are, by definition, behavior modifiers as a result of their influence in the classroom. Behavioral studies in classroom settings have established methods to organize and arrange the physical environment and lesson presentation to produce desired academic behavior. Programmed instruction is one such method. Programmed instruction requires that learning be done in small steps, with the learner being an active participant (rather than passive), and that immediate corrective feedback be provided at each step (Huitt & Hummel, 1998).

Programmed Instruction

Programmed instruction, in the simplest terms, is a teaching technology that features educational practice resulting from laboratory and applied research in the area of the experimental analysis of behavior. Some of the practices derived include active student responding, priming, prompting, fading, and shaping. Educational content is said to be "programmed" when constructed, as Burton (1996) quotes B. F. Skinner, "of carefully arranged sequences of contingencies leading to the terminal performances which are the object of education." (Incidentally, the Center for Programmed Instruction offers a free, hands-on demonstration, in a brief, web-based tutorial at http://www.centerforpi.com/cgi-local/WhatIsPI_MainMenu.pl, that provides a concise introduction to this teaching technology.) As a teaching technology, PI has its roots in behavioral science, which is now entering its ninth decade (Burton, 1996). Developed from Skinner's "teaching machine" concepts, PI established its effectiveness across disciplines and was once the preferred method for teaching. The evolution of so-called cognitive learning theories has not boded well for the theories of behaviorism, which have been misrepresented and even excluded from contemporary programs of study. Programmed instruction has, however, been established as an effective method of instruction.

Boden (2000) reviewed 30 independent studies comparing programmed instruction to conventional teaching methods. Using meta-analytical techniques, Boden integrated the findings from these studies to make evident that programmed instruction results in higher student achievement. The primary focus of Boden's study was to find a correlation between class size and achievement. However, no significant correlation was found. Nevertheless, an increase was noted in the effect size for this study compared to a previous meta-analytical study. This increase was partially attributed to more effective use of programmed instruction in more recent years. The essence of the results of this study is that programmed instruction was more effective than conventional methods of instruction.

Despite many years of popular use, and the continued improvement in the effectiveness of its application, programmed instruction has become anathema to some. While getting a couple of conceptual details correct, Slavin (2000) appeared to misrepresent programmed instruction as an impractical approach to instruction. He expressed several points to identify PI as "self-instructional," condemning it for establishing a setting where "students are expected to learn (at least in large part) from the materials, rather than principally from the teacher." And despite previous research into the use and effectiveness of PI, Slavin (2000) opined, "the programmed instruction techniques that were developed in the 1960s and 1970s generally failed to show any achievement benefits." Continuing his analysis, Slavin alleged "programmed instruction methods have not lived up to expectations..." and blamed the "expense and difficulty of using programmed instruction" as the reason why "this strategy is seldom used today as a primary approach to instruction."

Notwithstanding the potential influx of criticism from advocates of nonbehaviorist approaches, Bostow, Kritch & Tompkins (1995) discussed the interaction of learners as being more significant in cases where the learner must "overtly" respond. This overt response, or behavior, is strengthened with successful interaction and results in increases in motivation for student and teacher. These interactions involve "learning units," which are described in behavioral terms as reinforcement contingencies.

Recognizing the evolution and expansion of computers in the classroom, Bostow, et al. (1995) pointed out several areas where computers can make dramatic improvements, but emphasized the need for highly disciplined application of the various techniques of programmed instruction. Referring to computers as "modern-day teaching machines," they pointed out that, while the computer is an instrument with the potential for delivering differential reinforcement in programmed instruction, software is developed for aesthetic and commercial appeal instead of tapping into the vast potential of these machines. Bostow, et al. also suggested the use of computers as testing devices, to make test administration and scoring easier, and to improve the security of test information. Their conclusion was that the actual instruction itself could be accomplished by a properly designed program of instruction. "Computers as teachers" can work if the programmer/teacher is not only well versed in the tenets of programmed instruction, but also possesses an understanding of a science of behavior. Programming the course content into effective programmed instruction allows the computer to "teach" and frees the instructor for direct student contact and mentoring. To his credit, Slavin (2000) properly described the "learning units" mentioned above, identifying the reinforcement contingencies as "small subskills." Slavin went on to illustrate the frequent and immediate feedback associated with programmed instruction "so that students can check the correctness of their work," and conceded "similar approaches are quite common in computer-based instruction."

The concept of "overt response" or "active student responding" was studied more closely by Tudor (1995) in an experiment that evaluated the effects of overt answer construction in computer-based programmed instruction. This study incorporated practical application in addition to the statistical analysis of the data. Tudor pointed out that previous research had not generated convincing support for the need to use overtly constructed responses, citing issues with consistency of instructional programs across studies. Testing methods were also referred to, as well as program quality and prior familiarity with subject matter. Tudor proposed, "the rules that might guide the designer of better instructional software cannot be easily extracted from past research." For this study, 75 students were placed into one of five groups to receive programmed instruction, teaching the development of frames for PI, at varying levels of student interaction with the materials. All the groups showed significant improvement from pre-test to posttest. The groups performed progressively better as the level of student interaction increased. The result was that this student interaction, be it in the form of overt or covert answer construction, produced 13% better performance on a fill-in-the-blank posttest and a better grasp of the concepts when later applied to constructing PI frames. Tudor pointed out that the differences are comparatively larger than in previous programmed instruction research and may have educational importance. The question raised addresses the functional significance of the behaviors an instructional program is designed to produce: "Can teachers design frames that actually change behavior? In other words, can students use a washing machine correctly after completing a program?" Tudor recognized a need for future studies to identify "behavior change produced by interactive instruction."

A significantly smaller sample participated in Tudor's (1995) study to isolate the effects of active responding in computer-based instruction. The four students in this experiment worked through a set of programmed instruction that alternated between frames with blanks that required overt answer construction and all-inclusive frames without blanks. Every one of the students produced a higher percentage on posttest questions that corresponded to program segments that called for construction of overt answers. Regardless of the small sample, this study does confirm the importance of active responding in the effectiveness of instructional programs.

The "constructed response contingency" can be associated with the "generation effect" studied in depth by Rabinowitz and Craik (1986). The generation effect suggests that verbal material that is actively generated (such as the overtly constructed response) during the presentation of lesson material is later recalled more readily than material that is simply read. Study participants either read or generated target words in the presence of particular "generation" cues. The recall of the target words was studied using variation in the cues. In the instructional phase, when the target words were generated, prompted by associatively related or rhyming cues, an observable generation effect was noted when the posttest used similar "retrieval" cues. This effect was not noted with weak relations between the cues and the targets. Semantic similarities between the cues and the targets did tend to yield an observable generation effect. Rabinowitz and Craik (1986) suggested not only that there was a strengthened memory of the generated target word, as a result of direct guidance by an associated cue word, but that the generation enhanced information specific to the cue-target relationship. The information used to guide the generation process for the learner is what is enhanced by generating, as compared to reading. This study substantiated the need for both associative and semantic origins of the cue words or phrases used in developing effective programmed instruction.

The word "interactive" has become a commercial selling tool for software developers and a selling point for hardware manufacturers. For the domain of educational technology, interactivity should refer to the behavior of the learner (Kritch, 1995). Kritch also addressed the theme of constructed response by learners using interactive computer-based instruction. In a double-pronged experiment, Kritch confirmed recent studies that identified the need for constructed answers in the application of instructional programs. This study confirmed the greater effectiveness of "constructed-response" when compared to "click-to-continue" or "passive viewing" formats, corroborating Tudor (1991) and Thomas (1991). In a second experiment, internal to the study, Kritch (1995) upheld the findings of the first experiment using a counterbalanced (ABAB-BABA) design with a sample from each of the three groups from the first experiment, identifying high, moderate and low ability students. Effectiveness in the first experiment was measured by the posttest achievement of 101 college students. Not surprisingly, the achievement results for the constructed-response group were significantly different from the click-to-continue and passive observation groups. Results for the latter two groups were not significantly different. "Supplying missing words in frames required students to read more slowly, carefully, and to reread frames." Results of the second experiment in this study "confirm that active construction promotes recall and evidence indicates that programmed instruction is appropriate for all student ability levels" (Kritch, 1995). The study identifies itself as "a first step in the search for currently established (especially practical) functional relations," i.e., getting the student to properly operate a washing machine through the use of programmed instruction.

In 1998, Kritch and Bostow extended the available research and literature in the arena of programmed instruction by revisiting the issue of functional relations among varying levels (densities) of constructed-response contingencies. One hundred fifty-five undergraduate students were presented with a lesson in the use of a computer program authoring language, by way of programmed instruction at three levels of constructed-response contingencies (high, low and zero) to which the students were randomly assigned. Student achievement was measured with a computer-delivered posttest, and also by an evaluation of practical application of the relevant applied skill (authoring program code). The students in the high-density condition produced higher achievement scores in both forms of assessment. The results of this study support the position that increased interactivity (as a function of student behavior) produces increased learning. The suggestion for future research advocates a closer examination of the relation between increased constructed-response contingencies and outcome measures, perhaps using a finer continuum of varying densities of "learning units."

The concept of programmed instruction, evolving and adapting since it was derived from the tenets of a science of behavior, nearly a century in the process, has found new and effective application through the use of computer-based and, more recently, web-based instruction. The ongoing improvements in computer and communication technologies have opened new avenues for the precepts of behavioral analysis in education. The present study endeavored to refine this research by essentially replicating the conditions, using new lesson content with a new presentation medium, to evaluate the study's generality.

Feedback

Examining how feedback functions within a wide variety of learning domains is the first recommendation offered by Mory (1996). Overt standards such as concept acquisition, rule use, and problem solving are identified as sources for researchers to explore. Unfortunately, this article on feedback research also charges the reader to analyze cognitive aspects such as learner motivations and attitudes, focusing on difficult-to-measure ideas such as "tenacity, self-efficacy, attributions, expectancy and goal structure." Mory asserted, "no learning would occur unless some type of feedback mechanism was at work," identifying feedback as carrying out a crucial purpose in the acquisition of knowledge. Across the varied learning paradigms that the field of education has to choose from, feedback, as a part of instruction, remains a constant.

In 1995, Azevedo and Bernard synthesized twenty-two studies in a meta-analysis to investigate the effect of feedback in computer-based instruction. Azevedo (1995) put forward that the concept of feedback as reinforcement in the stimulus-response model is now outdated, leaning toward more contemporary cognitive perspectives. This study did, however, concur with the idea that feedback is a critical component of instruction. Azevedo cited variations in types of feedback ranging from "the very simple issuing of right-wrong statements," as presented in the programmed instruction condition of the current research, to more elaborate corrective statements. Adaptive feedback was also mentioned as a progression developed to adjust to the individual learning needs of students. The meta-analysis focused on the relative effectiveness of feedback in general across various computer-based instruction environments. Four previous meta-analyses in the general area of feedback were identified (1991, 1988, 1983, and 1982), only one of which (1983) examined the effects of feedback on learners in computerized and programmed instruction; it found a medium effect size of .47. Since that study included paper-based as well as computer-based instruction, Azevedo gives good reason for studying the pure effects of feedback in computer-based instruction with a new meta-analysis. The "new" meta-analysis that Azevedo presented indicates an overall weighted effect size of .80, suggesting that achievement outcomes were greater for the feedback group than the no-feedback group. Concurring with Mory (1996) and sharing in the general consensus that feedback is one of the most critical components of CBI, Azevedo's analysis attributed the higher learner achievement to the large effect size for the feedback group. However, Azevedo identifies potential flaws in his analysis, due to the number of rejected studies. This "bespeaks the somewhat methodologically weak state of research in the area" (Azevedo, 1995).
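
For readers unfamiliar with the metric, the effect sizes quoted above (.47 and .80) are standardized mean differences. A common form, given here as general background rather than as Azevedo and Bernard's exact computation (their weighting scheme is not reproduced in this text), is Cohen's d:

    d = \frac{\bar{X}_T - \bar{X}_C}{s_p},
    \qquad
    s_p = \sqrt{\frac{(n_T - 1)s_T^2 + (n_C - 1)s_C^2}{n_T + n_C - 2}}

where \bar{X}_T and \bar{X}_C are the treatment (feedback) and control (no-feedback) group means, s_p is the pooled standard deviation, and n_T and n_C are the group sizes. By Cohen's conventional benchmarks, .47 is a medium effect and .80 a large one.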

In general, the value of feedback cannot be overlooked in the design of computer-based instructional materials. Feedback can guide the learner through a tutorial, prompting correction and review, and in some cases encourage the motivation to successfully continue. As presented in the third leg of the S-R-R method, feedback offers the discriminative reinforcement necessary to shape learner behavior toward the objectives of the particular lesson.

Learner Control

"Learner control," a concept that is readily described in terms of autonomy and independence, is generally defined as an instructional delivery system "where learners make their own decisions regarding some aspect of the 'path,' 'flow,' or 'events' of instruction" (Williams, 1996). After reviewing many analyses that compared learner control to program control in CBI, Williams pointed to the disappointing empirical findings, which did not show learner control to be superior to program control in computer-based instruction. He later asked, "Can a comprehensive, integrative, deductive, prescriptive and testable theory of learner control be developed?" His impression, that such a theory may not be scientifically disproved by a valid deductive argument, led to an alternative question. He suggested that we ask "whether we can still develop instructional prescriptions for the use of learner control which are at least pragmatic and are grounded in some reasonable psychological and educational principles." In this, Williams was optimistic and cited several reviews that indicate examples of application of the concept of learner control.

Perhaps not quite as optimistic is the critique presented by Reeves (1993), which puts forward the premise that learner control research is pseudo-science because, being contrasted to program control, it does not meet major theoretical and methodological assumptions generally accepted in the research methodologies of the scientific, quantitative paradigm. Learner control is, in his characterization, a "design feature of computer-based instruction that enables learners to choose freely the path, rate, content and nature of feedback in instruction." Reeves cited poor definition of the concept of learner control: the definition seems clear and important, but it is so loosely applied in practice that it means very little. While clearly identified as a scientific construct, as a matter of scientific study the concept of learner control must be well defined and readily measurable. Reeves also referred to the brevity of the instructional treatments used in various studies of learner control. Interaction times of 29 minutes 4 seconds and 29 minutes 6 seconds were noted, and other studies reported average treatment times of 25 to 30 minutes, ranging as low as 13 minutes, across the various presentation conditions (hypertext, computer-assisted instruction, programmed instruction, etc.) in which learner control was being studied. This Reeves contrasted to the guidance of Cronbach and Snow (1977) that ten or more separate interactive sessions were necessary to acquaint students with innovative instructional treatments. "How," Reeves asks, "can a dimension as complex as learner control be expected to have an effect in one session treatments lasting less than an hour?" A second criticism of the research into learner control pointed to a lack of consequential or relevant outcome measures.

The participants in learner control research, he stated, should be engaged in learning that is meaningful on a personal basis and has real consequences for them. He also addressed issues of small sample sizes and the concern over exclusion of participants who correctly answered all questions in the interactive session, raising the question as to whether the participant really "experienced" the treatment variables. Reeves did suggest some new directions. Primarily, he proposed that researchers and graduate students improve their understanding of contemporary philosophy of science. This would expose us to a larger spectrum of approaches to scientific inquiry. He also suggested that researchers change the questions they are asking to determine why the field is not moving forward. Reeves noted, "without observations of the whole system of interrelated events, hypotheses to be tested could easily pertain to the educationally least significant and pertinent aspects, a not too infrequent occurrence," opining that such is the case in learner control research.

The disappointment in learner control theory was identified as a matter of definition and measurement of learner control by Lunts (2002), who published a very comprehensive review of learner control research. Stating the frustration in finding valid, reliable instruments to assess quantity and quality of learner control, Lunts also acknowledged Reeves (1993) regarding the short duration of student exposure to the experimental treatments in various studies. Despite the brief encounters with the treatments, a few studies were mentioned that present a positive effect on achievement, but the author warned that the optimistic findings should be interpreted with caution, specifying the varying effects of content, sequence and advisory control, the three major components of learner control. Studies were identified that make reference to intrinsic motivation and self-determination. Lunts' article actually classified learner control research into three primary categories: "those that did not find any effect of learner control on students' motivation and attitudes toward learning, those that found a positive effect, and those that found a negative effect."

One of the studies identified by Lunts (2002) that did not find any effect of learner control on students' motivation and attitude was Cho's (1995) research studying the nature of the cognitive processes that learners use under the conditions of learner-controlled and program-controlled environments. The qualitative aspect of the study, wanting for a scientific basis of measurement, was fuel for Reeves' position that learner control research is at best a pseudo-science. Regardless, the study collected student data through 1) a self-reported questionnaire providing data such as SAT scores, student experiences with HyperCard learning and lesson content knowledge, 2) audio and videotapes presenting participants' learning "behavior" during the HyperCard instruction, 3) recorded verbal data acquired from participants' think-aloud, stimulated recall, and interview data, 4) learning paths and time on task recorded by the HyperCard program, and 5) estimates of learning outcomes from the results of posttests. Cho indicated that learners' cognitive processes did not differ much between the learner control and the program control groups. It would be reasonable to imagine Reeves asking, "How exactly did you validate the measurement of the 'cognitive processes' of the participants in this study?" This study is representative of the perceived shortcomings of learner control research.

Perhaps in response to Ehrmann's (1995) call for a "guiding light" to piece together all the great ideas in educational technology, Molenda (2002) attempted to shine his light on "A New Framework for Teaching in the Cognitive Domain." Combining the best of all worlds, Molenda identified programmed instruction, cognitive psychology, Gagné's (1985) Events of Instruction and constructivist influences to synthesize an inclusive framework that more unambiguously pointed toward the growing consensus that "meaning-making" (constructing?) is at the heart of cognitive learning. The impression, however, from learner control research is that more is needed with regard to standardization, not only in measurement instruments, but also in identifying what aspects of behavior are valid, effective sources for measuring learner control.

Textual Learning

Traditionally, in providing new information and curriculum material to students, texts have always had a very prominent place in education. The written word is a historical standard in teaching and an accepted method of transmitting information. Siemens (2003) recognized text as the venerable backbone of learning. The majority of learners are quite comfortable with text-based learning, perhaps because of the many years spent using this medium. Table 2 (Siemens, 2003) summarizes the pros and cons of text as a teaching medium for the web.

Positives          Negatives                  Use for outcomes
Surveyable         Not much specialization    Simple to complex
Easy to produce    Overused                   Suited to synthesis/evaluation
Low bandwidth      Passive                    Reflection (due to time lag)
Familiar           100% learner motivation
Many readers       Time lag

Table 2. Text as a Teaching Medium

Text-based learning and memory retention based on isolation, the setting of a text item apart, have been studied at length in the academic setting (Rickards & August, 1975; Fowler & Barker, 1974; Cashen & Leicht, 1970). The isolation effect (Cashen & Leicht, 1970) indicates an improvement in item recall when text in a reading of course-related materials is set apart by underlining. Additionally, students retained material in the texts adjacent to the highlighted materials and showed higher recall than students in the non-highlighted treatment tested on the same material. Fowler & Barker (1974) assessed highlighting of text as an alternative to typographical cueing (capital letters, italics, and colored fonts) to determine its effectiveness in improving retention. In this study, the experimenter-highlighted (EHL) group performed slightly better than their control group (no highlighted material). The study concluded that highlighting, as well as traditional underlining, could produce improved retention of text material. Primarily studying the difference between student highlighting and experimenter highlighting, Rickards & August (1975) did find that material highlighted by the students fell lower on the rating scale than those materials of high structural importance identified by the experimenter. Better student recall of experimenter-highlighted text was noted. Techniques used by the programmer (teacher) in the construction of educational material to cue key information can lead to better recall and improved learning. Wegner & Holloway (1999) posited that the role of the instructor becomes one of preparing the instructional environment, anticipating the needs of the students in advance and providing contingencies. Instructors become Socratic questioners, resource providers and motivators.


Glynn, Britton, and Tillman (1985) reviewed studies on the effect of typographic cueing on learning. Typographic cueing, which generally refers to the use of bold or italic type or underlining, is used to signal the important ideas in a text. There is little doubt that this kind of cueing works in focusing attention on the cued material. The consensus is that readers are more likely to remember cued ideas than uncued ideas (Hartley, 1987). Students who attend to textual cues are better able to comprehend, organize, and remember information presented in texts than those who do not (Manitoba Education, Citizenship and Youth, 2001).

Dyson and Gregory (2002) attempted to extend the existing research on text-based cueing to typographic cueing on computer-presented material. They identified that one of the underlying assumptions behind typographic cueing is that the cued material is more likely to be noticed by the reader. The general consensus emerging from the literature is that typographic cueing can improve the recall of cued material. Dyson and Gregory highlighted either key phrases or whole sentences that referred to main facts or incidental details in the lesson material. While the study did not find a significant difference between the experimental conditions and their control, there were differences among the various cueing conditions. These differences suggested that cueing an entire sentence can hinder overall recall, but cueing specific details is helpful.

Typographical cueing devices, such as font and color, help users assess the importance of the information they read and employ these keys in understanding and recalling the material. Within the content of a given lesson, the presented text is not a homogeneous structure in which all concepts have equal importance. The ideas often follow a pecking order and usually contain central and subordinate elements. Highlighting techniques (or directive cues), such as italics, color, or underlining, can draw the reader's attention to these key parts of the text. This typographic cueing can direct and guide the reader through the lesson material and contribute to the recall of key information (Allen & Eckols, 1997).
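In web-delivered lessons, this sort of directive cueing can be applied mechanically once the key terms are known in advance. The sketch below is illustrative only; the sentence and cue list are hypothetical stand-ins, not material from any published lesson. It simply wraps each cue word in an HTML italic tag:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical lesson sentence and its cue words (illustrative only).
    my $sentence = "The vertical axis, or ordinate, shows the dimension of behavior being measured.";
    my @cues     = ("vertical axis", "ordinate");

    # Wrap each cue in <i> tags so it is typographically set apart on the page.
    for my $cue (@cues) {
        $sentence =~ s/\Q$cue\E/<i>$cue<\/i>/g;
    }

    print "$sentence\n";
    # Output: The <i>vertical axis</i>, or <i>ordinate</i>, shows the dimension ...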


Headings, margin notes, or content markers give structure and organization to the material. They also lend a general organization to the text that helps the reader understand the content and integrate new material with existing knowledge. The Center for Learning, Instruction, & Performance Technologies at San Diego State University (Allen & Eckols, 1997) notes that the human eye is responsive to changing stimuli. Thus, boldface type set within a paragraph, or an italicized note within the text, will stand out from the rest of the display and draw the reader's focus. Drawing the learner's focus in this way meshes well with Gagné's (1985) suggestion that gaining the attention of the student is the first step in successful instruction.

Computer- and Web-based Instruction

Computer-based and, more recently, web-based instruction (CBI and WBI) have been incorporated and applied in many endeavors of transferring information for training and instruction. Realizing the Web's full potential for learning is the vision of many educators, but this realization is still hampered by various obstacles, among them appropriate pedagogical practices (Fisher, 2000) and the bandwidth bottleneck (Saba, 2000). With regard to evaluation, there has been an inclination toward environmental comparison, such as the effectiveness of a technology relative to the conventional classroom (Wisher & Champagne, 2000). However, an appropriate assessment could be a comparison of the effectiveness of WBI to the historical findings on the effectiveness of conventional CBI.


Unlike the fixed resources of conventional CBI, web-based instruction can be easily modified and redistributed, readily accessed, and effectively linked to related sources of knowledge. Compare these features to, say, an educational CD-ROM, where content is encoded in its final form, availability is limited to specific computers, and immediate access to a vast array of related materials, as available through the Internet, is not possible. Of course, key instructional features, such as learner control and feedback, are shared between web-based and conventional CBI. When well-designed instruction is coupled with computer delivery, the potential exists for improvement in learning.

"Behaviorism has had the greatest impact on the use of technology in education" (Thompson, Simonson, & Hargrave, 1996). However, the field of education has moved away from the behavioral approach and begun to focus on internal processes that take place in learners. The upsurge of technology development and application in the field of education is encouraging, yet the focus on developing the most well-designed instruction may be veiled by a misunderstanding of the principles of Applied Behavior Analysis. "Constructivism" is the contemporary buzzword for ideas in educational research, theory, and policy (Duffy & Cunningham, 1996). Phrases such as "flexible navigation," "richer context," "learner centered," and "social context of learning" populate the literature on web-based instruction. Despite the proven and enduring nature of the behavioral approach in educational settings, proponents of the constructivist paradigm are quick to characterize any approach other than their own as promoting passive, rote, and sterile learning. This shift puts a large stress on the issue of measurability since, by definition, the processes supposedly involved in constructivist ideas are internal and not readily observable. Mergel (1998) summarizes behaviorism, cognitivism, and constructivism and their histories in instructional design.


"Eclectic" is the word used to describe the recommended approach to merging and applying the knowledge and insight garnered from each of the learning theories. This may be the first glimmer of the "guiding light" that Ehrmann (1995) suggests. The application of modern technology as a bridge between behavioral theory and ideas from the various newer educational theories could be the first step in developing an effective, proactive method of course content presentation. The anticipated result is a sound approach that, when applied in the field of WBI, will benefit both the learner and the educator in terms of effectiveness and learner retention.

Much of the existing research in technology and education reflects an interest in multimedia environments. Increasingly, however, this research is focusing on the consequences of technology in education, with studies that take into account diverse educational theories. Ehrmann (1995) sought to synthesize some of the research on technology in the classroom and concluded that one problem, ostensibly, is that individual efforts in the field of technology application can be quite effective, but for the educational community to benefit, there must be some "guiding light" to piece together all the great ideas. This light, or "roadmap," could give structure and direction to the blossoming efforts of many in instructional technology, a field that is developing in leaps and bounds. Both Clark (1983) and Kozma (1991) support the idea that some structure is appropriate, particularly to study which teaching/learning strategies are best (chiefly those not feasible without newer technologies) and which technologies are best for supporting those strategies.

Theoretical Assumptions and Their Link to Specific Experimental Variables

Techniques for developing and shaping behavioral repertoires have been acquired and established through the application of the Experimental Analysis of Behavior.


It seems the crucial factor concerning these techniques is the presence of contingencies of reinforcement. Preceding research using text-based programmed instructional materials showed that learning takes place when what is emitted is subsequently reinforced (Holland, 1976). The instructional contingency (composed of stimuli that compel an overtly constructed response, upon which the learner receives immediate reinforcement for being correct) represents the essential juncture at which strengthening takes place. The research cited here (using computer-programmed instructional materials) has made evident the influential effects of instructional contingencies.

The experimental question is this: If the instructional contingency is indeed the critical factor in the learning process and, in laboratory-controlled computer-based instruction, the existence of instructional contingencies has previously been shown to relate directly to how much or what is learned, does the process generalize to other media of learning and to other types of learning? To answer this question, the present experiment contained two versions of a web-based lesson: one presented as a stringently controlled set of programmed instruction, and the other as a set of text- and graphic-based web page presentations. Previous research using computer-programmed instruction has not compared these conditions using the World Wide Web as the medium for presenting the lesson content.

Reasoning for the Present Study: A Continuing Line of Research

Perhaps confusion about the instructional principles derived from the scientific analysis of behavior has prevented their widespread use in the field of instructional technology. If these procedures and techniques were clearly understood, developers of instructional programs could begin to make the most of computer and web-based technologies to reinforce learner-constructed responses by applying pertinent knowledge of contingencies of reinforcement within computer- and web-based instruction.


The field of instructional technology, and educational research in general, can reap benefits from the extended study of how contingencies of reinforcement and improved achievement are related. Such research would, as its primary objective, investigate the practical relation between the learner's behavior and the method of delivery of lesson content. The present research was a follow-up to the preceding review of germane literature, its suggestions, and continued research in the field of computer- and web-based programmed instruction. Foregoing research has made a strong case that the presence of instructional contingencies entailing overt, constructed responses generates higher achievement as measured by post-treatment examinations. Additionally, such contingencies may produce an effective motivational environment. The available research that has endeavored to relate constructed-response contingencies in computer-programmed instruction to practical implementation has also shown favorable results.

A significant difference in the present study is the identification by italics, in the text- and graphics-based treatment, of the key words or phrases targeted in the program-control treatment. These key words and phrases were identified by the constructed-response contingencies in the programmed instruction tutorials. Identifying and emphasizing the key information in the passive treatment afforded the participants in that group the benefit of the retention and learning identified by previous research on isolation. By italicizing the salient words or phrases in the text- and graphics-based treatment, the present study generalized research on isolation and the setting apart of text (Rickards & August, 1975; Fowler & Barker, 1974; Cashen & Leicht, 1970) to the typographical cueing inherent in italicized text.


Although constructed responding has previously been compared to mouse clicking, key tapping, and passive reading, the specific contingencies (text or phrases) eliciting the constructed response have not been highlighted in the compared methods. The purpose of the present experiment was to analyze the functional relation between constructed-response contingencies in web-based programmed instruction tutorials and two outcomes: 1) achievement measured by a computer-based posttest, and 2) the extent to which students can later apply the target skills. Using web-based media, the experiment compared the relative effectiveness of constructed-response programmed instruction with passive reading of instructional materials. The research studied the correlation between learner "interaction" (overt, constructed responses elicited by instructional reinforcement contingencies) in programmed instruction and academic achievement. This extension of the existing research as a systematic replication stems from the emergent application of the World Wide Web as a teaching tool and offers a new perspective on the use of programmed instruction within the field of instructional technology. Besides including a traditional posttest evaluation and an applied performance measure, the study investigated the relation of several demographic characteristics of students to the research results using correlational analyses. The study also included a survey to explore how participants viewed the instructional conditions and how they adhered to the plan of instruction for each treatment.


METHOD

Participants

One hundred forty-four graduate and undergraduate education majors from an educational foundations course at a large state research university located in the southeastern US served as participants. Programmed instruction was used to deliver all course content except for the lessons delivered in the present study. Sixty-nine percent of the participants were female. The lessons presented in the experiment were a part of the course requirements, and the students were advised that their participation would not have a detrimental effect on their class grade. Lesson content was based on Applied Behavior Analysis (Cooper, Heron, & Heward, 1987). Students were randomly assigned to experimental conditions by a computer program.

Apparatus

The World Wide Web was used to deliver the instructional programs. Students could access the tutorials from anywhere they had access to the Internet. Students were instructed to complete only the lessons provided, without note taking or printing of the materials for off-line study. The instructional program used to present the programmed instruction was constructed using Practical Extraction and Report Language (PERL) version 5.8.3. PERL is Open Source software; it can be downloaded for free as source code or as a pre-compiled binary distribution. PERL's process, file, and text manipulation facilities make it particularly well suited for tasks involving database access, graphical programming, networking, and World Wide Web programming. The instructional design principles and techniques prescribed for computer-based programmed instruction in the program Creating Computer Programmed Instruction (Kritch & Bostow, 1994) were used to create the instructional program (see Appendix 12).
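As a rough sketch of the computer-program assignment step mentioned under Participants (the study's actual program is not reproduced here, and the roster names below are hypothetical), the following PERL fragment shuffles a roster of 232 names, the sample size indicated by the first-day rosters, and alternates assignments between the two conditions:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use List::Util qw(shuffle);

    # Hypothetical roster; the real study drew names from first-day class rosters.
    my @roster = map { "student_$_" } 1 .. 232;

    # Shuffle, then alternate assignments so the two groups stay near-equal in size.
    my @shuffled = shuffle(@roster);
    my (@pi_group, @ct_group);
    for my $i (0 .. $#shuffled) {
        if ($i % 2 == 0) { push @pi_group, $shuffled[$i] }
        else             { push @ct_group, $shuffled[$i] }
    }

    printf "PI: %d students, CT: %d students\n", scalar @pi_group, scalar @ct_group;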


Treatment Conditions

An 11-set instructional program about graphing data for behavior analysis was developed prior to the conduct of the present study. The content for these lessons was drawn from the text Applied Behavior Analysis (Cooper, Heron, & Heward, 1987) and used consistently across the two treatment conditions. These lessons were field-tested and revised using data from 4 graduate assistants in the Department of Psychological and Social Foundations.

The Constructed Response, Programmed Instruction (PI) Condition. The 11 tutorials presented in the program-controlled treatment contained 359 instructional contingencies, each providing a screen (see Appendix 1), or "frame," of instructional material with one or more blanks to be filled in by typing an overt, constructed response at the keyboard. One hundred twenty-nine of these frames presented the user with a graphic image representing a particular relevant concept being taught by the lesson material. The PI program contained a total of 374 blanks within the 359 instructional frames, each requiring the student to supply a constructed response. Two hundred twenty-eight of these blanks contained formal prompt letters, and 120 blanks contained no formal prompting. Of the 228 blanks that contained formal prompting, 124 were discrimination frames that required the user to construct an echoic response. In other words, these frames provided alternative choices (within parentheses) that the user was to construct at the keyboard. Alternative choices were not represented by a symbol the user had to type, and hence were not considered to be traditional multiple-choice items. There were 17 frames that required only the typing of "true" or "false" for the lesson to proceed.


Due to programming limitations, however, the PI program included 9 traditional multiple-choice items in which at least two alternatives were presented. Here, the topography of the response involved typing a single letter symbol (e.g., a, b, c) representing one of the alternative choices, instead of constructing an echoic, intraverbal, overt response. When a participant typed the correct response, the computer displayed "CORRECT!" in a green font at the center of the screen and asked the user to "Press Enter or Click to Continue." The program then presented the next frame. If the answer given to a frame was incorrect, the program displayed "INCORRECT." in a red font at the center of the screen, displayed the correct answer, and presented the student with a "Continue" button for the next frame.

The Passive Response, Cued Text (CT) Condition. (see Appendix 2) The second condition consisted of zero-density constructed-response presentations. Students experiencing this treatment were not required to respond overtly to any constructed-response contingencies. The material was divided similarly into 11 separate chunks, each with lesson materials identical to the corresponding instructional set from the programmed instruction materials. The "chunks" were presented on 11 individual cued-text and graphics-based web pages with approximately 33 sentences per page. The lesson material was duplicated from the PI condition, but all blanks were filled in. The key lesson information, which required a constructed response in the PI treatment, was typographically cued for the participants by the use of italics: each word that represented a correct constructed response was presented in italicized text to implement the desired cueing. Participants read each instructional set, arranged in the same linear order with the identical corresponding graphics, but passively tapped the spacebar or clicked the mouse to return to the menu to select the next page. The pages for the instructional sets, however, could be viewed in any order.
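The contingency loop just described might look, in outline, like the following command-line sketch. This is a simplification for illustration, not the study's CGI program, though the sample frame and answer are drawn from the posttest items in Appendix 3:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # One frame: stimulus text with a blank, plus its correct answer.
    my %frame = (
        text   => "A sequence of connected measurements is called a data _____.",
        answer => "path",
    );

    print "$frame{text}\n> ";
    my $response = <STDIN>;
    chomp $response;

    # Compare the constructed response to the key, ignoring case and stray spaces.
    $response =~ s/^\s+|\s+$//g;
    if (lc($response) eq lc($frame{answer})) {
        print "CORRECT!\n";    # shown in a green font in the actual program
    } else {
        print "INCORRECT. The answer is: $frame{answer}\n";
    }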


A 54-item fill-in-the-blank posttest of the lesson material and an application task were constructed to evaluate the degree of skill acquired by students in each of the treatment conditions. The answers were directly related to the key concepts taught by the instructional programs. Each posttest item evaluated a particular aspect of the lesson material, whether on the nature of scientific data or on the accepted rules for creating a graph suitable for journal publication. For example: "A second _____ axis is sometimes used to show different scales for multiple data paths." (The correct answer in this case was "vertical.") The second dependent variable was the student's achievement on an applied task using the graphing skills learned from the lesson materials. This application of knowledge required the student to analyze a set of behavioral observation data and, given graph paper and pencil, represent the data series using the rules and structures learned in the tutorial.

Procedure

To avoid exposure to the specific content of the lesson material, which could have compromised the study by divulging key concepts before its initiation, a pretest was not included in the present experiment. During the third week of class, students in each of the four participating classes were randomly assigned to one of the two treatment groups. Each participant was individually notified of treatment assignment by email (see Appendix 13) through the course website managed with WebCT. This email correspondence included specific instructions and provided an internet link. Students were instructed to complete the tutorials for the lessons on "Graphing in Applied Behavior Analysis" during the fourth week of class.


Each participant was scheduled for a two-hour appointment for a "Graphing Quiz" at the computer lab after the tutorial presentation week. Consideration was made in the schedule for the approximately 70 distance learners in the courses, allowing these students to choose a two-hour time frame during the week of testing. The schedule was posted in the "Bulletin Board" area of their course WebCT site. Assessment occurred from 11:00 AM through 4:00 PM during the week immediately following the tutorial administration.

The random assignment was done using the original rosters from the first day of class. These rosters indicated a sample size of 232 students; however, after the first week of class, a number of students had exited the course through the university drop/add process. This attrition resulted in slightly unequal groups. Students completed their assigned tutorials from the location of their choice, accessing them through the Internet. Eighteen students reported that they had not been given a link to begin the graphing tutorials; after confirming their treatment group, the experimenter immediately sent a new notification email to these individuals. The students proceeded as planned, and 144 was the final tally of students completing the tutorials.

The following Monday, students began to report to the computer lab at their scheduled appointment times. The experimenter ushered each participant to a randomly assigned computer station, and each participant was given a brief overview of the testing procedure. The participants were first administered a computer-based 54-question examination (see Appendix 3) of key concepts and material from the graphing lessons. The lab manager monitored the computer lab throughout the testing phase of the experiment.


After completing the computer-based test, participants were directed to a separate classroom where another proctor administered the applied task assignment. A scenario describing the gathering of particular behavioral data was given to each student. The proctor then presented each participant with a sheet of graph paper, a pencil, and directions. Directions printed on the assignment asked each participant to assess the data and, using the skills learned from the preceding lessons on graphing, make a proper graph (or graphs) for the data presented (see Appendix 4).

The posttest consisted of 54 fill-in-the-blank items in a frame-by-frame presentation similar to the programmed instruction that all students were familiar with from quizzes on other course material. Validity of the testing instruments was endorsed by subject matter experts (SMEs), ensuring that knowledge of the lesson content was truly measured by the items of the test. Employing an objective approach to validation, the SMEs used the text from which the lessons were created (Cooper, Heron, & Heward, 1987) to compare and validate the test items. The representational acceptability criteria for each test item were derived from an analysis of the text content and used to assess the relevance and validity of each instrument. Both the computer posttest and the applied graphing task met the criteria derived by the SMEs for instrument validity.

The computer posttest recorded each response, the time taken to complete each item, and the percent correct score for each participant. However, students were not informed of their posttest scores (on either the product or the computer test), to minimize post-experiment discussion with other students and to avoid influencing participant motivation before the applied graphing skill assessment. To test the internal reliability of the computer-based posttest, the Kuder-Richardson 20 (Borg & Gall, 1989) coefficient was calculated post hoc and yielded a value of .87.
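For reference, KR-20 for a k-item, dichotomously scored test (here, k = 54) is computed from the item pass proportions and the total-score variance:

    KR_{20} \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i\,q_i}{\sigma_X^{2}}\right)

where p_i is the proportion of examinees answering item i correctly, q_i = 1 - p_i, and \sigma_X^2 is the variance of the total scores.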


A 27-question rubric was designed to identify the required elements of the graphing assignment. To determine and maintain rater reliability for scoring the assignment, the services of an external assistant, unfamiliar with the graphing lessons, were enlisted. The assistant scored a random sample of 25 products using the product rubric sheet and key (Appendices 5 & 6). Her scores were compared with those of the experimenter, who scored products using the identical product grade sheet, and 100 percent agreement occurred. The assistant then scored every tenth product, yielding the same agreement. The rubric was clear, and explicit requirements were specified to identify key aspects taught in the lessons. The Kuder-Richardson 20 test for internal reliability was also calculated post hoc for items on the applied task rubric, and a reliability coefficient of .85 was obtained.

Upon completion of the applied graphing assignment, participants were administered the post-tutorial questionnaire (see Appendix 7). This questionnaire attempted to assess student attitudes regarding the experiment, their computer skill, and their satisfaction with their method of instruction. The questionnaires were anonymous with the exception of treatment group identification. As a follow-up, five additional questions were asked of the participants via an online survey using the capabilities integral to the WebCT course management software (see Appendix 8). These questions were posed to validate the results from the initial questionnaire.

Because appointments were scheduled at the same location throughout the week, discussion between students was anticipated. Therefore, each participant was given a "debriefing" immediately after completing the computer-based test. This interaction briefly described the importance of conducting educational research and asked participants not to discuss the experiment until later, when results would be provided.


Experimental Design and Data Analysis

Two one-way analyses of variance (ANOVA) were employed to evaluate differences in computer-based posttest scores and applied graphing products resulting from the experimental comparison conditions (Borg & Gall, 1989). A multivariate analysis of variance (MANOVA) was also performed to assess the interaction between the two dependent variables (computer posttest results and applied task results) across the two independent variable groups (PI condition and cued-text condition). The MANOVA, however, revealed no interaction effects among the variables (Pr > F: <.0001). Data for evaluation came from the PERL program, which recorded percent correct scores on the computer posttest, from the rubric-scored applied graphing products, and from the questionnaires administered after completion of all lesson and evaluation materials. Data records were assembled into summary charts used for the SAS statistical program. Table 3 summarizes the experimental conditions, response contingencies, and stimuli presented to the two independent variable groups.

Programmed Instruction Treatment            Cued Text Treatment
Constructed response                        No constructed response
Overt responses to all frames;              Passive web-page reading;
  program advanced                            advanced at student discretion
11 total tutorials                          11 total web pages
359 total frames                            359 total frames (sentences)
374 total blanks                            0 total blanks
359 frames requiring overt responses        0 frames requiring overt responses
228 blanks w/ formal prompting
  124 discrimination frames
  104 partial word prompts
120 blanks w/o formal prompting
9 multiple-choice frames
17 true-false frames

Table 3. Summary of the Experimental Conditions
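The one-way ANOVAs reported in the next section reduce to a ratio of between-group to within-group variance. As a minimal sketch of that computation, with hypothetical score arrays standing in for the study's data (the study itself used SAS):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use List::Util qw(sum);

    # Hypothetical posttest percent-correct scores for the two conditions.
    my @pi = (45, 52, 38, 61, 40, 33);
    my @ct = (30, 41, 35, 28, 39, 36);

    sub mean { sum(@_) / @_ }

    my $grand = mean(@pi, @ct);
    my ($m_pi, $m_ct) = (mean(@pi), mean(@ct));

    # Between-groups and within-groups sums of squares.
    my $ss_between = @pi * ($m_pi - $grand)**2 + @ct * ($m_ct - $grand)**2;
    my $ss_within  = sum(map { ($_ - $m_pi)**2 } @pi)
                   + sum(map { ($_ - $m_ct)**2 } @ct);

    my $df_between = 1;                    # two groups
    my $df_within  = @pi + @ct - 2;
    my $f = ($ss_between / $df_between) / ($ss_within / $df_within);

    printf "F(%d,%d) = %.2f\n", $df_between, $df_within, $f;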


RESULTS

Results of the ANOVA on posttest scores revealed significant differences between groups, F(1,142) = 5.67, p = .0186. Table 4 presents the ANOVA results and posttest means for the instructional conditions. In all statistical comparisons, a minimum alpha level of .05 was applied in assessing statistical significance.

Source    DF    Sum of Squares    Mean Square    F Value    Pr > F
Model      1       1473.92063      1473.92063       5.67    0.0186
Error    142      36919.79937       259.99859
Total    143      38393.72000

Treatment (Computer Posttest)    N        Mean          SD
1 (PI)                           69    40.8521739    17.6234103
2 (CT)                           75    34.4480000    14.6121234

Table 4. ANOVA and Means, Computer Posttest

Distributions of the posttest scores for each group are illustrated in Figure 1 and Figure 3. The scores on the posttest for the PI group ranged from a high of 83.3% to a low of 7.4%. Scores for the Cued Text group ranged from a high of 79.6% to a low of 11.1%.

[Figure 1. Distribution of Computer Posttest Scores (PI Group): histogram of percent correct (0-100) by frequency (0-20).]


Box and whisker plots in Figure 2 indicate the positive relationship between exposure to lesson materials requiring constructed responses and participants' performance on the posttest. The PI group had a higher mean score (M = 40.85, SD = 17.62) than the CT group (M = 34.45, SD = 14.61). Programmed instruction, supplying contingencies of reinforcement that require overt constructed responses, is thus associated with higher posttest percent correct scores. These results were analyzed using a software-based effect-size tool (Devilly, 2004).

[Figure 2. Box Plots of Computer Posttest Means: percent correct on the computer-based posttest for the PI and CT treatments.]
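The text does not report the effect size itself. Assuming the Devilly (2004) tool computed a standardized mean difference (Cohen's d), the Table 4 means and standard deviations give roughly d = 0.40, a small-to-moderate effect; note that the numerator of the pooled variance equals the error sum of squares in Table 4:

    s_{pooled} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
               = \sqrt{\frac{36919.80}{142}} \approx 16.12

    d = \frac{M_{PI} - M_{CT}}{s_{pooled}} = \frac{40.85 - 34.45}{16.12} \approx 0.40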


[Figure 3. Distribution of Computer Posttest Scores (Cued Text Group): histogram of percent correct (0-100) by frequency (0-30).]

Results of the ANOVA calculated from the applied graphing scores did not reveal significant differences between groups, F(1,142) = 0.01, p = .9206. Table 5 presents the ANOVA results and product score means for the two instructional conditions. The Programmed Instruction group produced a mean (M = 51.10, SD = 18.59) nearly identical to that of the Cued Text group (M = 51.41, SD = 18.21). The PI group scores on the applied graphing task ranged from a low of 3.7% to a high of 85.2%. The Cued Text group scores ranged from a low of 7.4% to a high of 81.5%.

Source    DF    Sum of Squares    Mean Square    F Value    Pr > F
Model      1          3.37206        3.37206        0.01    0.9206
Error    142      48018.72016      338.16000
Total    143      48022.09222

Treatment (Applied Graphing)    N        Mean          SD
1 (PI)                          69    51.1043478    18.5849436
2 (CT)                          75    51.4106667    18.2073313

Table 5. ANOVA and Means, Applied Graphing Task

The near-identical performance of the two treatment groups on the applied graphing task is represented by the box and whisker plots in Figure 4.


[Figure 4. Box Plots of Applied Graphing Task Means: percent correct on the applied graphing task for the PI and CT treatments.]

Distributions of the applied graphing task scores for each group are illustrated in Figure 5 and Figure 6.

[Figure 5. Distribution of Applied Task Scores (PI Group): histogram of percent correct (0-100) by frequency (0-20).]


[Figure 6. Distribution of Applied Task Scores (Cued Text Group): histogram of percent correct (0-100) by frequency (0-20).]

The post questionnaire consisted of 15 questions. The first two were for group identification only. Questions 3-15 were categorized as follows:

Questions 3, 7, 11: A - Personal assessment of computer skills
Questions 4, 8, 12: B - Satisfaction with teaching method
Questions 5, 9, 13: C - Personal assessment of learning environment
Questions 4, 8:     D - Personal assessment of reading/retention skills
Questions 14, 15:   E - Personal assessment of adherence to tutorial instructions

[Figure 7. Questionnaire Responses by Treatment: mean 5-point Likert ratings (1 = disagree, 5 = agree) for question categories A-E, plotted separately for the Programmed Instruction and Text Based treatment groups.]
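Each category score plotted in Figure 7 is, in effect, the mean of its items' ratings. A minimal sketch of that aggregation, with a hypothetical response vector (category D is omitted because its item numbers overlap with category B in the list above):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use List::Util qw(sum);

    # Category map from the questionnaire described above.
    my %category = (
        A => [3, 7, 11],    # computer skills
        B => [4, 8, 12],    # satisfaction with teaching method
        C => [5, 9, 13],    # learning environment
        E => [14, 15],      # adherence to tutorial instructions
    );

    # Hypothetical 5-point Likert responses, indexed by question number.
    my %response = (3 => 4, 4 => 2, 5 => 3, 7 => 5, 8 => 3,
                    9 => 4, 11 => 4, 12 => 2, 13 => 3, 14 => 5, 15 => 4);

    for my $cat (sort keys %category) {
        my @items = @{ $category{$cat} };
        my $mean  = sum(map { $response{$_} } @items) / @items;
        printf "Category %s mean: %.2f\n", $cat, $mean;
    }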


The graph in Figure 7 indicates the relationship between the treatment groups across the question categories. The largest difference between the treatment groups (0.8 for the Cued Text group and 1.3 for the Programmed Instruction group) was in "Satisfaction with Teaching Method." Self-reported non-compliance with tutorial instructions was distributed across the two groups (16/67 in the PI group and 25/66 in the Cued Text group). A post-hoc analysis of relationships among the questionnaire responses yielded a chi-square of 0.0344 and a correlation factor of 0.0354. While the plotted responses to questions in Category B suggested a possible difference, the post-hoc analysis revealed little or no evidence of a relationship between the questionnaire variables.

The follow-up questionnaire yielded similar results between treatment groups, as indicated in Figure 8. Seven participants, five from the programmed instruction group and two from the cued text group, indicated on Question #2 that they studied with a partner during the tutorial presentation phase. One participant, from the cued text group, indicated on Question #3 that he or she found a way to experience both treatments. Fifty-six participants indicated, in their responses to Question #1 of the follow-up questionnaire (Appendix 8), that they did additional studying for the graphing lessons; they elaborated in their responses to Question #6. Narrative comments are included (Appendix 9) for those participants who indicated that they did some form of additional studying beyond completing the assigned lessons and outside the scope of the instructions for the experiment.


Follow-up Questions                                                PI      CT
1  Tutorials only, no additional studying? (1 = Yes, 2 = No)      1.2     1.3
2  Confidence in graphing ability after tutorial?
   (Likert: 1 = low, 5 = high)                                    1.6     1.4
3  Did you work with a partner? (1 = Yes, 2 = No)                 1.9     2.0
4  Programmed Instruction or Text Based? (1 = PI, 2 = CT)         1.0     2.0
5  Preferred instructional format? (Tally)
     P.I.            10 (15%)    26 (39%)
     Web Text        18 (27%)     3 ( 5%)
     Lecture         26 (39%)    24 (35%)
     Group Study      6 ( 9%)     6 (10%)
     1-on-1 Tutor     7 (10%)     7 (11%)
6  Narrative expounding upon Question 1 (Appendix 9)

Figure 8. Follow-up Questionnaire Responses


DISCUSSION

The existence of constructed-response contingencies in web-based instruction is related to higher achievement on computer-based posttests. This finding generalizes previous results in the area of computer-based instruction to the delivery medium of the World Wide Web. These results contribute to the line of research that has identified a correlation between active, overt responding and higher achievement. Additionally, the results generalize some findings of Kritch and Bostow (1998) to course content that falls in a different category and domain of Bloom's (1964) Taxonomy. Although the performance task administered to each group, in the form of a computer-based posttest, was at the verbal information (Gagné, 1985) or knowledge level (Bloom, 1964), the value of the constructed-response contingency was validated for a verbal information/knowledge level outcome measure.

The results did not prove as conclusive, or as supportive of previous research, when the measure was based on student achievement on the applied task. Without a statistically significant difference between the treatment groups, the results of the present study, as they apply to student achievement on an assignment of practical activity, do not support Kritch and Bostow (1998). Initial analysis would suggest that the cued text in the non-programmed-instruction treatment might be the likely explanation for the undistinguished findings. For all study participants, the lesson materials for the entire course, other than the graphing lessons for the Cued Text group, were presented using programmed instruction. To mitigate the possibility of the PI group benefiting from a practice effect, the format of the questions delivered in the test instrument was significantly different from that of the tutorials. Test questions were terse, with less cueing, requiring a higher level of recall of the graphing lesson content.


It is noted that, under the conditions of the present study and based on the marginal-to-poor scores on both the posttest and the applied task, the treatments seem to have failed to teach proper graphing technique. This may be explained by the possibility of treatment novelty or participant uneasiness with the method of delivery of the testing instruments. Future studies should attempt to mitigate these possibilities by familiarizing the participants with the presentation method. Additionally, quizzes covering unrelated material could be presented in the same format as the study's testing instruments.

Limitations of This Study

Of concern to the experimenter in the present research is the nearly 40% of participants who admitted to doing additional study while experiencing the treatments for the experiment. Whether a student printed off screens while going through the material, took notes, or studied with a partner, the additional study potentially contaminated the validity of the treatment conditions for those individuals. Those specific individuals could have been removed from the study, citing a compromise to treatment integrity, but this idea was abandoned because the questionnaires were anonymous and could not be related to specific students. In any case, given random assignment in the present study, the lapse in treatment integrity is assumed to have been randomly distributed throughout. Specifically, since self-reported non-compliance with tutorial instructions was distributed across the two groups, it is also assumed that the differences realized in the evaluations are not related to this implied "cheating."

The present experiment has identified a potential problem for research, particularly in the administration of treatment conditions by use of the World Wide Web.


The nature of the medium, and the varying preferences of individual students with regard to study habits, makes it difficult, if not impossible, to control a web-delivered treatment. It could be argued that the treatment might be supervised, presented in a laboratory setting, or somehow administered in a contrived control situation that forces the participants to proceed exactly as the experiment specifies. Such artificial control, however, removes the students' option of exercising any supplemental study skills and paints a sterile, inaccurate picture of web-based learning. If we control the options our participants have in our research, what external validity will our research have when compared to how students "really" do it? This may bring into question the external validity of treatments using laboratory controls in web-based experiments.

Implications of This Study

The present study identified several lessons for future use of the World Wide Web as a medium for delivery of experimental treatments. Researchers would do well to increase their focus on developing research treatments that are more effective. Students tend to perform better in learning environments that they are comfortable with, enjoy, and have confidence in. Additionally, analysis of the questionnaires indicated that fifty-six participants found some way to supplement the lesson material they were provided in the present experiment. It would be of value to consider the control issue regarding treatment integrity when choosing the web as the delivery tool. This, of course, must be weighed against the risk of establishing situations of "contrived" control in the name of treatment integrity. The World Wide Web is a dynamic medium, allowing learners much leeway in applying previously conditioned behaviors in the process of learning. Placing artificial limits on student activity may give the results we seek but not accurately represent the environment that the student will actually be experiencing.


The present research has also identified two major points of method that are worthy of mention: 1) Survey data may have proven more applicable had there been a way to associate a particular questionnaire with a specific participant; identifying the students who admitted to going outside the treatment requirements might have allowed the experimenter to remove those students, and the results might have been markedly different. 2) The World Wide Web is a newer medium for education, and as an increasing number of classes and coursework is administered this way, researchers are going to have to adapt to a certain lack of control over the variable of treatment integrity. Students are going to do what they feel comfortable with in the process of studying; the present experiment validates this.

Summary

This dissertation demonstrated a statistically significant difference in one of the dependent variables. However, the numbers may not accurately reflect the contribution of the independent variable in the treatment to the performance of the participants on either the computer posttest or the applied graphing task. While programmed instruction students performed better than the text group on a computer posttest, they failed to perform better on an applied graphing assignment. The results of the post-tutorial questionnaires revealed that a large number of students printed screens and took notes, studying these materials immediately prior to the computer posttest and applied task. This research draws attention to the potential problem of "treatment integrity" when experimental research is conducted over the web without accompanying supervision and insistence upon treatment delivery.


REFERENCES

Allen, B. S., & Eckols, S. L. (Eds.) (1997). Handbook of Usability Principles. 3.1.8: Use typographic cueing devices to direct the user's attention. San Diego State University Foundation & California State Employment Development Department. [Online] http://clipt.sdsu.edu/posit/tx/posit.qry?function=Detail&Layout1_uid1=38

Azevedo, R., & Bernard, R. M. (1995). A meta-analysis of the effects of feedback in computer-based instruction. Journal of Educational Computing Research, 13(2), 111-127.

Bloom, B. S., Mesia, B. B., & Krathwohl, D. R. (1964). Taxonomy of Educational Objectives (two vols.: The Affective Domain & The Cognitive Domain). New York: David McKay.

Boden, A., Archwamety, T., & McFarland, M. (2000). Programmed Instruction in Secondary Education: A Meta-Analysis of the Impact of Class Size on Its Effectiveness. Paper presented at the Annual Meeting of the National Association of School Psychologists, March 2000, New Orleans.

Borg, W., & Gall, M. (1989). Educational Research (5th ed.). White Plains, NY: Longman.


Bostow, D. E., Kritch, K. M., & Tompkins, B. F. (1995). Computers and Pedagogy: Replacing Telling with Interactive Computer-Programmed Instruction. Behavior Research Methods, Instruments, & Computers, 27(2), 297-300.

Burton, J. K., Moore, D. M., & Magliaro, S. G. (1996). Behaviorism and Instructional Technology. In Jonassen, D. (Ed.), Handbook of Research for Educational Communications and Technology (Ch. 2). New York: Simon & Schuster Macmillan.

Butson, R. (2003). Learning Objects: weapons of mass instruction. British Journal of Educational Technology, 34(5), 667-669.

Cashen, V. M., & Leicht, K. L. (1970). Role of the isolation effect in a formal educational setting. Journal of Educational Psychology, 61(6), 484-486.

Cho, Y. (1995). Learner Control, Cognitive Processes, and Hypertext Learning Environments. In Emerging Technologies, Lifelong Learning, NECC '95. Paper presented at the Annual National Educational Computing Conference, June 1995, Baltimore.

Clark, R. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445-459.


Cooper, J., Heron, T., & Heward, W. (1987). Applied Behavior Analysis. Columbus, OH: Merrill.

Cronbach, L., & Snow, R. (1977). Aptitudes and instructional methods: A handbook for research on interactions. New York: Wiley & Sons.

Devilly, G. J. (2004). The Effect Size Generator for Windows: Version 2.3 [Computer program]. Centre for Neuropsychology, Swinburne University, Australia. [Online] http://www.swin.edu.au/victims/resources/software/effectsize/effect_size_generator.html

Dewey, J. (1897). My Pedagogic Creed. The School Journal, 54(3), 77-80. [Online] http://www.infed.org/archives/e-texts/e-dew-pc.htm

Dewey, J. (1916). Democracy and Education. New York: Macmillan. [Online] http://www.ilt.columbia.edu/publications/dewey.html

Duffy, T. M., & Cunningham, D. J. (1996). Constructivism: Implications for the Design and Delivery of Instruction. In Jonassen, D. H. (Ed.), Handbook of Research for Educational Communications and Technology (Ch. 7). New York: Simon & Schuster Macmillan.

Durso, F. T., & Mellgren, R. L. (1989). Thinking about research. St. Paul, MN: West Publishing.


Dyson, M. C., & Gregory, J. (2002). Typographic cueing on screen. Visible Language, 36(3), 326.

Ehrmann, S. C. (1995). Asking the Right Question: What Does Research Tell Us About Technology and Higher Learning? Change: The Magazine of Higher Learning, 27(2), 20-27.

Fisher, S. G. (2000). Web-based training: One size does not fit all. In Mantyla, K. (Ed.), The 2000/2001 Distance Learning Yearbook. New York: McGraw-Hill.

Fowler, R., & Barker, A. (1974). Effectiveness of highlighting for retention of text material. Journal of Applied Psychology, 59(3), 358-364.

Gagné, R. (1985). The Conditions of Learning (4th ed.). New York: Holt, Rinehart & Winston.

Glynn, S. M., Britton, B. K., & Tillman, M. H. (1985). Typographic cues in text: Management of the reader's attention. In Jonassen, D. H. (Ed.), The Technology of Text: Principles for Structuring, Designing, and Displaying Text (Vol. 2). Englewood Cliffs, NJ: Educational Technology Publications.

Gropper, G. L. (1987). A Lesson Based on a Behavioral Approach to Instructional Design. In Reigeluth, C. M. (Ed.), Instructional Theories in Action: Lessons Illustrating Selected Theories and Models (Ch. 3). Hillsdale, NJ: Lawrence Erlbaum Associates.


Hartley, J. (1987). Designing electronic text: The role of print-based research. Educational Communications and Technology Journal, 35(1), 3-17.

Huitt, W., & Hummel, J. (1997). An introduction to operant (instrumental) conditioning. Educational Psychology Interactive. Valdosta, GA: Valdosta State University. [Online] http://chiron.valdosta.edu/whuitt/col/behsys/operant.html

Huitt, W., & Hummel, J. (1998). An overview of the behavioral perspective. Educational Psychology Interactive. Valdosta, GA: Valdosta State University. [Online] http://chiron.valdosta.edu/whuitt/col/behsys/behsys.html

Johnston, J. M., & Pennypacker, H. S. (1980). Strategies and tactics of behavioral research. New Jersey: Lawrence Erlbaum Associates.

Keller, F. S. (1968). Goodbye, Teacher... Journal of Applied Behavior Analysis, 1(1), 79-89.

Kerlinger, F. N. (1986). Foundations of behavioral research (3rd ed.). New York: Holt, Rinehart & Winston.

Kozma, R. (1991). Learning with media. Review of Educational Research, 61(2), 179-211.


Kritch, K. M., & Bostow, D. E. (1994). Creating Computer Programmed Instruction [Computer program]. Tampa, FL: Customs Systems International.

Kritch, K. M., & Bostow, D. E. (1998). Degree of Constructed-Response Interaction in Computer-Based Programmed Instruction. Journal of Applied Behavior Analysis, 31(3), 387-398.

Kritch, K. M., Bostow, D. E., & Dedrick, R. F. (1995). Level of Interactivity of Videodisc Instruction on College Students' Recall of AIDS Information. Journal of Applied Behavior Analysis, 28(1), 85-86.

Lunts, E. (2002). What does the Literature Say about the Effectiveness of Learner Control in Computer-Assisted Instruction? Electronic Journal for the Integration of Technology in Education, 1(2), 59-75.

Lutz, J., Briggs, A., & Cain, K. (2003). An Examination of the Value of the Generation Effect for Learning New Material. Journal of General Psychology, 130(2), 171-188.

Manitoba Education, Citizenship and Youth (2001). Literacy Learning Through the Six Language Arts. [Online] http://www.edu.gov.mb.ca/ks4/cur/ela/docs/litlearn3.html


Mergel, B. (1998). Instructional Design and Learning Theory. Occasional paper, Educational Communications and Technology, University of Saskatchewan. [Online] http://www.usask.ca/education/coursework/802papers/mergel/brenda.htm

Merrill, M. D. (1987). A Lesson Based on the Component Display Theory. In Reigeluth, C. M. (Ed.), Instructional Theories in Action: Lessons Illustrating Selected Theories and Models (Ch. 7). Hillsdale, NJ: Lawrence Erlbaum Associates.

Molenda, M. (2002). A New Framework for Teaching in the Cognitive Domain. ERIC Clearinghouse on Information & Technology, Syracuse, NY. ERIC Document: ED 470 983.

Morey, E. H. (1996). Feedback Research. In Jonassen, D. H. (Ed.), Handbook of Research for Educational Communications and Technology (Ch. 32). New York: Simon & Schuster Macmillan.

Parker, K. (2000). Art, Science and the Importance of Aesthetics in Instructional Design. Occasional paper, University of South Florida.

Petry, B., Mouton, H., & Reigeluth, C. M. (1987). A Lesson Based on the Gagné-Briggs Theory of Instruction. In Reigeluth, C. M. (Ed.), Instructional Theories in Action: Lessons Illustrating Selected Theories and Models (Ch. 2). Hillsdale, NJ: Lawrence Erlbaum Associates.


Rabinowitz, J. C., & Craik, F. I. M. (1986). Specific enhancement effects associated with word generation. Journal of Memory and Language, 25, 226-237.

Reeves, T. C. (1993). Pseudoscience in Instructional Technology: The Case of Learner Control Research. In Proceedings of Selected Research and Development Presentations at the 1993 Convention of the Association for Educational Communications and Technology, January 1993, New Orleans.

Rickards, J. P., & August, J. G. (1975). Generative underlining strategies in prose recall. Journal of Educational Psychology, 67(8), 860-865.

Saba, F. (2000). Research in Distance Education: A Status Report. International Review of Research in Open and Distance Learning, 1(1). [Online] http://www.irrodl.org/content/v1.1/farhad.html

Siemens, G. (2003). Evaluating media characteristics: Using multimedia to achieve learning outcomes. Paper presented at AMTEC 2002. [Online] http://www.elearnspace.org/Articles/mediacharacteristics.htm

Shoffner, M. B., Jones, M., & Harmon, S. W. (2000). Implications of New and Emerging Technologies for Learning and Cognition. Journal of Electronic Publishing, 6(1). [Online] http://www.press.umich.edu/jep/06-01/shoffner.html


Skinner, B. F. (1938). The Behavior of Organisms. New York: Appleton.

Skinner, B. F. (1950). Are theories of learning necessary? Psychological Review, 57(4), 193-216. [Online] http://psychclassics.yorku.ca/Skinner/Theories/

Skinner, B. F. (1968). The technology of teaching. New York: Meredith.

Skinner, B. F. (1969). Contingencies of reinforcement: A theoretical analysis. The Century Psychology Series. New York: Appleton.

Skinner, B. F. (1972). Cumulative Record. The Century Psychology Series. New York: Appleton.

Slavin, R. E. (2000). Educational Psychology: Theory and Practice (6th ed.). Needham Heights, MA: Allyn & Bacon.

Sulzer-Azaroff, B., & Mayer, R. (1991). Behavior analysis for lasting change. Chicago: Holt, Rinehart and Winston.

Thomas, D. L., & Bostow, D. E. (1991). Evaluation of Pre-Therapy Computer-Interactive Instruction. Journal of Computer-Based Instruction, 18(2), 66-70.

Thompson, A., Simonson, M., & Hargrave, C. (1996). Educational technology: A Review of the Research (2nd ed.). Washington, DC: Association for Educational Communications and Technology.


Tudor, R. M. (1995). Isolating the effects of active responding in computer-based instruction. Journal of Applied Behavior Analysis, 28(3), 343-344.

Tudor, R. M., & Bostow, D. E. (1991). Computer-Programmed Instruction: The Relation of Required Interaction to Practical Application. Journal of Applied Behavior Analysis, 24(2), 361-368.

Wegner, S. B., & Holloway, K. C. (1999). The Effects of Internet-Based Instruction on Student Learning. Journal of Asynchronous Learning Networks, 3(2), 98-106. [Online] http://www.sloan-c.org/publications/jaln/v3n2/pdf/v3n2_wegner.pdf

Williams, M. D. (1996). Learner-control and instructional technologies. In Jonassen, D. H. (Ed.), Handbook of Research for Educational Communications and Technology (Ch. 33). New York: Simon & Schuster Macmillan.

Wisher, R. A., & Champagne, M. (2000). Distance learning and training: An evaluation perspective. In Tobias, S., & Fletcher, J. (Eds.), Training and retraining: A handbook for business, industry, government, and military. New York: Macmillan.


APPENDICES


Appendix 1. Screen Capture: Programmed Instruction


Appendix 2. Screen Capture: Cued Text Web Page


Appendix 3. Posttest Questions (correct answer in parentheses)

Graphs _______ information. (communicate)
_______ is indicated by the horizontal axis. (Time)
Stretching the ordinate serves to _______ the appearance of an experimental effect. (Magnify)
A vertical line on a cumulative record indicates a _______. (Reset)
There are _______ coordinates on a Cartesian plane. (two)
Vertical lines indicate _______ in experimental conditions. (changes)
A mean without raw data gives no evidence of _______ in the experimental data points. (variations)
A _______ graph is better for showing differences in non-continuous data points. (bar)
A _______ graph contains more than one data path for subjects, situations, or behaviors. (complex)
A slope is _______ when the rate is higher. (steeper)
A bar graph _______ presentation of variation. (sacrifices)
When the target behavior is one that can occur or not occur only once per observation session, the effects of any intervention are _______ to detect on a cumulative graph. (easier)
The data from multiple _______ are often stacked vertically within a graph. (individuals)
The _______ is the average of a set of data points. (mean)
A scale break is used to indicate _______ in the progression of time on the horizontal axis. (discontinuity)
The purpose of a graph is to highlight _______ ________. (functional relationships)


Appendix 3. Posttest Questions (cont'd)

The vertical graphing of behaviors or situations is to determine whether changes in one variable are _______ _______ changes in the other. (accompanied by)
Depiction of data on a Cartesian plane is called a _______. (graphic)
Something systematically manipulated by the researcher is called the _______ _______. (independent variable)
A sequence of plotted data points is called a _______. (path)
Abbreviations can cause _______. (confusion)
The heart of behavior analysis is the _______ measurement of behavior. (repeated)
Visual analysis is a _______ method of data analysis. (conservative)
The scaling of the vertical axis should be _______ when small numerical changes in behavior are not socially important and the variability obscured in such a scale is not a significant factor. (contracted)
In applied behavior analysis, behavior is monitored _______. (continuously)
_______ is something an individual does. (behavior)
In the school bus study, both the _______ of disruptions and their total duration in seconds for each bus trip were plotted against the same vertical axis in this figure. (number)
Labels should be _______ but descriptive. (brief)
Labels identify _______ conditions. (experimental)
Major treatment changes are separated by _______ vertical lines. (solid)
Ordinarily _______ _______ range of possible values are indicated on the vertical axis. (the full)
Discontinuities in the time context should be clearly marked by _______ breaks. (scale)


Appendix 3. Posttest Questions (cont'd)

The _______ _______ should also contain an explanation of any observed but unplanned events that may have affected the dependent variable at specific times of the study and should point out any potentially misleading or confusing features of the graph. (figure legend)
In applied behavior analysis, graphs provide _______ access to the original data. (direct)
In behavior analysis, behavior is the _______ variable. (dependent)
Minor experimental manipulations are separated by _______ vertical lines. (dashed)
The intersection of two axes is called the _______. (origin)
Graphing one's own performance can be an effective _______. (intervention)
_______ are printed beside and above a graph. (labels)
The x-axis is a _______ line. (horizontal)
"Data" in behavior analysis mean _______ results. (quantitative)
In multiple-tier graphs, equal distances on each vertical axis should represent equal changes in behavior to aid the _______ of data across tiers. (comparison)
_______ _______ are desirable when the total number of responses made over time is important or when progress toward a specific goal can be measured in aggregated units of behavior. (cumulative records)
Graphs communicate without a _______ analysis. (statistical)
In contrast to statistical evaluation, visual analysis imposes no predetermined or arbitrary level for evaluating the _______ of behavior change. (significance)
Stretching or compressing the ordinate results in _______ of the data. (distortion)
Variability is more conspicuous with an _______ _______ graph. (equal interval)

PAGE 77

The connecting step in the progression of successive applications of the treatment is called a _______ _______. (dog leg)
The line graph is based on a Cartesian plane, a two-dimensional area formed by the intersection of two _______ lines. (perpendicular)
_______ labels identify the different conditions within a phase. (subordinate)
An "overall" response rate is the _______ rate of response over a given time period, such as during a specific session, phase, or condition of an experiment. (average)
The term semi-logarithmic chart refers to graphs in which only one _______ is scaled proportionally. (axis)
The rate within a narrow range of time is called the _______ rate. (local)
_______ data paths are also used to facilitate the simultaneous comparison of the effects of experimental manipulations on two or more different behaviors. (multiple)
A sequence of connected measurements is called a _______ _______. (data path)
In applied behavior analysis a _______ dimension of behavior is measured repeatedly. (quantifiable)
A graph is an easily understood presentation of the degree and nature of the _______ of behavior to an environmental variable. (relation)
The Standard _______ _______ provides a standardized means of charting and analyzing change in both absolute and relative rates of response. (behavior chart)
On most graphs the vertical axis can be drawn approximately ______________ [include the hyphen in your answer] the length of the horizontal axis. (two-thirds)
_______ _______ make the comparison between very high rates difficult. (cumulative graphs)
An appropriate _______ _______ can be used to give the impression that changes are more important than they really are. (scale break)
The instructional decision-making system called _______ _______ assumes that (1) learning is best measured as a change in response rate, (2) learning most often occurs through proportional changes in behavior, and (3) past changes in performance can predict future learning. (precision teaching)
An instructional decision-making system, called Precision Teaching, has been developed for use with the _______ _______ _______. This figure is an example. (standard behavior chart)
A scientific analysis evaluates the relation of behavior to its surrounding environment. It targets some behavior and manipulates a(n) _______ variable. (independent)
When two data sets travel exactly the same path, the lines should be drawn close to and _______ with one another to help clarify the situation. (parallel)
Experimental changes are labeled at the _______ of a graph. (top)
_______ is the frequency of responses emitted per unit of time, usually reported as responses per minute in applied behavior analysis. (rate)
The figure legend is a _______ statement. (concise)
The _______ of a data path indicates the rate of behavior. (slope)
The vertical axis is also called the _______-axis. [include the hyphen with the word axis] (Y)
When more than three data paths are displayed on the same graph, the benefits of making additional comparisons are often outweighed by the _______ of too much visual "noise." (distraction)
Unplanned events that occur during the experiment or minor manipulations that do not warrant a condition change line can be indicated by placing small arrows, _______, or other symbols next to the relevant data points. (asterisks)
When the same manipulation of an independent variable occurs at different points along the horizontal axes of multiple-tiered graphs, a dog-leg _______ the change lines of adjacent tiers makes it easy to follow the progression of events in the experiment. (connecting)
A label should be _______ along the y-axis. (centered)
In this figure, _______ change lines are drawn to coincide with the introduction or withdrawal of organized games. (phase)
With a graph you can use your eyes: when presented in a format that _______ displays the relationships among a series of measurements, the meaningful features of a set of behavioral data are more immediately apparent. (visually)
Appendix 4. Applied Graphing Assignment

Appendix 5. Rubric for Applied Graphing Assignment

Appendix 6. Expected Output, Applied Graphing Assignment
Appendix 7. Post-Tutorial Questionnaire

1. What course are you in?  a. 3214  b. 3228  c. 6211  d. 6215
2. Which method of tutorial did you experience?  a. Programmed  b. Scrolling Text

Items 3 through 15 were rated on a five-point scale: a. Strongly Agree, b. Agree, c. Neutral, d. Disagree, e. Strongly Disagree.

3. I feel very much at ease in using a computer.
4. This method of learning contributed to my understanding of the material in this lesson.
5. I usually had uninterrupted time in which to complete the tutorials for this segment of the class.
6. I am a fast reader, comprehending and retaining what I read.
7. I have participated in Distance Learning where the assignments were done and turned in online.
8. I would like to take other classes using the teaching technique I experienced with this graphing tutorial.
9. I had a quiet, comfortable location to log in and complete the tutorials for this segment of the class.
10. I usually remember what I have read, and can repeat it to another, in my own words.
11. I am very comfortable with my skills at using a computer and the internet.
12. The way I completed these lessons is a great way to take a class.
13. While doing these online tutorials, I completed the lessons without interruption.
14. I took notes while completing the online lessons for this graphing segment of the class.
15. I viewed the 11 lessons in sequence from start to end, following instructions at the end of the tutorial.
Appendix 8. Follow-up Online Questionnaire
Appendix 9. Narrative Comments, Question #6

Printed out some of the pages and read them over a couple of times.
I took a few notes on some of the terms that I was having repeated problems with during the tutorials. I reviewed the notes briefly before the exam.
I wrote my own notes then copied the study questions given to me then studied those
I printed all of the questions from the programmed tutorials and reviewed them before the exam.
I took notes as I went along the tutorials.
I took notes and studied them.
I printed out the text version I was assigned and highlighted what I felt was important. I read it over a few times.
While I read the tutorials, I took notes on a separate sheet of paper.
I printed the tutorial out so that I could take my time and study the information.
I did the programmed more than once.
I printed select pages of the tutorial and reviewed them, reviewed tutorials several times
Printed out some of the tutorials and reviewed them before taking the test.
I printed some tutorial pages out and tried to study them.
Looked and read briefly chapter 4 graphing data.
I print out the text tutorial and study them.
I went through and highlighted and took notes on what I thought was the important part of the tutorial. I would look over the material for an hour and a half each day.
The only type of studying I did besides the tutorials was a little bit of group discussion. My partner and I tried to help each other understand what was actually going on.
I printed out the tutorials and studied them.
I made some notes while working through the tutorials, and reviewed those right before the tests.
I did do a little bit of extra studying. I read a few pages out of the text book and I even wrote down a few notes.
I printed out the last three tutorials because I thought of them to be more of a review of all the tutorials.
I printed the pages, and read them but just once because we did not had enough time, I was a lot of material in just one week.
Took some notes on the read text condition and reviewed them.
printed out tutorial pages and studied them
I printed the tutorial pages and studied them.
I printed out the tutorials and studied them, mainly the graphs.
I printed out the information and studied them. I paid particular attention to the words in italic print.
I did print off the tutorials and study them.
I printed out tutorial questions that I had trouble answering and studied them in addition to doing the tutorials.
I did take some time to view other graphs in certain books and I also recalled working on graphs in a couple of math classes I had taken and what was involved in the construction of them.
I printed out the tutorial pages and studied them.
I took some notes from the online program instruction.
I reviewed the questions twice before the exam by rereading most of the frames. I also printed some of the important questions I felt were necessary for studying.
Reviewed a small number of notes that I had made while doing the tutorials.
Online tutorials were followed and printed out for study. No research outside the online tutorial was done
I printed out my tutorials and studied them at home.
While I was doing the tutorials I tried to write down the information that seemed to be pertinent. Before the test I reviewed the notes.
I wasn't able to print out the tutorials so I took notes from them directly.
I wrote a few notes
I took notes of concepts I thought I might need to look over before the test while working through the programmed instruction.
I printed out the tutorials and studied them.
I printed out the tutorial pages and read them a few times.
I took notes for every tutorial I worked through.
I studied the graphs in chapters 4 & 5 in the textbook
I decided to read chapter 4 in the Alberto book to try to understand what the graphing portion of the test was designed suppose to show us as educators.
Printed out the tutorial pages as there was way too much information to read and absorb.
I read chapter 4 and completed the study questions
I printed the tutorial pages and highlighted them as I read them. I made notes as I read the printed pages. Then I reviewed my highlights and notes again before I went in to take the quiz.
Looked over Chapter 4 in the Alberto book.
I printed out the text pages and read them about 4 times and highlighted what I felt was the most important material. Then after reading the material thoroughly for the 4th time I only looked back at what I highlighted. I also tried to study a little before I actually went in and took the quiz.
I read chapter 4 in our Alberto book plus I printed out the information from the tutorial and read it, twice.
I memorized parts of the graph, etc., that apparently weren't important. It would have been helpful to know what you wanted from us.
I just reread the tutorial over and over again
I did a little bit of practice graph drawing.
I performed the tutorial and then just looked over Chapter 4 on graphing in Alberto.
printed out and made study cards
I did print out the text I was assigned to read to further study it.
Printed out the text tutorial and reviewed material.
I practiced graphing by graphing other information found online
Appendix 10. Sample PERL Code for PI Treatment

#!/usr/local/bin/perl
use CGI;
use warnings;

$query = new CGI;

################################################################
### MODIFY HERE ###
# Do not use quotes; otherwise you must escape them, i.e. \"

### Critical Changes ###

# 0. The name of the table in the database containing the output for this
#    tutorial set. Leave blank quotes for outfiles.
my $table = '';

# 1. The title that will appear in the window title bar (up top).
my $html_title = 'Graphing in Applied Behavior Analysis';

# 2. The title and brief description of what the set is about (on the MAIN
#    MENU). You can use valid html, but be careful with quotes; escape them.
my $page_header = qq(
Graphing in Applied Behavior Analysis

The following instructional sets should be accomplished in serial order:
);

# 3. Tutorial list setup: the radio button list along with the displayed
#    description. File name, followed by =>, then the single-quoted
#    description, and finally a comma (except for the last entry).
my %tutorial_setup = (
    'graphingset1_textfile.txt'  => 'Graphing in Applied Behavior Analysis Set 1',
    'graphingset2_textfile.txt'  => 'Graphing in Applied Behavior Analysis Set 2',
    'graphingset3_textfile.txt'  => 'Graphing in Applied Behavior Analysis Set 3',
    'graphingset4_textfile.txt'  => 'Graphing in Applied Behavior Analysis Set 4',
    'graphingset5_textfile.txt'  => 'Graphing in Applied Behavior Analysis Set 5',
    'graphingset6_textfile.txt'  => 'Graphing in Applied Behavior Analysis Set 6',
    'graphingset7_textfile.txt'  => 'Graphing in Applied Behavior Analysis Set 7',
    'graphingset8_textfile.txt'  => 'Graphing in Applied Behavior Analysis Set 8',
    'graphingset9_textfile.txt'  => 'Graphing in Applied Behavior Analysis Set 9',
    'graphingset10_textfile.txt' => 'Graphing in Applied Behavior Analysis Set 10',
    'graphingset11_textfile.txt' => 'Graphing in Applied Behavior Analysis Set 11'
);

# 4. A list of the file names, so that the radio buttons are displayed in
#    the correct order.
my @tutorial_files = (
    'graphingset1_textfile.txt',
    'graphingset2_textfile.txt',
    'graphingset3_textfile.txt',
    'graphingset4_textfile.txt',
    'graphingset5_textfile.txt',
    'graphingset6_textfile.txt',
    'graphingset7_textfile.txt',
    'graphingset8_textfile.txt',
    'graphingset9_textfile.txt',
    'graphingset10_textfile.txt',
    'graphingset11_textfile.txt',
);

# 5. The tutorial that will be checked by default. Must be the same as one
#    of the filenames above or none will be checked by default.
my $default_tutorial = 'xx';

### Optional Changes ###
# Percent correct required to continue with tutorials.
my $percentstartover = 20;

### END MODIFICATIONS ###

################################################################

my $DSN = 'bostowtables';

$path_info   = $query->path_info;
$fulladdress = $query->url();
$base_dir    = $query->url();
$relative    = $query->url(-relative => 1);
$base_dir    =~ s/\/$relative//;
my $absol = $query->url(-absolute => 1);
$absol =~ s/\/$relative//;
$absol =~ s/\//\\/g;
my $absolute_dir = 'e:\inetpub\wwwroot\coedu' . $absol;
chdir $absolute_dir;

# Dispatch: a "name=subroutine" pair in the extra path information names
# the handler to run; otherwise show the main menu.
if ($path_info) {
    $path_info =~ s/\///;
    my ($key, $val) = split(/=/, $path_info);
    if (defined($val)) {
        &$val;
    } else {
        &doMainMenu;
    }
} else {
    &doMainMenu;
}

sub doMain {
    &GetParameters;
    &GetNumberOfQuestions;
    print $query->header(-type => 'text/html', -expires => 'now');
    print $query->start_html(-title   => "PI PLAYER $html_title",
                             -author  => 'Kale Kritch mod by Darrel Davis',
                             -BGCOLOR => '#FFFFFF');
    print qq( );

    if ($UserAnswer eq "FirstVisit") {
        # Past the last frame: report the final score and log it.
        if ($QuestionNumber > $NumberOfQuestions) {
            $UserAnswer = "FINALSCORE";
            $Percent = $AnsweredCorrectly / $NumberOfAttempts * 100;
            $Percent = substr($Percent, 0, 4);
            print "You have reached the end of this program.\n";
            print "Number of frames in this tutorial: $NumberOfQuestions\n";
            print "Number of frames you attempted: $NumberOfAttempts\n";
            print "Number of attempted frames you answered correctly: $AnsweredCorrectly\n";
            print "Percent correct score of attempted frames: $Percent\%\n";
            print "Click here to return to the Main Menu\n";
            &WriteOutFile;
            exit;
        }
        $TryNumber = 1;
        &ShowFrame;
        &AskForResponse;
        &OutputVariables;
    } else {
        &EvaluateResponse;
    }
    print $query->end_html;
}

sub doMainMenu {
    print $query->header(-type => 'text/html', -expires => 'now');
    print $query->start_html(-title   => $html_title,
                             -author  => 'Kale Kritch mod by Darrel Davis',
                             -BGCOLOR => '#66CCFF');

    # Debugging output (commented out): the components of the request URL.
    # print "absol= $absolute_dir referer= $origin fulladdress= $fulladdress ",
    #       "path_info= $path_info base_dir= $base_dir ",
    #       "full= ", $query->url(),
    #       " relative= ", $query->url(-relative => 1),
    #       " absolute= ", $query->url(-absolute => 1),
    #       " with path= ", $query->url(-path_info => 1),
    #       " with path and query= ", $query->url(-path_info => 1, -query => 1),
    #       " net location= ", $query->url(-base => 1);

    print qq(
$page_header

Main Menu

Follow the 4 Steps below to experience the on-line tutorials.

Step 1 Type your full name (e.g. Mary Smith):

Step 2 Select a tutorial by clicking once in the radio button beside the tutorial:

Before selecting a tutorial, scroll down and note the tutorials you have already done.
Make sure your records show all 11 tutorials as completed, and be sure to type your name the same way every time.
);
    print $query->radio_group(-name      => 'TutorialSelection',
                              -values    => \@tutorial_files,
                              -default   => $default_tutorial,
                              -linebreak => 'true',
                              -labels    => \%tutorial_setup);
    print qq(
<!-- Step 3 Enter Frame Number (If you are working through the tutorial for
the first time, leave as 1. If you are reviewing, enter the frame number you
wish to begin working on): -->

Step 3 Click Begin Tutorial:

Completion List:
);
    # Read the completion log (one "name&&tutorial" entry per line) and
    # list it alphabetically by student name.
    my @complist;
    my $compname;
    my $comptut;
    open(COMPFILE, "completions.txt");
    while (<COMPFILE>) { push @complist, $_ }
    close(COMPFILE);
    @complist = sort { uc($a) cmp uc($b) } @complist;
    foreach (@complist) {
        ($compname, $comptut) = split('&&', $_);
        print "$compname $comptut";
    }

    print qq(
Name / Tutorial Completed
);
    print $query->end_html;
}

sub GetParameters {
    $MainMenuAddress   = $query->param('MainMenuAddress');
    $PercentStartOver  = $query->param('PercentStartOver');
    $UserAnswer        = $query->param('UserAnswer');
    $TutorialSelection = $query->param('TutorialSelection');
    $StudentName       = $query->param('StudentName');
    $RemoteAddress     = $query->param('REMOTE_ADDR');
    $BrowserType       = $query->param('HTTP_USER_AGENT');
    $QuestionNumber    = $query->param('QuestionNumber');
    $TryNumber         = $query->param('TryNumber');
    $NumberOfQuestions = $query->param('NumberOfQuestions');
    $NumberOfAttempts  = $query->param('NumberOfAttempts');
    $AnsweredCorrectly = $query->param('AnsweredCorrectly');
    $Tries             = $query->param('Tries');
    # Per-tutorial log file: graphingsetN_textfile.txt -> graphingsetN_textfile_Out.txt
    $OutFileName = $TutorialSelection;
    $OutFileName =~ s/.txt/_Out.txt/;
}

# Count the frames in the selected tutorial file; each frame begins with a
# line containing the @begin directive.
sub GetNumberOfQuestions {
    $NumberOfQuestions = 0;
    open(CAIFILE, "$TutorialSelection");
    while (<CAIFILE>) {
        if (index($_, '@begin', 0) > -1) { $NumberOfQuestions++; }
    }
    close(CAIFILE);
}

sub ShowFrame {
    print "Frame #: $QuestionNumber of $NumberOfQuestions\n";
    print "Try #: $TryNumber\n";
    if ($NumberOfAttempts > 1) {
        $Percent = $AnsweredCorrectly / $NumberOfAttempts * 100;
        $Percent = substr($Percent, 0, 4);
        print "Correct %: $Percent\n";
    }

Appendix 10. Sample PERL Code for PI Treatment (cont’d) 88 if ($NumberOfAttempts > 4 and $Percent < $PercentStartOver){ $UserAnswer = "STARTOVER"; $Percent = $AnsweredCorrectly / $NumberOfAttempts 100; $Percent = substr($Percent, 0, 4); print "
\n"; print "
Your percent correct is less than $PercentStartOver after at least 5 frames.

\n"; print "
You are required to exit this tutorial and begin again.

\n"; print "
Total number of possible questions in this tutorial: $NumberOfQuestions

\n"; print "
Total number of questions you attempted: $NumberOfAttempts

\n"; print "
Number of attempted questions you answered correctly: $AnsweredCorrectly

\n"; print "
Percent score of attempted questions: $Percent\%

\n"; print "
Click here to return to the Main Menu

\n"; &WriteOutFile; exit; } print "

\n"; local ($Number); $Number = 0; open(CAIFILE,"$TutorialSelection"); while () { if (index($_,'@begin',0) > -1) { $Number++; if ($Number == $QuestionNumber) { $line = ; print "\n"; while (index($line,'@end',0) < 0) { print "$line
\n"; $line = ; } while (index($line,'@answer',0) < 0) { $line = ; } $CorrectAnswer = substr($line,8); chomp($CorrectAnswer); $CorrectAnswer = lc($CorrectAnswer); while (index($line,'@tries',0) < 0) { $line = ; } $Tries = substr($line,7); chomp($Tries); while (index($line,'@graphic',0) < 0) { $line = ; }

PAGE 97

                $Graphic = substr($line, 9);
                chomp($Graphic);
                while (index($line, '@video', 0) < 0) { $line = <CAIFILE>; }
                $Video = substr($line, 7);
                chomp($Video);
            }
        }
    }
    close(CAIFILE);
    if ($Graphic ne "none") {
        print "<img src=\"$Graphic\">\n";
    }
    if ($Video ne "none") {
        print "<a href=\"$Video\">Click here to view the video</a>\n";
    }
}

# Carry the tutorial state to the next request as hidden form fields
# (the same fields that GetParameters reads back in).
sub OutputVariables {
    print <<EOT;
<input type="hidden" name="MainMenuAddress" value="$MainMenuAddress">
<input type="hidden" name="PercentStartOver" value="$PercentStartOver">
<input type="hidden" name="TutorialSelection" value="$TutorialSelection">
<input type="hidden" name="StudentName" value="$StudentName">
<input type="hidden" name="QuestionNumber" value="$QuestionNumber">
<input type="hidden" name="TryNumber" value="$TryNumber">
<input type="hidden" name="NumberOfQuestions" value="$NumberOfQuestions">
<input type="hidden" name="NumberOfAttempts" value="$NumberOfAttempts">
<input type="hidden" name="AnsweredCorrectly" value="$AnsweredCorrectly">
<input type="hidden" name="Tries" value="$Tries">
EOT
}

sub AskForResponse {
    print <<EOT;
Type your answer here: <input type="text" name="UserAnswer">
EOT
    # print "Total Possible Tries for this Frame: $Tries\n";
}

sub EvaluateResponse {
    local ($Number);
    $Number = 0;
    # Look up what the correct answer should be here:
    open(CAIFILE, "$TutorialSelection");
    while (<CAIFILE>) {
        if (index($_, '@begin', 0) > -1) {
            $Number++;
            if ($Number == $QuestionNumber) {
                $line = <CAIFILE>;
                while (index($line, '@end', 0) < 0) { $line = <CAIFILE>; }
                while (index($line, '@answer', 0) < 0) { $line = <CAIFILE>; }
                $CorrectAnswer = substr($line, 8);
                chomp($CorrectAnswer);
            }
        }
    }
    close(CAIFILE);

    # Correct answer within the allowed number of tries.
    if (lc($UserAnswer) eq lc($CorrectAnswer) and $TryNumber <= $Tries) {
        $FeedBack = "CORRECT";
        $AnsweredCorrectly++;
        $NumberOfAttempts++;
        &WriteOutFile;
        &ShowFrame;
        print "Your answer $UserAnswer is $FeedBack!\n";
        print "Press Enter or Click to Continue.\n";
        $QuestionNumber++;
        &ContinueButton;
        &OutputVariables;
    }
    # Incorrect, with tries remaining: present the same frame again.
    if (lc($UserAnswer) ne lc($CorrectAnswer) and $TryNumber < $Tries) {
        $FeedBack = "INCORRECT";

        &WriteOutFile;
        $TryNumber = $TryNumber + 1;
        &ShowFrame;
        &AskForResponse;
        print "Your answer $UserAnswer is $FeedBack.\n";
        print "Please try again.\n";
        &OutputVariables;
    }
    # Incorrect on the last allowed try: reveal the answer and move on.
    elsif (lc($UserAnswer) ne lc($CorrectAnswer) and $TryNumber >= $Tries) {
        $NumberOfAttempts++;
        $FeedBack = "INCORRECT";
        &WriteOutFile;
        &ShowFrame;
        print "Your answer $UserAnswer is $FeedBack.\n";
        print "The correct answer was $CorrectAnswer.\n";
        $QuestionNumber++;
        $TryNumber = 1;
        &ContinueButton;
        &OutputVariables;
    }
} # end of EvaluateResponse

sub ContinueButton {
    print <<EOT;
<input type="submit" value="Continue">
EOT
}

sub PrintScalars {
    print "TutorialSelection = $TutorialSelection\n";
    print "UserAnswer = $UserAnswer\n";
    print "StudentName = $StudentName\n";

    print "RemoteAddress = $RemoteAddress\n";
    print "BrowserType = $BrowserType\n";
    print "QuestionNumber = $QuestionNumber\n";
    print "TryNumber = $TryNumber\n";
    print "NumberOfQuestions = $NumberOfQuestions\n";
    print "NumberOfAttempts = $NumberOfAttempts\n";
    print "AnsweredCorrectly = $AnsweredCorrectly\n";
    print "CorrectAnswer = $CorrectAnswer\n";
    print "Graphic = $Graphic\n";
    print "Tries = $Tries\n";
    print "TutorialSelection = $TutorialSelection\n";
    print "OutFileName = $OutFileName\n";
}

sub ExitButton {
    print "\n";
    print "";
}

# Append one comma-separated record per response to the tutorial's log
# file, and note finished tutorials in completions.txt.
sub WriteOutFile {
    $Percent = substr($Percent, 0, 4);
    $TimeStamp = localtime(time);
    open(OUTFILE, ">>$OutFileName")
        or dienice("Can't open outfile.txt for writing: $!");
    # This locks the file so no other CGI can write to it at the same time
    # flock(OUTFILE,2);
    # Reset the file pointer to the end of the file, in case someone wrote
    # while we waited for the lock.
    seek(OUTFILE, 0, 2);
    print OUTFILE "$StudentName,";
    print OUTFILE "$TutorialSelection,";
    print OUTFILE "$QuestionNumber,";
    print OUTFILE "$TryNumber,";
    print OUTFILE "$CorrectAnswer,";
    print OUTFILE "$UserAnswer,";
    print OUTFILE "$FeedBack,";
    print OUTFILE "$NumberOfQuestions,";
    print OUTFILE "$NumberOfAttempts,";
    print OUTFILE "$AnsweredCorrectly,";
    print OUTFILE "$Percent,";
    print OUTFILE "$TimeStamp\n";
    close(OUTFILE);
    if ($UserAnswer eq "FINALSCORE") {
        my $tutsel = $TutorialSelection;
        $tutsel =~ s/graphingset//;
        $tutsel =~ s/_textfile.txt//;
        open(CMPFILE, ">>completions.txt")
            or dienice("Can't open completions_alb.txt for writing: $!");
        print CMPFILE "$StudentName&&$tutsel\n";
        close(CMPFILE);
    }
}

# The dienice subroutine, for handling errors
sub dienice {
    my ($errmsg) = @_;
    print "Error\n";
    print "$errmsg\n";
    exit;
}
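For reference, the directives the script scans for (@begin, @end, @answer, @tries, @graphic, @video) imply a simple layout for the tutorial text files. The frame below is a hypothetical sketch of that layout: the frame text and answer are illustrative only (borrowed from the posttest item pool), while the directive names and their order come directly from the parsing in ShowFrame and EvaluateResponse.

@begin
A sequence of connected measurements is called a _______ _______.
@end
@answer data path
@tries 2
@graphic none
@video none

Everything between @begin and @end is displayed as the frame text; @answer holds the expected constructed response (compared case-insensitively, since both sides are lowercased); @tries sets how many attempts are allowed before the program reveals the answer and moves on; @graphic and @video name optional media, with "none" suppressing them.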

Appendix 11. Sample HTML for Cued Text Treatment

PI PLAYER: Graphing in Applied Behavior Analysis

Click the button at the end of the text when you have completed the reading.

When more than three data paths must be included on the same graph, other methods of display can be incorporated.

The bar graph, or histogram, is a simple and versatile format for graphically summarizing behavioral data. Like the line graph, the bar graph is based on the Cartesian plane and shares most of the line graph's features with one primary difference: the bar graph does not have distinct data points representing successive response measures through time.

Line graphs with the data points connected imply that the same variable is being measured across time (say, number of fights on the playground). Bar graphs serve two major functions in the display of data. First, a bar graph is used when the sets of data to be compared are not related to one another by a common underlying dimension by which the horizontal axis can be scaled. The figure here is an example of a bar graph displaying and comparing such discrete data.

The second most common use of the BAR graph is to give a visual summary of the performance of a subject or group of subjects during the different conditions of an experiment.

This figure shows two bar graphs (light and dark) that summarize the percentage of male and female juvenile offenders involved in criminal offenses before, during, and after treatment in a teaching family home.

The bar graph also permits comparison (upper and lower) of the subjects' incidence of criminal involvement with that of similar youths who received treatment in other group homes.


Although bar graphs can also be used to display range or trend, they are typically used to present a measure of central tendency, such as the mean or median score for each condition.

A bar graph sacrifices presentation of the variability and trends in behavior (which are apparent in a line graph) in exchange for the efficiency of summarizing and comparing large amounts of data in a simple, easy-to-interpret format.

Bar graphs can take a wide variety of forms to allow a quick and easy comparison of performance across subjects or conditions. However, bar graphs should be viewed with the understanding that they may mask important variability in the data.

A cumulative graph is one that goes only up as responses (data) are accumulated.

The CUMULATIVE record (or graph) was developed by B. F. Skinner as the primary means of data collection and analysis in laboratory research in the experimental analysis of behavior.

Skinner's device, called the cumulative recorder, enables an experimental subject to actually draw its own graph as it responds.



In a book cataloging 6 years of experimental research on schedules of reinforcement, Ferster and Skinner (1957) described cumulative graphs in the following manner: "A graph showing the number of responses on the ordinate against time on the abscissa has proved to be the most convenient representation of the behavior observed in this research. Fortunately, such a 'cumulative' record may be made directly at the time of the experiment. The record is raw data, but it also permits a direct inspection of rate and changes in rate not possible when the behavior is observed directly. Each time the bird responds, the pen moves one step across the paper."

At the same time, the paper feeds continuously. If the bird does not respond at all, a horizontal line is drawn in the direction of the paper feed.

The faster the person responds, the steeper the line.


When cumulative records are plotted by hand, which is most often the case in applied behavior analysis, the number of responses recorded during each observation period is added (thus the term cumulative) to the total number of responses recorded during all previous observation periods.

In a cumulative record, the Y-axis value of any data point represents the total number of responses recorded since the beginning of data collection. The exception occurs when the total number of responses has exceeded the upper limit of the Y-axis scale, in which case cumulative curves reset to the 0 value of the Y-axis and begin their ascent again.


Cumulative records are almost always used with frequency data, although other dimensions of behavior such as duration and latency can be displayed cumulatively.
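A minimal Perl sketch of this hand-plotting arithmetic, with hypothetical per-session counts (the data and the assumed axis limit below are illustrative, not taken from the study's materials). It keeps a running total and, as described above, resets the plotted value to the bottom of the scale whenever the curve would pass the top of the Y-axis:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical responses recorded in each observation session.
my @per_session = (3, 5, 0, 7, 4, 6);
my $y_axis_max  = 15;    # assumed upper limit of the Y-axis scale

my $total = 0;    # true cumulative count
my $pen   = 0;    # value actually plotted on the Y-axis
foreach my $count (@per_session) {
    $total += $count;
    $pen   += $count;
    # The curve resets to 0 and begins its ascent again when it
    # would exceed the top of the vertical axis.
    $pen -= $y_axis_max while $pen > $y_axis_max;
    print "total=$total plotted=$pen\n";
}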


This figure is an example of a cumulative record from the applied behavior analysis literature. It shows the number of spelling words mastered by a mentally retarded man under three conditions.

The graph at the right shows that Subject 3 mastered a total of 1 word during the 12 sessions of baseline (social praise for correct spelling responses and rewriting incorrectly spelled words three times), a total of 22 words under the interspersal  condition (baseline procedures plus the presentation of a previously learned word after each unknown word), and a total of 11 words under the high density reinforcement condition (baseline procedures plus social praise given after each trial for task-related behaviors such as paying attention and writing neatly). 


Rate is the frequency of responses emitted per unit of time, usually reported as responses per minute in applied behavior analysis.

An "overall" response rate is the average rate of response over a given time period, such as during a specific session, phase, or condition of an experiment.

Overall rates are calculated by dividing the total number of responses recorded during the period by the number of observation periods indicated on the horizontal axis.

In addition to the total number of responses recorded at any given point in time, cumulative records show the overall and "local" response rates.
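As a concrete illustration of that division, here is a small Perl sketch (the response counts are hypothetical, not part of the experiment's code) computing an overall rate across all observation periods and a "local" rate over a narrow slice of them:

#!/usr/bin/perl
use strict;
use warnings;
use List::Util qw(sum);

# Hypothetical responses per one-minute observation period.
my @responses = (12, 15, 9, 14, 40, 38, 11);

# Overall rate: total responses divided by the number of periods.
my $overall = sum(@responses) / scalar(@responses);

# Local rate: the same computation restricted to a narrow range of
# time (here, periods 5 and 6, where responding accelerates).
my $local = sum(@responses[4, 5]) / 2;

printf "Overall rate: %.2f responses per minute\n", $overall;
printf "Local rate:   %.2f responses per minute\n", $local;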

In the figure at the right, the local rate at the point of the arrow is very high. 

In this figure, the overall response rates of words mastered per session are .46 for the interspersal and .23 for the high-density reinforcement conditions.

(Technically, data points do not represent true rates of response, since the number of words spelled correctly was measured and not the rate, or speed, at which they were spelled. However, the slope of each data path does represent the different "rates" of mastering the spelling words in each session within the context of a total of 10 new words presented each day.)


On a cumulative graph, response rates are compared with one another by comparing the slope of each data path: the steeper the slope, the higher the response rate.

To produce a visual representation of an overall rate on a cumulative graph, the first and last data points of a given series of observations should be connected with a straight line. A straight line connecting Points a and c in this figure represents Subject 3's overall rate of mastering spelling words during the high density condition.
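In other words, the overall rate is the slope of that connecting line: the change in the cumulative count divided by the change in time. A short Perl sketch with hypothetical coordinates for Points a and c (the values below are illustrative, not read from the study's figure):

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical (session, cumulative responses) coordinates.
my @a = (13, 1);     # first data point of the condition
my @c = (25, 12);    # last data point of the condition

# The slope of the straight line joining the first and last points
# is the overall rate for that series of observations.
my $overall_rate = ($c[1] - $a[1]) / ($c[0] - $a[0]);
printf "Overall rate: %.2f responses per session\n", $overall_rate;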

Appendix 12. Creating Computer Programmed Instruction

About Programmed Instruction (API) Sets

These programs introduce learners to the basic concepts of programmed instruction. Following is a list of the program sets and the concepts that they teach.

Set 1: frames, technology, programmed instruction, initial & terminal behavior.
Set 2: observable behavior, probability, reinforcer, immediate reinforcement, emit.
Set 3: discriminative stimulus, SD, S^, occasion, discrimination.
Set 4: prompts, supplementary stimulation, fading.
Set 5: formal and thematic prompts, fading.
Set 6: control of observing behavior, blanks, formal prompts.
Set 7: discrimination training, stimulus control, fading.
Set 8: discrimination training, teach new concepts, stimulus control, fading.
Set 9: defining concepts as behavior, examples and definitions, grammatical contexts.
Set 10: frequent reinforcement, 10 percent error rate, revising.
Set 11: change behavior, graphics, use information, control observing behavior.
Set 12: controlled changes in behavior, technology that controls.
Set 13: teaching machines, progress at own rate.
Set 14: educators create programs, problems with multiple choice frames, constructed-response.
Set 15: even and uneven distributions, evaluation, revision, program effectiveness.
Set 16: review of previous concepts.
Set 17: word erasing, control of observing behavior, location of blanks.
Set 18: progression, wasteful frames, tally of responses, sequencing, programmer is first student of program.
Set 19: contingency of reinforcement.

Appendix 12. Creating Computer Programmed Instruction (Continued)

Preparing Automated Instruction (PAI) Sets

Set 1: frame, learning, observable behavior, change, immediate reinforcement, probability, strengthening, contingency of reinforcement.
Set 2: contiguous pairing, contingency, consequence, supplemental stimulus, prompt, fading, echoic behavior.
Set 3: echoic, intraverbal, contiguous, fading, overt responses, frequent responses.
Set 4: tact, intraverbal, echoic response, world of things, environment, application, functional relations.
Set 5: frame, easy at first, conditioning history, linear vs. branching.
Set 6: priming, prompting, history of conditioning, thematic prompt.
Set 7: fading, planning ahead, improperly constructed programs, why past programs failed, terminal behaviors, terminal objectives, contingency.
Set 8: generalization, specification of terminal objectives, subordinate objectives, content expert, application of learning principles.
Set 9: rule, tact, contiguous pairing, rule/example, discrimination training, developmental order, list rules.
Set 10: RULEG System for programmed instruction, part 1.
Set 11: RULEG System for programmed instruction, part 2.
Set 12: review of RULEG System, rule, compare, relationships, order, review frames, revised rule list, contiguous pairing.
Set 13: generalization, intraverbal connections, blank at end of frame, everything in frame is important, applying rule, inductive/deductive frames.
Set 14: small steps, examples as prompts, rules before examples, too few examples, rule first, order, review.
Set 15: short frames, many examples, blank at end, graphics not necessary, principles of learning and programming.
Set 16: authoring program, synonyms, key pairing, short frames, lecture frame, reviewing programs, examples, reintroduction of concepts in review frames, field test, formal prompt, prime.
Set 17: immediate reinforcement, terminal objectives, intraverbal, tact, pretest/posttest, limits of PI, review of steps to create a program.

Appendix 12. Creating Computer Programmed Instruction (Continued)

Ruleg Frame Types

These tutorials teach how to use a set of systematic templates for constructing various kinds of instructional frames.

Effective Characteristics of Instructional Programs

These programs teach the characteristics and features of effective instructional programs. Program titles and the concepts they teach are listed.

Set 1: Introduction: a rationale for the programs.
Set 2: Instructional Objectives: instructional objectives, specification before instruction, stated in terms of observable, overt behavior, measuring program effectiveness.
Set 3: Learner Prerequisites: inclusion of prerequisite statements, stated in terms of observable, overt behavior.
Set 4: Learner Control: directions, arrangement of topics, time estimates, location indicators, easy access to segments, exiting.
Set 5: Motivation: steps from simple to complex, degree of instructional steps, high rates of success, low error rates, reinforcement.
Set 6: Screen Design: text-intensive materials, supplemental documents, justification, windows of scrolling text, electronic page turning.
Set 7: Graphics, Audio, and Animation: to what degree do they help learners accomplish objectives, entertainment and instruction, distractions, correctly responding.
Set 8: Lesson Design: self-paced progression, frequency of evoking student responses, feedback, demonstrate mastery before progression, review, private tutors.
Set 9: Interaction: require responses, frequent & observable responses, responses relating to objectives, selecting and constructing responses, multiple-choice alternatives, Critical-response Rule, prompts and cues, gradually withdrawn, private tutors.
Set 10: Individualized Programs: self-pacing, appropriate behavior, frequent interaction, small steps, low error rate, relevant examples, immediate feedback.

Appendix 13. Treatment Assignment Notification

Hello. As you know from Dr. Bostow's message, we will be having lessons on "Graphing in Applied Behavior Analysis." Your link to the lessons for this section is:

http://www.coedu.usf.edu/bostow/rcanton/programmed

Go to this URL. Read and follow the instructions at the BLUE menu screen CAREFULLY. The individual quiz times for these tutorials will be assigned by your course instructor. Complete all eleven tutorials before your assigned testing time. (Feb 2-7)

Thank you.

Appendix 13. Treatment Assignment Notification (Continued)

Hello. As you know from Dr. Bostow's message, we will be having lessons on "Graphing in Applied Behavior Analysis." Your link to the lessons for this section is:

http://www.coedu.usf.edu/bostow/rcanton/text

Go to this URL. Read and follow the instructions at the BLUE menu screen CAREFULLY. The individual quiz times for these tutorials will be assigned by your course instructor. Complete all eleven tutorials before your assigned testing time. (Feb 2-7)

Thank you.

Appendix 14. Reliability Calculations Templates

(Applied Graphing Task, excerpt)
Reliability Calculator created by Del Siegle (dsiegle@uconn.edu), [online]
http://www.gifted.uconn.edu/siegle/research/Instrument%20Reliability%20and%20Validity/reliabilitycalculator2.xls

Questions: 27
Participants: 144
Cronbach's Alpha: 0.848720447
Split-Half (odd-even) Correlation: 0.808325024
Spearman-Brown Prophecy: 0.894004135
Mean for Test: 13.84722222
Standard Deviation for Test: 4.924879362
KR21: 0.749649293
KR20: 0.848720447

Participant      Question 1  Question 2  Question 3  Question 4
Participant1     1           1           1           1
Participant2     1           1           0           0
Participant3     1           1           1           1
Participant4     1           1           1           1
Participant5     0           1           0           0
Participant6     1           1           0           0
Participant7     0           1           0           0
Participant8     1           1           0           0
Participant9     0           1           0           0
Participant10    1           1           0           0
Participant11    1           1           0           0
Participant12    1           1           0           0
Participant13    1           1           1           1
Participant14    1           1           1           1
Participant15    1           1           0           0
Participant16    0           1           0           0
Participant17    1           1           0           0
Participant18    1           1           0           0
Participant19    0           1           0           0
Participant20    0           1           0           0
Participant21    1           0           0           0
Participant22    0           0           0           0
Participant23    0           0           0           0
Participant24    0           1           0           0

Appendix 14. Reliability Calculations Templates (Continued)

(Computer-based Posttest, excerpt)
Reliability Calculator created by Del Siegle (dsiegle@uconn.edu), [online]
http://www.gifted.uconn.edu/siegle/research/Instrument%20Reliability%20and%20Validity/reliabilitycalculator2.xls

Questions: 54
Participants: 161
Cronbach's Alpha: 0.873990452
Split-Half (odd-even) Correlation: 0.816034621
Spearman-Brown Prophecy: 0.89869941
Mean for Test: 20.08074534
Standard Deviation for Test: 8.608803204
KR21: 0.845461693
KR20: 0.873990452

Participant      Question 1  Question 2  Question 3  Question 4
Participant1     0           1           1           0
Participant2     0           0           0           0
Participant3     0           1           0           0
Participant4     0           0           0           0
Participant5     0           0           0           0
Participant6     1           0           0           0
Participant7     1           1           1           0
Participant8     0           1           0           0
Participant9     0           1           1           0
Participant10    0           0           1           0
Participant11    0           0           0           0
Participant12    1           0           1           0
Participant13    1           0           0           0
Participant14    0           0           0           0
Participant15    0           1           0           0
Participant16    0           1           0           0
Participant17    0           0           0           0
Participant18    0           0           0           0
Participant19    0           1           1           0
Participant20    0           1           1           0
Participant21    0           1           1           0
Participant22    1           0           1           0
Participant23    1           0           0           0
Participant24    0           0           0           0
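The spreadsheet reports KR-20, which for dichotomously scored items equals Cronbach's alpha. A Perl sketch of that formula applied to a small hypothetical 0/1 response matrix (the matrix below is illustrative only; the study's actual matrices are excerpted above):

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical item responses: rows are participants, columns are items.
my @m = (
    [1, 1, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 0],
);
my $n = scalar @m;            # number of participants
my $k = scalar @{ $m[0] };    # number of items

# Total score per participant, then the (population) variance of totals.
my @totals;
for my $row (@m) {
    my $s = 0;
    $s += $_ for @$row;
    push @totals, $s;
}
my $mean = 0;
$mean += $_ / $n for @totals;
my $var = 0;
$var += ($_ - $mean) ** 2 / $n for @totals;

# Sum over items of p*q, where p is the proportion answering correctly.
my $pq = 0;
for my $j (0 .. $k - 1) {
    my $p = 0;
    $p += $m[$_][$j] / $n for 0 .. $n - 1;
    $pq += $p * (1 - $p);
}

# Kuder-Richardson Formula 20: KR20 = (k/(k-1)) * (1 - sum(pq)/var).
my $kr20 = ($k / ($k - 1)) * (1 - $pq / $var);
printf "KR-20 = %.3f\n", $kr20;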

ABOUT THE AUTHOR

Major Reinaldo L. Canton received an Associate degree in Applied Electronics Technology from the Community College of the Air Force in 1987. He earned a Bachelor's degree in Engineering Technology from the University of South Florida and was appointed a Second Lieutenant in the US Air Force in December 1989. While serving on active duty, Major Canton completed a Master of Science in Management from Lesley College, Cambridge, Massachusetts. While assigned as an Associate Professor of Spanish at the U.S. Air Force Academy, he was competitively chosen to pursue a doctorate in Instructional Technology from the University of South Florida. After entering the doctoral program in 2000, and while completing his doctoral studies, Major Canton volunteered with US Central Command to lend support, in an extra-curricular capacity, to the national effort after September 11, 2001. He is a decorated officer of the United States Air Force, recognized numerous times for his meritorious service.

