
The model-based systematic development of LOGIS online graphing instructional simulator


Material Information

Title:
The model-based systematic development of LOGIS online graphing instructional simulator
Physical Description:
Book
Language:
English
Creator:
Davis, Darrel R
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla.
Publication Date:
2007

Subjects

Subjects / Keywords:
Developmental research
Simulation
ADDIE
Model
Systematic design
Guided contingent practice
Programmed instruction
Adaptive instruction
Dissertations, Academic -- Instructional Technology -- Doctoral -- USF   ( lcsh )
Genre:
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Summary:
ABSTRACT: This Developmental Research study described the development of an interactive online graphing instructional application and the impact of the Analysis Design Development Implementation Evaluation (ADDIE) model on the development process. An optimal learning environment was produced by combining Programmed Instruction and Adaptive Instruction principles with a graphing simulator that implemented guided contingent practice. The development process entailed the creation and validation of three instruments measuring knowledge, skills, and attitudes, which were components of the instruction. The research questions were focused on the influence of the ADDIE model on the development process and the value of the LOGIS instructional application. The model had a significant effect on the development process and the effects were categorized by: Organization, Time, and Perspective. In terms of Organization, the model forced a high level of planning to occur and dictated the task sequence thereby reducing frustration. The model facilitated the definition of terminal states and made it easier to transition from completed tasks to new tasks. The model also forced the simultaneous consideration of global and local views of the development process. The model had a significant effect on Time and Perspective. With respect to Time, using the model resulted in increased development time. Perspectives were influenced because previously held assumptions about instructional design were exposed for critique. Also, the model facilitated post project reflection and problem diagnosis. LOGIS was more valuable in terms of the knowledge assessment than the skills and attitudes assessments. There was a statistically and educationally significant increase from the pretest to posttest on the knowledge assessment, but the overall posttest performance was below average. Overall performance on the skills assessment was also below average. Participants reported positive dispositions toward LOGIS and toward graphing, but no significant difference was found between the pre-instruction survey and the post-instruction survey. The value of LOGIS must be considered within the context that this study was the first iteration in the refinement of the LOGIS instructional application.
Thesis:
Dissertation (Ph.D.)--University of South Florida, 2007.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Darrel R. Davis
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 266 pages.
General Note:
Includes vita.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001935446
oclc - 226058462
usfldc doi - E14-SFE0002271
usfldc handle - e14.2271
System ID:
SFS0026589:00001




Full Text

The Model-Based Systematic Development of LOGIS Online Graphing Instructional Simulator

by

Darrel R. Davis

A dissertation submitted in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
Department of Secondary Education
College of Education
University of South Florida

Major Professor: James White, Ph.D.
Darrel Bostow, Ph.D.
Robert Dedrick, Ph.D.
William Kealy, Ph.D.

Date of Approval:
August 22, 2007

Keywords: Developmental Research, Simulation, ADDIE Model, Systematic Design, Guided Contingent Practice, Programmed Instruction, Adaptive Instruction

Copyright 2007, Darrel R. Davis

Dedication

To Martha, Ronald, June, Ray, Sharret, and Shawn

Acknowledgements

I would like to thank the members of my doctoral committee for their support throughout this dissertation process. To Dr. White and Dr. Bostow, you have been tremendous! Thank you both for taking the journey with me. I could not have asked for better mentors. To Dr. Dedrick, your detailed analyses were invaluable. To Dr. Kealy, thank you for your support.

To my parents June and Ray, I am because of you. To my wife Martha, "you are…" To my brother Ronald, thank you for believing in me. To Shawn, thank you for being there for me. To Ms. Ellie, thank you for everything. To the hidden gems in life who have been a constant source of encouragement. To Imed, Ilir, Karim, Jeff, Mike, Kale, Carol, Kelly Gummi, Satch, and Mrs. Hendrix, you have all been incredible gems.

TABLE OF CONTENTS

List of Tables
List of Figures
Abstract
Chapter One  Introduction
    Chapter Map
    Motivation
        Practical Motives
        Research Motives
    Focus of the Study
    Research Questions
    Significance of the Study
    Acronyms and Definition of Terms
    Summary
Chapter Two  Review of the Literature
    Chapter Map
    Developmental Research
        Historical Perspective
        The Nature of Developmental Research
        Issues and Challenges in Developmental Research
        Why Conduct Developmental Research?
    Learning
        What is Learning?
        Experience and Learning
        Attitudes and Learning
        Learning Outcomes
            Intellectual Skill
                Discriminations
                Concepts
                Rules and principles
                Problem solving
            Cognitive Strategy
            Verbal Information
            Attitude
            Motor Skill
        Relating Learning Perspectives to Instruction
        The Assessment of Learning
    Instruction
        Programmed Instruction
            Skinner
            Instructional Characteristics
                Prompts
                Fading
                Copy frames
                Priming
            Research on PI
            Overt Versus Covert Responses
            Constructed Response vs. Multiple-Choice
            Confirmation
            Current and Future Research
        Adaptive Instruction
            Micro-Adaptive Instructional Models
        Intelligent Tutoring Systems
        Simulations
            What is a Simulation?
            Games, Virtual Reality and Microworlds
            Types of Simulations
            Advantages and Disadvantages of Simulations
            Are Simulations Effective?
            Why use a simulation
        The Design of Instruction
            What are Instruction and Design?
            Instructional Systems Design
            A Systematic Approach
        ADDIE
            The ADDIE Phases
            Why use ADDIE?
    Graphing
    Summary
Chapter Three  Proposed Development
    Chapter Map
    Research Questions
    The ADDIE Model
        Analysis Phase
            Analysis Component 1
            Analysis Component 2
            Analysis Component 3
            Analysis Component 4
        Design Phase
            Design Component 1
            Design Component 2
            Design Component 3
            Design Component 4
            Design Component 5
            Design Component 6
        Development Phase
            Development Component 1
            Development Component 2
                The LOGIS interface
                The LOGIS development process
                The Knowledge Assessment development
                The Skills Assessment development
                The Survey development
                Threats to assessment validity
                Threats to the survey validity
            Development Component 3
            Development Component 4
            Development Component 5
        Implement Phase
            Implement Component 1
            Implement Component 2
        Evaluate Phase
            Evaluate Component 1
            Evaluate Component 2
                One-to-One Evaluation
                Small-Group Evaluation
                Field Trial
            Evaluate Component 3
    Summary
Chapter Four  Actual Development
    Chapter Map
    Research Questions
    Analysis Phase Report and Reflection
        Analysis Phase critique
    Design Phase Report and Reflection
        Design Phase Critique
    Development Phase Report and Reflection
        The interface and the application
        Tutorial Development
        Knowledge Assessment Development
        Skills Assessment Development
        Survey Development
        Graphing Proficiency Test Development
        Descriptive Information From The One-To-One Evaluation
        Descriptive Information From The Small-Group Evaluation
        Development Phase Critique
    Implement Phase Report
        Implement Phase Critique
    Evaluate Phase Report
        Evaluate Phase Critique
    Summary
Chapter Five  Conclusions
    Chapter Map
    Research Question 1
        Organization
        Time
        Perspective
    Research Question 2
    General Reflection
    Future Direction
References
Appendices
    Appendix A: The Tutorials and Major Goals
    Appendix B: Guidelines for the Development of the Alternate-Choice Items
    Appendix C: Checklist for the Development of the Alternate-Choice Items
    Appendix D: Guidelines for the Development of the Multiple-Choice Items
    Appendix E: The Print Format Guidelines
    Appendix F: Checklist for the Development of the Multiple-Choice Items
    Appendix G: Guidelines for the Development of the Short-Answer Items
    Appendix H: Checklist for the Development of the Short-Answer Items
    Appendix I: Checklist for the Development of the Skills Assessment Items
    Appendix J: Guidelines for the Development of the Survey Items
    Appendix K: Checklist for the Development of the Survey Items
    Appendix L: Wrong Responses for each Frame of each Tutorial Task
    Appendix M: The Final Version of the Items in the Knowledge Assessment
    Appendix N: Itemized Summary of the Posttest Data
    Appendix O: The Skills Assessment Items
    Appendix P: The Graphing Proficiency Items
    Appendix Q: Actual Responses from the One-To-One Evaluation Group
    Appendix R: A Summary of the Verbal Responses from Participants in the One-To-One Evaluation Group
    Appendix S: A Summary of the Small-Group Evaluation Responses
    Appendix T: LOGIS Java Code
About the Author

LIST OF TABLES

Table 1   Acronyms Used
Table 2   Terms Used
Table 3   Initial Goals and Tasks Classifications
Table 4   Revised Goals and Task Classifications
Table 5   A Comparison of Bloom's Taxonomy and Gagne's Classification Scheme
Table 6   Tutorial Weight Distribution
Table 7   Points Allocation for Items on the Skills Assessment Rubric
Table 8   A Comparison of the Original Graphing Tutorials and the Derived LOGIS tutorials
Table 9   The Number of Frames in the LOGIS Tutorials and Practice Tasks
Table 10  A Comparison of the Original Graphing Tutorials and the LOGIS Tutorials
Table 11  The Distribution of the Knowledge Assessment Items
Table 12  A Summary of the Statistical Data from the Posttest
Table 13  A Summary of the Data from Participants Who Completed Matching Pretests and Posttests
Table 14  A Summary of the Statistical Data from the Skills Assessment
Table 15  The Average Score on each Rubric Evaluation Point for each Skills Assessment Graph
Table 16  The Items in the Final Version of the Survey
Table 17  The Likert Scale and Corresponding Interval Ranges
Table 18  A Summary of the Post-Survey Data
Table 19  Rotated Component Matrices of the Post-Survey Data for the 45 participants
Table 20  A Summary of the Pre-Survey Data
Table 21  Results from the T Test Analysis on the First 4 Items on the Pre-Survey and Post-Survey
Table 22  A Summary of the Statistical Data from the Graphing Proficiency Test

LIST OF FIGURES

Figure 1.  The Kolb and Fry learning model
Figure 2.  Gagne's categorization of learning outcomes
Figure 3.  Gagne's ADDIE model
Figure 4.  A prototype of the interface for LOGIS showing the tutorial window
Figure 5.  A prototype of the interface for LOGIS showing the exhibit window
Figure 6.  A prototype of the interface for LOGIS showing the practice window
Figure 7.  A prototype of the interface for LOGIS showing the survey window
Figure 8.  A prototype of the interface for LOGIS showing the knowledge assessment window
Figure 9.  A flowchart of the LOGIS components and instruments development iterations
Figure 10. A prototype knowledge assessment data worksheet
Figure 11. A view of the final version of the LOGIS interface
Figure 12. The significant areas of the LOGIS interface
Figure 13. A view of the LOGIS interface showing the grid and graphing tools
Figure 14. A view of the LOGIS interface showing a survey item and the slide bar
Figure 15. A flowchart of the actual development process
Figure 16. A Histogram of the posttest scores showing the distribution of the scores
Figure 17. Histograms of the pretest and posttest showing the distribution of the scores
Figure 18. Boxplots of the scores for the 29 participants with matched pretests and posttests
Figure 19. Boxplots of the Skills Assessment scores
Figure 20. Histograms of the pre-survey and post-survey showing the distribution of the responses across the survey items (n = 45)
Figure 21. A Scatterplot of the average pre and post survey responses
Figure 22. A Histogram of the proficiency test scores showing the distribution of the scores

The Model-Based Systematic Development of LOGIS Online Graphing Instructional Simulator

Darrel R. Davis

ABSTRACT

This Developmental Research study described the development of an interactive online graphing instructional application and the impact of the Analysis Design Development Implementation Evaluation (ADDIE) model on the development process. An optimal learning environment was produced by combining Programmed Instruction and Adaptive Instruction principles with a graphing simulator that implemented guided contingent practice. The development process entailed the creation and validation of three instruments measuring knowledge, skills, and attitudes, which were components of the instruction.

The research questions were focused on the influence of the ADDIE model on the development process and the value of the LOGIS instructional application. The model had a significant effect on the development process and the effects were categorized by: Organization, Time, and Perspective. In terms of Organization, the model forced a high level of planning to occur and dictated the task sequence thereby reducing frustration. The model facilitated the definition of terminal states and made it easier to transition from completed tasks to new tasks. The model also forced the simultaneous consideration of global and local views of the development process.

The model had a significant effect on Time and Perspective. With respect to Time, using the model resulted in increased development time. Perspectives were influenced because previously held assumptions about instructional design were exposed for critique. Also, the model facilitated post project reflection and problem diagnosis.

LOGIS was more valuable in terms of the knowledge assessment than the skills and attitudes assessments. There was a statistically and educationally significant increase from the pretest to posttest on the knowledge assessment, but the overall posttest performance was below average. Overall performance on the skills assessment was also below average. Participants reported positive dispositions toward LOGIS and toward graphing, but no significant difference was found between the pre-instruction survey and the post-instruction survey. The value of LOGIS must be considered within the context that this study was the first iteration in the refinement of the LOGIS instructional application.

CHAPTER ONE
INTRODUCTION

Chapter Map

This chapter introduces the current study, and provides a framework and rationale for conducting the study. The following map describes the organization of the chapter:

    Introduction
        Chapter map
        Motivation
            Practical motives
            Research motives
        Focus of the study
        Significance of the study
        Acronyms and definitions of terms
        Summary

Motivation

Practical Motives

An online course at a major southeastern university uses Alberto and Troutman (2006) as its primary course textbook. Chapter four in the Alberto and Troutman (2006) text is titled "Graphing Data", and in this chapter students learn how to create and interpret simple and cumulative graphs. Multiple baseline graphs are covered in Chapter five, titled "Single-Subject Designs". The pedagogy of both chapters is based on describing objects or features; stating facts or rules; and providing examples, summaries, and exercises. Because the graphing theme is distributed across chapters and the text does not intrinsically provide instruction, the professor of record created interactive instructional tutorials to initially augment but eventually replace the graphing chapters in the book. These tutorials were experimentally tested with students in prior semesters and although the posttest results were better than the traditional method of instruction, the tutorials were not as effective as had been envisioned.

One possible explanation for the modest performance of students on the tutorial posttest is the fact that the tutorials themselves did not require students to graph. The textual prompts and pictorial examples were evidently not powerful enough to cause individual students to produce an acceptable graph from the data provided. Although it seems obvious that graphing should be required with graphing instruction, such practice is often deemed uneconomical or impractical. Students are frequently expected to convert visual or auditory stimuli into new behaviors – learning. This is based on the common fundamental assumption in education that reading and/or lectures are sufficient for learning to occur.

The purpose of the current study was to create effective instruction for the graphing component of the specified course. This instructional application was named LOGIS, a recursive acronym that represents LOGIS Online Graphing Instructional Simulator. The general aim was for students to complete the instruction in LOGIS, pass the subsequent course quiz, and eventually pass the course. This study contended that the goal of creating effective instruction for this task was best realized if the instruction was paired with non-optional guided contingent practice, where forward progress within the practice task was dependent upon correct responses from the learner.

Research Motives

The creation of new instruction provided the opportunity to investigate the development process and engage in Developmental Research. The decision to create model-based instruction introduced the possibility of creating effective instruction and simultaneously analyzing the creation process. Using the Analysis Design Development Implementation Evaluation (ADDIE) model and detailing each step provided a foundation to comment on the effects of using a model-based approach to development, thus adding to the current literature.

There are many Instructional Systems Design (ISD) models and some, for example, the Dick and Carey model (Dick, Carey, & Carey, 2005), might have been more suitable for this particular task. The object of the study was not to compare models or develop another set of "best practice" guidelines, but to analyze the development process using the ADDIE model. The ADDIE model was chosen because it was the most generic and fundamental model, and comments on this model might extend to other derived models.

The ADDIE model has five phases: Analysis, Design, Development, Implementation, and Evaluation. These phases guide the development of instructional applications by providing a framework for transparency and accountability within the development process. This study will comment on how well ADDIE achieved its goals.
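The guided contingent practice principle described above, in which forward progress depends on correct responses from the learner, can be sketched as a simple frame loop. The following Java sketch (Java is the language of the LOGIS application listed in Appendix T) is a hypothetical illustration rather than the actual LOGIS code: the Frame record, the sample prompts, and the exact-match answer check are invented for the example.

    import java.util.List;
    import java.util.Scanner;

    /**
     * Minimal sketch of response-contingent progression: the learner cannot
     * advance to the next frame until the current frame is answered correctly.
     */
    public class ContingentPracticeSketch {

        /** One practice frame: a prompt plus a rule for judging the response. */
        record Frame(String prompt, String expectedAnswer) {
            boolean isCorrect(String response) {
                return expectedAnswer.equalsIgnoreCase(response.trim());
            }
        }

        public static void main(String[] args) {
            // Hypothetical frames; the real LOGIS tasks involve constructing graphs.
            List<Frame> frames = List.of(
                    new Frame("Which axis of a simple line graph shows the session number (x or y)?", "x"),
                    new Frame("Which axis shows the number of responses per session (x or y)?", "y"));

            Scanner in = new Scanner(System.in);
            for (Frame frame : frames) {
                boolean advanced = false;
                while (!advanced) {                          // progress is contingent...
                    System.out.println(frame.prompt());
                    String response = in.nextLine();
                    if (frame.isCorrect(response)) {         // ...on a correct incremental response
                        System.out.println("Correct.");
                        advanced = true;                     // only now does the learner move forward
                    } else {
                        System.out.println("Not yet. Try again."); // learner stays on the same frame
                    }
                }
            }
            System.out.println("Practice task complete.");
        }
    }

In LOGIS the same contingency governs the graphing practice tasks: an incorrect response keeps the learner on the current step rather than allowing the task to advance.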

Focus of the Study

The primary focus of this study was the model-based development of instructional software for a specific unit of instruction and the documentation and analysis of that process. Using a model provided the opportunity to build effective instruction and document the process, thus increasing the transparency and repeatability of the study. This provided a basis for analysis and comment on the development process.

In addition to the practical development and documentation processes, this report will comment on the principles of response contingent progression in both instruction and guided practice.

Research Questions

This study focused on two non-experimental research questions:

1. How does the use of the ADDIE model influence the creation of the LOGIS instructional application?

2. Is LOGIS an effective form of instruction?

   a. Is LOGIS an effective form of instruction as measured by educationally significant differences in learners' performance from the pretest to the posttest on the Knowledge measure?

   b. Is LOGIS an effective form of instruction as measured by educationally significant differences in learners' performance from the pretest to the posttest on the Skills measure?

   c. Is LOGIS an effective form of instruction as measured by educationally significant differences in learners' attitude towards graphing from the pre to the post Graphing Attitude Survey?

Significance of the Study

This study is significant because, firstly, it answered the call for Developmental Research (Reeves, 2000a; Richey, Klein, & Nelson, 2004; van den Akker, 1999). This study engaged both the development process and the investigation of the development process, increasing the study's practical and research scholarship value (Richey et al., 2004).

Secondly, this study implemented guided contingent practice and adaptive instruction principles, and both are aimed at increasing the effectiveness of the instruction. The literature (Kirschner, Sweller, & Clark, 2006) acknowledged the importance of guided practice and this report will provide scholarly comment on the issue.

Thirdly, the effectiveness of simulations is still questionable (de Jong & van Joolingen, 1998). This study cannot settle the debate, but this report will add relevant findings regarding the effectiveness of simulations. The instructional application that was developed contained a simulation component, and the analysis of this application yielded valuable insight into the effectiveness of simulators in a learning environment.

Finally, this study documented the development and evaluation processes, and used that documentation as a framework for comment on the use of the ADDIE model.
The comments are not simply advantages and disadvantages, but a narrative on how the use of the model affected the development process.

Acronyms and Definition of Terms

The acronyms in Table 1 are used within the current document. They increase readability without compromising understandability.

Table 1
Acronyms Used

PI: Programmed Instruction (PI) is a method of instruction emphasizing the linear and logical arrangement of reinforcement contingencies that cumulatively establish target behaviors.

ISD: Instructional Systems Design (ISD) is "the process for creating instructional systems" (Gagne, Wager, Golas, & Keller, 2005, p. 18).

ADDIE: Analysis Design Development Implementation Evaluation (ADDIE) is "The most basic model of the ISD process" (Gagne et al., 2005, p. 21).

LOGIS: LOGIS Online Graphing Instructional Simulator (LOGIS) is a recursive acronym describing the instructional application developed for the current study.

The terms in Table 2 are used within the current document. The definition for each term is based on the reviewed literature.

Table 2
Terms Used

Learning: "a process that results in a relatively consistent change in behavior or behavior potential and is based in experience" (Tomporowski, 2003, p. 7).

Developmental Research: "a situation in which someone is performing instructional design, development, or evaluation activities and studying the process at the same time" (Richey et al., 2004, p. 1099).

Simulation: "a model of some phenomenon or activity that users learn about through interaction with the simulation" (Alessi & Trollip, 2001, p. 213).

Attitude: "an internal state that affects an individual's choice of personal action toward some object, person, or event" (Gagne et al., 2005, p. 95).

Guided Contingent Practice: Repetition of logically arranged steps where progress is contingent upon correct incremental responses.

Summary

This chapter introduced the current study, providing rationale, motive and justification. This Developmental Research study did not entail model creation or validation, but instead it described the process and effect of using a model to develop instructional software.

CHAPTER TWO
REVIEW OF THE LITERATURE

Chapter Map

This chapter is a review of literature that is relevant to both the theoretical and practical foundations of LOGIS. It describes the nature of Developmental Research and the value of this type of research. This chapter also establishes the precedence for the instructional techniques implemented in LOGIS and provides justifications for the inclusion of specific features into LOGIS. The following map describes the organization of the chapter:

    Review of the literature
        Chapter map
        Developmental Research
            Historical perspective
            The nature of Developmental Research
            Issues and challenges in Developmental Research
            Why conduct Developmental Research?
        Learning
            What is learning?
            Experience and learning
            Attitudes and learning
            Learning outcomes
            Relating learning perspectives to instruction
            The assessment of learning
        Instruction
            Programmed instruction
            Adaptive instruction
            Intelligent tutoring systems
            Simulations
            The design of instruction
            ADDIE
        Graphing
        Summary

Developmental Research

The current educational research literature recognizes Developmental Research as a unique and emerging area. Despite the acknowledged importance and relevance of this type of research, no clear consensus has been reached regarding the definition, scope, and overall character of Developmental Research.

This study includes a discussion of Developmental Research in an effort to establish a base rationale and framework within which the study can be framed. A clear understanding of this type of research will focus the study, provide a basis for scholarly
comment on relevant issues, and provide conceptual guidance in an area van den Akker (1999, p. 3) called "…rather confusing."

Historical Perspective

It is necessary to understand and appreciate the evolution of Developmental Research before attempting any meaningful dialog about its current and future status. Not only does background information provide the context for the current literature, it also provides a guide for future discourse.

To understand Developmental Research is to understand educational technology as a discipline. The recent calls for more and better Developmental Research scholarship from prominent figures (Reeves, 2000a; Richey et al., 2004; van den Akker, 1999) are understandable and expected given the history of educational technology. The almost cyclical nature of educational history in particular (Cuban, 1986) presents an interesting dilemma where the case can be made that the call for Developmental Research is a wholly expected and necessary artifact of modern society and scholarship. Developmental Research, it can be argued, is the next link in the evolution of educational technology.

The intellectual framework for educational technology was developed in the early 1920s, but it was not until the 1950s that the academic programs and support structures were created (De Vaney & Butler, 1996). Educational technology emerged from the militaristic training model that emphasized both temporal and quantitative efficiency. Given the political climate, the emphasis on quantifiable science, and a positivist doctrine, it is understandable that education and consequently educational technology would have a deterministic bias. The scholarship of that time was very reflective of the social norms of that time. This is only one of many examples where scholarship parallels society.

Technological progress is also a lens through which the evolution of educational technology can be viewed. The growth of the discipline can be clearly seen as it evolves from early investigations of audiovisuals to more detailed research on current forms of technology. This reality has proven to be both positive and negative. While it is true that there has been some progress in the definition of the field and its relevance in academia, the current conclusion is that the field needs fewer media comparison studies (Reeves, 2003). The debate between R. E. Clark (1983) and Kozma (1991) on the effect of media is both humbling and promising in that it suggested that decades of research are anecdotal at best. The mere presence and framing of this type of argument highlights how much the scholarship in educational technology has matured. On the surface, the current lack of structure and focus in educational technology seems to undermine the validity of the field, but as De Vaney and Butler (1996) pointed out, this might actually be to the credit of the field. They proposed the following:

    The fact that past and present educational technology scholars have failed in this monolithic effort is to the credit of the field. Heterogeneous texts produced during the period under consideration and later provide a rich account of objects of study, theories engaged, methods employed, and audiences included. The written and oral texts considered here disclose a set of common goals but are diverse projects whose structures are contingent on historically accepted concepts and values. They reflect prevailing notions of learning theory and pedagogy, research methods, economic, military, and political values, and other elements of the social milieu in which they were produced. The iterations of names, concepts, assumptions, and theories in these texts not only promoted ideas but actually created truisms in the field for the time in which they were written. The value of these texts cannot be measured by sophisticated standards of current research, nor by highly evolved notions of learning theory, but by how they achieved their common goals when they were written. From whatever perspective these authors spoke, we might ask how well they made their objects of study intelligible to specific audiences at specific moments in time. The rhetoric with which they spoke and the discourses that spoke through them energized an audience of scholars, educators, and students to participate in a new field, educational technology. By any measure they were successful. (p. 3)

The Nature of Developmental Research

The nature of Developmental Research is tied to its definition. Seels and Richey (1994) defined Developmental Research as "the systematic study of designing, developing and evaluating instructional programs, processes and products that must meet the criteria of internal consistency and effectiveness" (p. 127). van den Akker (1999) proposed that Developmental Research had differing definitions that are sub-domain specific. Several sub-domains were examined to highlight what van den Akker termed "conceptual confusion" (p. 3). Within the Curriculum sub-domain, the purpose of Developmental Research is described as "…to inform the decision making process during the development of a product/program in order to improve the product/program being developed and the developers' capabilities to create things of this kind in future situations" (van den Akker, 1999, p. 3). Similarly, van den Akker quoted Richey and
Nelson's (1996) aim of Developmental Research within the Media & Technology sub-domain, citing "improving the processes of instructional design, development, and evaluation … based on either situation-specific problem-solving or generalized inquiry procedures" (Richey & Nelson, as cited in van den Akker, 1999, p. 4). The examples reveal different dimensions that all fall under the general heading of Developmental Research.

Developmental Research has had several labels over the years. It has been linked to Action Research and Formative Research/Evaluation, to name a couple. Although the terms are often used interchangeably, Reeves (2000a) made a clear distinction between research with development goals and those with action goals.

Development Research, which Reeves (2000a) also referred to as Formative Research, is focused on "…developing creative approaches to solving human teaching, learning, and performance problems while at the same time constructing a body of design principles that can guide future development efforts" (p. 7). He emphasized the idea that development research addresses both the practical and theoretical issues involved in the learning process.

According to Reeves (2000a), Action Research is focused on "…a particular program, product, or method, usually in an applied setting, for the purpose of describing it, improving it, or estimating its effectiveness and worth" (p. 7). This type of research, Reeves suggested, is aimed at solving a specific problem under select conditions within a limited time frame. Action Research, Reeves implied, does not have the same emphasis on theory as that of development focused research, thus it is not widely regarded as legitimate research. Reeves noted that under certain conditions, for example, reporting useful consumable results, Action Research could in fact be considered legitimate research.

Reigeluth and Frick (1999) discussed Formative Research and presented the argument

    …if you create an accurate application of an instructional-design theory (or model), then any weaknesses that are found in the application may reflect weaknesses in the theory, and any improvements identified for the application may reflect ways to improve the theory, at least for some subset of the situations for which the theory was intended. (p. 4)

They suggested that Formative Research is a subset of Developmental Research where the focus is on the development and testing of theories or models.

Action Research and Formative Research are only two of the terms associated with Developmental Research. Although they are perceived differently depending on the author, the common thread is the development and validation of useful instructional interventions.

Richey et al. (2004, p. 1099) presented a somewhat unifying theory of Developmental Research. They asserted that in its simplest form, Developmental Research can be either

    the study of the process and impact of specific instructional design and development efforts; or
    a situation in which someone is performing instructional design, development, or evaluation activities and studying the process at the same time; or
    the study of the instructional design, development, and evaluation process as a whole or of particular process components.

There is a clear distinction between the development process and the studying of the development process. This distinction is very significant because it provides a framework for the characterization of Developmental Research.

Richey et al. (2004) proposed two categories for Developmental Research and they are differentiated by the nature of their conclusions. The first category (Type 1 research) includes studies that generally have context-specific conclusions. These studies involve a specific product or program design, and the development of that product or program. Typically, the design, development, and evaluation process of the entire instructional intervention is documented. Type 1 studies usually result in consumables, for example the impact of a program, or suggestions for improving a specific product. Richey et al. (2004) cited as an example "McKenney's (2002) documentation of the use of CASCADE-SEA, a computer-based support tool for curriculum development" (p. 1102). Although McKenney and van den Akker (2005) confirmed that the study had a developmental approach, they also offered the following caveat: "The research approach in this study may be more specifically labeled as formative research, since it involved the actual design and formative evaluation of a program" (p. 47). While the distinction appears to be trivial, it would be interesting to know if the definition affected the study to the degree that the distinction was worth mentioning.

The second category (Type 2 research) includes studies that typically have generalized conclusions. Although they may involve the analysis of the design and development of a program or product, Type 2 studies generally occur after the development process is complete. Type 2 research studies are aimed at producing
knowledge, thus it is not uncommon for these studies to focus on model creation and validation. These studies usually produce, for example, new or enhanced models, or evidence of a model's validity. Richey et al. (2004) cited, as an example of Type 2 research, a study conducted by Jones and Richey (2000). The study was based on the principle of Rapid Prototyping, and they proposed a revised ISD model that included Rapid Prototyping. Although the study was characterized as Type 2, Jones and Richey (2000) noted "Many view RP methods essentially as a type of formative evaluation that can effectively be used early and repeatedly throughout a project (Tessmer, 1994)" (p. 63). Once again, the importance of the definition was acknowledged.

Many different research procedures can be used when conducting Developmental Research. The examination of a process, as is the case in Developmental Research, affords the possibility of gathering rich data which in turn increases the validity of the study. Considering that the setting is usually real-world based, these studies often employ traditional quantitative procedures and, additionally, they may also include qualitative aspects. Given the goals of Developmental Research, it becomes clear that the task of describing processes requires traditional and alternative perspectives. Many different instruments can be used to collect data, and many techniques can be used to analyze and report the data. A very good example of this is the CASCADE-SEA project (McKenney & van den Akker, 2005), where 108 instruments were used to collect data and several different procedures were used to analyze and report the data.

Issues and Challenges in Developmental Research

Conducting a study that has developmental goals is not a trivial task. While the potential exists for rich data and significant conclusions, rich data collection and analysis take time. Researchers acknowledge that this type of research requires an appreciable investment in time, and often forgo Developmental Research studies, instead focusing on scholarship that is quicker to complete and, more importantly, quicker to publish (Reeves, 2000a). In an environment where researchers must publish or perish, Developmental Research is often avoided.

Developmental Research, like educational research as a whole, seems to be almost disjoint from its stakeholders, namely educators. The perception that educational research is only useful to scholars is not totally without merit. D. W. Miller (1999) noted "Some scholars contend that education research can boast plenty of solid, useful findings about learning and reform. The problem is not that the research isn't good, they say, but that it doesn't find its way into the classroom" (p. A18). In that scathing article, D. W. Miller suggested that the education discipline is not only failing to shape classroom realities, but its research "is especially lacking in rigor and a practical focus on achievement" (p. A17). This position is supported by Reeves (1995) when he characterized significant portions of educational research as "pseudoscience" (p. 6).

Although the failings of education research are evident, most scholars acknowledge that hope exists. Recently, prominent scholars (Reeves, 2000a; Richey et al., 2004; van den Akker, 1999) have called for more Developmental Research to be conducted. They agreed that Developmental Research is one avenue through which academic and practical solutions can be found. Obviously Developmental Research will not solve all the problems in educational research and subsequently education, but considering what is at stake, Reeves (2000a) put it best when he said "Given the poor history of other approaches, I am increasingly convinced that if instructional technologists want to contribute to meaningful educational reform, they should pursue development goals" (p. 11).

Why Conduct Developmental Research?

There are many reasons why researchers should conduct Developmental Research. Consider that more and better research with development goals will essentially increase the credibility of the field. Also, consider that this type of research is more apt to bridge the gap between the scholar and the practitioner, thus increasing the value of research. Although those reasons are very good in and of themselves, perhaps the best reason to conduct Developmental Research is because it is socially responsible. Most of the literature reviewed for the current document shares the common theme that Developmental Research is simply the right thing to do. The sentiment is best expressed by Reeves (1995) when he suggested "It would seem that we stand a better chance of having a positive influence on educational practice if we engage in Developmental Research situated in schools with real problems" (p. 465), and concluded that "We cannot afford to lose another generation of researchers to the pursuit of research for its own sake. We must be more socially responsible" (Reeves, 2000b, p. 27).

Learning

This section examines some of the discourse pertaining to learning. It is important to address the issue of learning on both a concrete and an abstract level because the perceptions of what learning is dictate the implementations of instruction and the procedures for assessment. If the terminal objective is the production of an instructional intervention, then a clear understanding of learning is a logical starting point.
This discussion will form a part of the framework for this study. A clear definition of learning will be generated, thus facilitating the creation and development of the instructional intervention.

What is Learning?

In general, people agree that learning is important, but the causes, processes, and consequences are still contentious issues. The definition of learning changes depending on perspective, but there are common threads and similar themes across different perspectives. This review will not include a detailed discussion of different learning perspectives, but a general synopsis of five of the most common orientations to learning can be found in Merriam and Caffarella (1999, p. 264). The behavioral and the cognitive perspectives will be the primary focus for this review. The reason these two were selected is that they represent fundamentally different but similar propositions; the behavioral emphasis on the overt environment juxtaposed with the cognitive emphasis on covert mental processes. Although different, a thorough examination would reveal that behavioral and cognitive positions contain many common threads. McDonald, Yanchar, and Osguthorpe (2005) suggested that

In most fundamental respects, however, cognitivism and behaviorism are virtually indistinguishable—they are both rooted in a deterministic (mechanistic) ontology that views human action and learning as the necessary output of environmental inputs and biological conditions; and both are based on an empiricist epistemology that views the mind—including behavioral repertoires, schemas, mental models, and so on—as gradually constructed over time through the mechanistic processing of sensory impressions. (p. 91)
Scholars like Piaget, influenced by both behavioral and cognitive schools, concluded that learning is affected by both internal and external agents (Merriam & Caffarella, 1999).

The early behavioral definitions of learning focused on learning as a product. Jarvis (1987) provided an example when he quoted Hilgard and Atkinson's definition of learning as "a relatively permanent change in behaviour that occurs as a result of practice" (p. 2). For the purpose of scientific inquiry this definition is clear and concise, but the simplicity of the definition led to criticisms that questioned, for example, whether or not the behavior had to be performed in order for learning to have occurred, and also whether or not potential for change was taken into consideration (M. K. Smith, 1999). Jarvis, a critic of the behavioral definition, proposed an expansion to the behavioral definition and suggested that learning is "the transformation of experience into knowledge, skills and attributes, and to recognise that this occurs through a variety of processes" (p. 8).

Most of the critiques of the behavioral position are structured around the pervasiveness of the mind, that is, to what extent the mind controls the individual. While the arguments are philosophical in nature and not extensively treated in this review, it is important to note that most critiques of the behavioral position are erringly structured on trying to fit behaviorism within the cognitive scope. Burton, Moore, and Magliaro (2004) surmised that

Skinner's work was criticized often for being too descriptive—for not offering explanation. Yet, it has been supplanted by a tradition that prides itself on qualitative, descriptive analysis. Do the structures and dualistic mentalisms add anything? We think not. (p. 27)
Emurian and Durham (2003) suggested that within the behavioral context, the antecedents and interactions were sufficient in explaining learning. They classified their approach as "atheoretical" and asserted that

It is atheoretical in that it will focus on the interactions themselves as the explanation of the antecedents to knowledge and skill acquisition, and it will rely on those antecedents, rather than external explanatory metaphors, such as cognitive models, to explain the process and outcome of learning. (p. 679)

Jarvis (1987), in dissecting the behavioral definition of learning, proposed several critical flaws, one of which was:

First, if a person can be taught to think critically and also to be autonomous, then it is difficult to maintain that what is going on within a person in subsequent situations is merely the result of the environment, or determined by previous experiences. (p. 4)

Skinner (1972) proposed that the functions of autonomous man could be attributed to the controlling environment, and Jarvis (1987) considered this the point of contention. It should be clarified, however, that cognitive structures cannot be arbitrarily assigned to behavioral definitions. Skinner (1957) viewed thinking as follows:

The simplest and most satisfactory view is that thought is simply behavior – verbal or nonverbal, covert or overt. It is not some mysterious process responsible for behavior but the very behavior itself in all the complexity of its controlling relations, with respect to both man the behaver and the environment in which he lives. (p. 449)

The concept "critical thinking", which Jarvis mentioned, differs between the two
perspectives. This fundamental difference is not accounted for within the critique, as Jarvis used the cognitive values of thinking as a tool to comment on the behavioral position.

Cognitivists extended the behavioral definition of learning to include covert mental processes and the "capacity" (Schunk, 2000, p. 2) to learn. Currently, the generally accepted cognitive definition of learning emphasizes learning as a process. Tomporowski (2003) quoted Zimbardo and Gerrig's definition of learning as "a process that results in a relatively consistent change in behavior or behavior potential and is based in experience" (p. 7). This definition implies three characteristics. First, the term learning is only used in situations where an overt change in behavior occurs consistently over a given time frame. Secondly, because learning is defined as a covert process, the behaviors must be demonstrated to prove that learning has occurred. Once proven, the behavior will become a relatively permanent part of the learner's repertoire. Finally, learning can only occur with practice or experience (Tomporowski, 2003). These conditions are direct target goals for the current study.

Although Burton et al. (2004) proposed a scientific definition of learning, "…a function of building associations between the occasion upon which the behavior occurs (stimulus events), the behavior itself (response events) and the result (consequences)" (p. 9), this study will define learning as "a process that results in a relatively consistent change in behavior or behavior potential and is based in experience" (Tomporowski, 2003, p. 7). The latter definition captures both the behavioral background and the current cognitive influences of learning, and provides an opportunity for the measurement of overt learning artifacts.
Experience and Learning

Traditionally, experiential learning has been used in two ways: learning via contact with the phenomena being studied, and learning via the events of life. The literature presents discussion on the social construction of learning, and on reflective and non-reflective learning (Jarvis, Holford, & Griffin, 2003). Although the arguments are purposed at teasing out the attributes of learning, thus generating a clear definition, Jarvis et al. (2003) admitted that "…all learning is experiential and so any definition of learning should recognize the significance of experience" (p. 67).

One of the central tenets of the current study is that experience is vital to learning, but the wider social context of life experiences is not considered. This study is primarily concerned with the physical connection between a learner and a target behavior; the relationship being experience. The Kolb and Fry model (Jarvis et al., 2003, p. 59) is shown in Figure 1. It was developed in 1975 and highlights the importance of concrete experiences in the learning cycle.

Figure 1. The Kolb and Fry learning model (a cycle of concrete experience; observations and reflections; formulation of abstract concepts and generalizations; and testing the implications of concepts in new situations).

Although the model is critiqued as being too simplistic, it is generally considered the acceptable foundation for experiential learning discourse. The Kolb and Fry model is
also the basis for other, more complex models (Jarvis et al., 2003, p. 59) that seek to address the importance of secondary life experiences.

The importance of experience in learning is clear. What remains uncertain are the attributes of the experiences that optimally produce learning, and the conditions under which they occur. Ericsson, Krampe, and Tesch-Römer (1993) presented evidence that suggested expertise could be explained by extended deliberate practice as opposed to innate characteristics. While they admitted that the relationship is subject to various confounds, their study showed that expert behavior can be reliably linked to extended deliberate practice. Even more important was the assertion that immediate informative feedback is necessary for accurate performance, and that the lack of feedback seriously inhibits learning even for highly motivated learners. Pimentel (1999) went further and noted that learning environments must have high levels of interaction. Interestingly, Pimentel stated that "the environment does not provide a lesson in an explicit fashion, rather it is something that humans do naturally. The environment simply provides the means for lessons to be learned" (p. 77), hinting at the usefulness of unguided, non-explicit instruction.

Pimentel (1999) developed a complex virtual environment (LeProf) that provided learners with experiences that were both meaningful and transferable. The simulation allowed learners to manually input circuit parameters and experience different outputs. The interactive nature of the simulation was reported as successful because learners expressed positive attitudes towards the simulation. Interactive simulations like LeProf have benefits, but the current study contends that explicit instruction must be paired with a simulation for meaningful learning to occur. It is not sufficient to present an interactive
environment; instruction and guidance are necessary components of learning. Kirschner et al. (2006) supported the need for direct instructional guidance, stating

After a half century of advocacy associated with instruction using minimal guidance, it appears that there is no body of research supporting the technique. In so far as there is any evidence from controlled studies, it almost uniformly supports direct, strong instructional guidance rather than constructivist-based minimal guidance during the instruction of novice to intermediate learners. Even for students with considerable prior knowledge, strong guidance while learning is most often found to be equally effective as unguided approaches. Not only is unguided instruction normally less effective, there is evidence that it may have negative results when students acquire misconceptions or incomplete and/or disorganized knowledge. (p. 83)

It becomes evident that highly interactive simulation environments must contain elements of direct guided instruction. While experiential learning advocates placing the learner in contact with the phenomena to be studied, it is not sufficient to simply facilitate the contact. The learner must be guided within the medium such that important aspects are highlighted and irrelevant artifacts ignored. R. E. Clark (2005) addressed this issue, developing the Guided Experiential Learning (GEL) process for completing and validating the design and development aspects of instruction.

The current study fundamentally contends that experience in the form of concrete contact is vital in learning, but this experience must not only be opportunistic, it must be required. It is not sufficient to simply present the opportunity to practice or engage experiences; the practice must be required and contingently guided.
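As an illustration only, the following minimal sketch shows one way such a contention could be operationalized: an overt response is required before the learner may advance, and guidance is delivered contingent on the response given. The function names are hypothetical placeholders and do not describe the LOGIS implementation.

```python
# A minimal, hypothetical sketch of "required, contingently guided" practice.
# The callables present_item, is_correct, and give_guidance stand in for whatever
# mechanisms an instructional application actually uses; they are not drawn from LOGIS.

def run_required_practice(items, present_item, is_correct, give_guidance):
    """Require an overt response to every item and guide the learner contingently."""
    for item in items:
        while True:
            response = present_item(item)   # practice is required, not optional
            if is_correct(item, response):  # a correct form of the behavior was emitted
                break                       # only then does the learner advance
            give_guidance(item, response)   # guidance is contingent on the error made
```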
Attitudes and Learning

The relationship between learning and attitude is very important. Jarvis (1987) suggested that learning is "the transformation of experience into knowledge, skills and attributes" (p. 8). Knowledge and skill are measurable attributes, and are a genuine fit within the behavioral framework. Attitude, in its native form, does not foster accurate assessment because it is essentially a metaphor describing the state of, in this case, a learner. A definition consistent with the theme of this study must be developed if the question of attitude is to be addressed.

The study of attitudes is a major focus in the behavioral and psychological sciences. The volume of knowledge and research in this area is beyond the scope of this review, but at minimum the scholarship will provide a foundation for defining and describing attitudes.

Generally, attitudes are described in affective terms, for example, like/dislike and good/bad. An attitude towards an object is determined by the subjective values of an object's attributes and the strength of the associations (Ajzen, 2001). An individual may have many different beliefs towards a single object, but only those beliefs that are readily available in memory influence attitudes at a given moment (Ajzen, 2001); thus the temporal nature of attitudes is exposed. In describing attitude formation, Crano and Prislin (2006) stated

Today, most accept the view that an attitude represents an evaluative integration of cognitions and affects experienced in relation to an object. Attitudes are the evaluative judgments that integrate and summarize these cognitive/affective reactions. These evaluative abstractions vary in strength, which in turn has
implications for persistence, resistance, and attitude-behavior consistency. (p. 347)

This definition highlights the inherent subjectiveness of attitudes and further clarifies the notion that an attitude is a collection of judgments affecting behavior towards an object. Crano and Prislin (2006) further discriminated between attitude formation and attitude change by describing the Dual-Process model of attitude change:

Dual-process models hold that if receivers are able and properly motivated, they will elaborate, or systematically analyze, persuasive messages. If the message is well reasoned, data based, and logical (i.e., strong), it will persuade; if it is not, it will fail. Auxiliary features of the context will have little influence on these outcomes. However, if message targets are unmotivated (or unable) to process a message, they will use auxiliary features, called "peripheral cues" (e.g., an attractive source), or heuristics (e.g., "Dad's usually right") to short-circuit the more effortful elaboration process in forming an attitudinal response. Such attitudes are less resistant to counterpressures, less stable, and less likely to impel behavior than are those formed as a result of thorough processing. (p. 348)

This description introduces both the individual ability differences in learners and the motivational factors involved in attitude change. It is clear from the description that a capable and willing learner will change an attitude if the message is sufficiently strong.

The development or change of an attitude might occur over a period of time, or after isolated contact. If a new attitude is established and it is strong, it will be stable over time, it will be persuasion resistant, and most importantly it will be a predictor of future behavior (Ajzen, 2001).
It is important to remember that the exact relationship between behavior and attitude is still unknown, but it is widely accepted that each influences the other (Ajzen, 2001). The many variables involved in attitude formation or change make the measurement of this attribute very difficult.

Gagne et al. (2005) defined attitude as "an internal state that affects an individual's choice of personal action toward some object, person, or event" (p. 95). The importance of this definition is that it introduces the measurable construct choice. While this definition admittedly does not capture the entire scope of attitudes (Gagne et al., 2005), it is a consistent subset of the current literature and it is directly applicable to the current study.

The Gagne et al. (2005) definition can be viewed as a subset of the definition proposed by Crano and Prislin (2006) under two conditions. Firstly, "evaluative judgments" and "cognitive/affective reactions" are internal constructs and can be correctly labeled internal states that are removed from causal analysis. Secondly, the concepts of evaluate, integrate, and summarize are all behaviors directed towards a target recipient. Clearly, attitudes are a combination of complex behaviors and the result of many variables. The Gagne et al. definition, although less precise than the definition by Crano and Prislin, captures these complex behaviors and their internal antecedents and consequences in a measurable way – choice. The aim was not to trivialize or minimize the contributions of internal agents, but to develop a context through which attitudes can be objectively assessed while reserving cognitive and affective comments for the authors who work within those fields (Gagne et al., 2005, p. 94). The current study will use the
Gagne et al. (2005) definition of attitude as a base to describe the measurement and subsequent analysis of attitudes.

It is tempting to classify attitudes as predictors of future behavior, but that is only partially correct (Ajzen, 2001). Only a strong attitude will predict future behavior with acceptable accuracy. Weak attitudes are subject to external confounds and thus are not an accurate measure of future behavior. Within this context, the current study did not measure attitude as a predictor of future behavior; rather, given the temporal nature of attitudes, it measured attitude at a single point in time. That measurement will not be used as a predictor of behavior, but rather as a point of reference to describe the possible development of future behavior.

Learning Outcomes

Learning outcomes were classified based on Gagne et al.'s (2005) types of learning outcomes. This classification scheme was chosen because it is consistent with the general theme of the study and it logically fits with the other parts of the current study that are based on Gagne's work. Figure 2 shows Gagne et al.'s (2005) categorization of types of learning outcomes.
Figure 2. Gagne's categorization of learning outcomes.

A description of each outcome is provided below, followed by an example performance indicator.

Intellectual Skill

A class of learned capabilities that allows an individual to respond to and to describe the environment using symbols, for example language or numbers. This class is divided into hierarchical levels where each level is a prerequisite for the next.

Discriminations. Discrimination refers to the ability to identify differences in stimuli based on a given dimension. The learner must be able to discriminate between, for example, the ordinate and the abscissa, indicating that the learner can distinguish similar and different attributes of a stimulus.

Concepts. A concept allows the learner to classify stimuli based on general or common attributes. When a learner identifies the properties of an object that make it a member of a class, it is an indication that the learner has acquired the concept governing that object.
Rules and principles. Rules or principles are statements that describe the relationships among concepts. Most complex behaviors, for example swimming, are based on rules; thus engaging in complex behaviors is an indication of rule acquisition. It is not sufficient to state rules; they must be applied.

Problem solving. Problem solving is a process leading to an instructional outcome; it is not the outcome itself. Rules are sometimes used in problem solving, but this is not mandatory. Discovery Learning is an example of problem solving. Most problem solving involves the use of complex rules formed from simpler rules. Taken together, they can be used to solve a specific problem.

Cognitive Strategy

A cognitive strategy is an internal process whereby learners engage the way they remember, learn, attend, and think. There are many types of cognitive strategies, including rehearsal, elaboration, and organizing, but they are all methods that facilitate self-modification. Cognitive strategies are internal processes and cannot be readily observed. They must be inferred by querying other intellectual skills or obtained via self-reports.

Verbal Information

Verbal information, or declarative knowledge, provides a foundation for learners to build other skills. Verbal information knowledge is built on information, which is in turn built on data. For example, the time is 9:30 a.m. (data) and the behavior occurs every hour on the hour (information), which leads to the declarative knowledge that the behavior is not occurring at this time.
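Purely as an illustration, that example can be restated in code; the variable names below are hypothetical and have no connection to the instruments developed in this study.

```python
from datetime import time

# A restatement of the paragraph's example: data plus information (a rule)
# yields a declarative statement (knowledge). Names are illustrative only.

observed_time = time(9, 30)            # data: the time is 9:30 a.m.
occurs_on_the_hour = True              # information: the behavior occurs every hour, on the hour

behavior_occurring_now = occurs_on_the_hour and observed_time.minute == 0
print(behavior_occurring_now)          # False: the behavior is not occurring at this time
```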
Attitude

Attitude, defined in the "Attitudes and Learning" section of this document, is an internal state that affects an individual's choice of personal action toward some object, person, or event. An attitude can be measured by observing the choices learners make under certain conditions.

Motor Skill

Motor skills are learned capabilities reflected in bodily movements. Practice is a key issue in developing motor skills, and performance of the skills under specified conditions indicates acquisition of that skill.

Gagne et al. (2005) pointed out that although a majority of instruction includes most or all of the categories, classifying learning can be useful because grouping objectives reduces design work and facilitates optimal instruction sequencing and planning.

Relating Learning Perspectives to Instruction

Hartley (1998) proposed a set of principles that guide learning and, consequently, instruction. A detailed discussion of instruction is included later in this review, but Hartley listed principles that bridge theory and practice. These principles provide a framework where the abstractions of the theoretical perspectives can be transformed into concrete, usable artifacts. Although this is not an exhaustive set of principles, it does provide a first step in determining what instruction should look like based on behavioral and cognitive perspectives.

Key behavioral principles emphasized during the learning process include:
Activity is important. The probability of learning increases when learners are actively involved in the learning process.
Repetition, generalization, and stimulus discrimination are important if learning is to incorporate transfer to new contexts.
Reinforcement, especially positive reinforcement, is a powerful motivator.
The presence of objectives aids the learning process.

Key cognitive principles emphasized during the learning process include:

Instruction should be logically ordered and well structured. Well-organized instruction is easier to learn than poorly organized instruction.
The way the instruction is presented is important. Perceptual features of the task are important; thus it might be a good idea to, for example, give general outlines of tasks to be covered before the instruction begins.
Prior knowledge is very important, and learning that fits within or extends the learner's current knowledge base will probably be more positively received.
Individual differences in, for example, intellectual ability or personality, affect learning.
Cognitive feedback regarding success or failure increases the probability of learning.

It is evident that there is considerable overlap between the principles; thus it is possible to incorporate many or all of the principles into one instruction strategy.

The Assessment of Learning

Jarvis et al. (2003) discussed assessment as an extension of learning perspectives. They outlined the importance of assessment as
…how people's learning is assessed determines to a large extent what learning they think is important. Assessment is not therefore a 'neutral' technique for measuring the performance of learners. It has become a central feature of how education and training are organized in almost every society. (p. 158)

Although they did not treat specific assessment techniques in detail, they provided key features that broadly reflect the development of the assessment literature and are pertinent to the current study:

Formal and informal assessment. Formal assessment is more purposeful and organized than informal assessment.

Formative and summative assessment. While formative assessments are used to determine or alter current teaching or learning, summative assessments reflect what has been learned at the end. It must be noted that in practice, assessments are usually conducted for both formative and summative reasons.

Measurement. Assessment of learning may include some numeric representation of achievement, or in certain circumstances labels are more appropriate.

Judgment. Often, judgments are made regarding the level of mastery that a learner has achieved. An example of this is the allocation of a letter grade, for example an A, as an indication that the learner has mastered particular content. The teacher in this case makes a judgment as to the mastery level of
the learner.

Validity. Assessments that are valid measure only what they are supposed to measure. If a learner is to be assessed in a particular area of a subject, then the assessment instrument must measure that particular content area. Valid assessments can be used to accurately determine what mastery level the learner has achieved in a particular area.

Reliability. A reliable assessment will return consistent results for different learners who perform similarly. This means that across all learners, the instrument will return similar scores for learners at the same performance level.

Instruction

Programmed Instruction

Skinner

Probably no single movement has impacted the field of instructional design and technology more than Programmed Instruction. It spawned widespread interest, research, and publication. It was then placed as a component within larger systems movement, and finally, it was largely forgotten. (Lockee, Moore, & Burton, 2004, p. 545)

The term Programmed Instruction (PI) was probably the result of Skinner's 1954 paper entitled "The Science of Learning and the Art of Teaching". Skinner's (1954) paper was
mostly a concerned reaction to his daughter's classroom realities at the time, but it set the stage for Skinner's comment on the science and technology of human behavior.

Skinner (1958) formally proposed the programming of instruction as a way of increasing learning. He noted that education needed to become more efficient to deal with the steady population growth. Audio-visual aids were being used to supplement instruction, but Skinner felt that although content could be delivered via visual aids, "There is another function to which they contribute little or nothing. It is best seen in the productive interchange between teacher and student" (Skinner, 1958, p. 969). Skinner believed that instruction could be automatically and mechanically delivered while maintaining the teacher/learner interchange in a tutorial-style environment. His aim was to create an environment where the learner was not "a mere passive receiver of instruction" (p. 969).

The Sidney Pressey teaching machines of the 1920s, which Skinner used as a foundation, had several features that Skinner believed to be paramount. The most important feature of the teaching machines was that they permitted learners to work at their own pace, and facilitated learning by providing immediate feedback to the learner. Although the Pressey machines failed due in part to what Skinner called "cultural inertia" (Skinner, 1958, p. 969), the principles of immediate feedback and the learner as an active participant in instruction remained.

Skinner's idea of teaching machines was less an instrument and more a set of principles for bringing learning under the control of specific stimuli. His proposed machine had several important features that reflected Skinner's view on learning. The machine should require that the learner compose, as opposed to select, responses. The aim,
according to Skinner, is to promote recall in lieu of recognition. Another feature Skinner mentioned was the sequencing of learning contingencies such that a learner would traverse a set of small steps leading to the desired terminal behavior. It is within this context that Skinner introduced the "frame" as a presentation of visual material that required a response, and that response would then elicit immediate appropriate feedback. A frame, discussed later in this section, would contain content that would be differentially reinforced, bringing verbal and nonverbal behaviors under the control of specific stimuli.

The teaching machine was not designed to teach, but rather "It simply brings the student into contact with the person who composed the material it presents" (Skinner, 1958, p. 973). The machine Skinner envisioned would facilitate mass instruction while retaining the "good tutor" quality that Skinner insisted was important. Skinner (1986) envisioned that the personal computer could, for the first time, truly facilitate mass instruction while retaining the individualized characteristics of a personal tutor.

Instructional Characteristics

PI can be specified as a "sequential arrangement of reinforcement contingencies that cumulatively establish terminal repertories – as well as their stimulus control" (Davis, Bostow, & Heimisson, in press). PI encompasses several principles and techniques, but no general consensus exists as to a standard approach to PI, hence the reference to PI as an art form by Skinner (Skinner, 1958). Lockee et al. (2004) described some of the commonalities that exist across differing approaches to PI. They mentioned the following components:
Specification of content and objectives. Determining the content to be taught, including terminal behavior and measurable objectives.

Learner analysis. Gathering data, for example demographic data, on the learner in an effort to customize the instruction.

Behavior analysis. Analyzing the relevant behaviors to facilitate the sequencing of the instruction.

Selection of a programming paradigm. Determining the sequencing technique to be used. Examples of techniques include linear, branching, and intrinsic programming.

Sequencing of content. Several sequencing techniques exist, including a general linear progression based on objectives, and the RULEG system developed by Evans, Glaser, and Homme in 1960.

Evaluation and revision. Evaluating learner responses at the end of the instruction in an effort to fine-tune the content and sequencing.

Completion of the components increases the probability of a successful instructional program, but each component is not required.

Several concepts are important when constructing PI tutorials. At the fundamental level, the programmer is concerned with the creation of individual "frames",
and the sequencing of those frames. A frame is a singly displayed presentation of visual material (Skinner, 1958), requiring an overt response. The sequencing of the frames affects the effectiveness of the overall tutorial. The aim of the tutorial is to "shape" a desired behavior, and to accomplish this goal the tutorial must differentially reinforce the correct forms of the desired behavior. In essence, PI moves a behavior from a low probability of occurrence to a high probability of occurrence via the shaping process.

A tutorial can contain several different techniques that help in the shaping process. Techniques like fading, the use of thematic or formal prompts, the use of copy frames, and priming all help bring the learner under the control of specific stimuli. These common techniques are described below:

Prompts. The use of prompts is central in PI. Prompts act as a sort of hint for the learner, increasing the probability that the desired behavior will be emitted. Prompts can be formal or thematic. Formal prompts include a form of the desired behavior, for example, help letters for a missing key word. In this case, the formal prompts increase the probability of the desired response, that is, the construction of the key word. Thematic prompts generate desired responses by cuing the learner via contextual clues. The use of synonyms, antonyms, and common comparisons are all strategies that take advantage of context to provide thematic prompts.

Fading. Fading involves the removal of clues from individual frames as the tutorial progresses. The withdrawal of formal or thematic prompting clues helps the learner become independent of the need for such clues. Low probability responses become high probability responses and are emitted without the need for artificial prompting or clues.
Copy frames. Copy frames are unique in that they contain the desired response within the frame itself. Initially, a tutorial may require a learner to emit a response or behavior that has not yet been established. The copy frame presents the response within its visual presentation and requires that the learner repeat or copy that response. Copy frames can be used to prime responses so that they can be shaped later in the tutorial.

Priming. Priming involves the presentation of a stimulus that the learner must independently produce at a later time. Priming a response early in a tutorial is necessary if that response is to be shaped and later produced with and without clues.

These techniques can be summarized and contextualized with the following paragraph: Programmed Instruction is an effective teaching tool. Desired responses, which initially have a very low probability of occurrence, are primed using copy frames that require that the learner simply repeat the provided stimulus. After priming, the probability that the currently weak response will occur is increased by a process called shaping. Shaping involves differentially reinforcing correct forms of the response. The learner is successively reinforced for correct responses, and these responses become progressively more difficult as prompts are systematically removed. The incremental withdrawal of prompts is called fading, and it is used to help transform a low probability response into a high probability response. Finally, a learner must construct the response without the aid of either formal or thematic prompts; thus the response is now under the control of the stimuli presented in the tutorial.
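To make these techniques concrete, the sketch below models a short frame sequence of the kind just described: a copy frame primes the key word, formal prompts are progressively faded, and the terminal frame requires the unprompted constructed response. The data structure and field names are hypothetical illustrations, not a description of how any particular PI system, including LOGIS, represents its frames.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One singly displayed presentation requiring an overt, constructed response."""
    text: str          # the visual material shown to the learner
    answer: str        # the desired constructed response
    prompt: str = ""   # formal prompt (e.g., help letters); empty once fully faded

# Priming, fading, and a terminal unprompted frame for the key word "ordinate".
frames = [
    Frame("The vertical axis of a graph is called the ordinate. "
          "Copy the term: the vertical axis is the ________.", "ordinate", prompt="ordinate"),  # copy frame
    Frame("The vertical axis of a graph is the ________.", "ordinate", prompt="ord_____"),      # formal prompt
    Frame("The vertical axis of a graph is the ________.", "ordinate", prompt="o_______"),      # prompt fading
    Frame("The vertical axis of a graph is the ________.", "ordinate"),                         # fully faded
]

def run_frames(frames, ask):
    """Present each frame in order, require a constructed response, and confirm it immediately."""
    for frame in frames:
        while True:
            response = ask(frame.text, frame.prompt)        # an overt, constructed response is required
            if response.strip().lower() == frame.answer:    # a correct form was emitted
                break                                       # immediate confirmation; advance
            # otherwise the learner responds again to the same frame
```

Differential reinforcement enters through the immediate confirmation of correct forms; a fuller program would also employ thematic prompts and vary the frames presented when errors persist.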
Skinner (1958) admitted that programming instruction was an art form, and envisioned a day when it could be less an art and more a technology. He did find conciliation in the fact that, art form or not, "it is reassuring to know that there is a final authority – the student" (p. 974).

One criticism of PI is that this type of instruction is only effective for low-achieving learners. The validity of this claim remains uncertain because "low achieving" is not usually defined within the context of the criticism. There are, however, situations where PI is effective and situations where it is not. Emurian and Durham (2003) proposed that

Programmed instruction approaches may be best suited for students who have not mastered the art of studying, and one important benefit of completing a programmed instruction tutoring system is that it teaches a learner how to acquire knowledge independent of the domain. The ultimate objective, then, of a programmed instruction tutoring system is to free the learner for advanced study undertaken with traditional forms of knowledge codification, such as a book. (p. 694)

Someone who has not learned how to study cannot be immediately labeled a low achiever, but that person has a high probability of performing poorly because they have not learned to study. PI in this instance can be effective.

Learners who are motivated and have mastered self-management and studying can also benefit from PI. These learners are more likely to quickly and successfully advance through PI frames, and thus learn at a fast rate. An interesting proposition is that for motivated learners, learning is independent of technological support. This would suggest that the value of PI or another instructional system is determined by the nature of the learner (Emurian & Durham, 2003).
Research on PI

Analysis of research on PI, and on educational technology as a whole, is often preceded with disclaimers. Lockee et al. (2004) included a sub-section (20.4.1) titled "A Disclaimer" where they proceeded to characterize historical research with terms like "lacks validity" (p. 552) and "buyer beware" (p. 553). These characterizations represent the current view scholars have of educational technology. Today, most scholars agree that historical research on PI is littered with some combination of wrong assumptions, bad design, incorrect analysis, or just plain insignificance (De Vaney & Butler, 1996; Gredler, 2004; Reeves, 2000a). PI, as a consequence of the behavioral movement, is assumedly one of the main culprits because "Programmed Instruction (PI) was an integral factor in the evolution of the instructional design process, and serves as the foundation for the procedures in which IT professionals now engage for the development of effective learning environments" (Lockee et al., 2004, p. 545).

Historical PI research studies were often comparison studies, where PI was compared to traditional instructional methods. Many of these studies had confounding variables and frequently suffered from novelty effects and sampling errors. Lockee et al. (2004) cited several examples where obvious sampling or design errors were made, thus justifying the need for caution when analyzing the results. Most of the comparative studies found that learner performance after instruction using PI was either better than or the same as traditional instruction, hence "no significant difference". Most studies, however, noted that PI tended to produce equal or better results in less time. The no-significant-difference mantra that would eventually label the entire field would in this case be positive, because PI could produce at least the same results as traditional methods in less time and
at a cheaper cost. This reality would become the primary reason why PI was adopted and adapted for use by the military, an area where efficiency and mass distribution were at a premium.

Early PI research contains documented errors and flaws, but the effort did yield several principles that are still relevant today. Lockee et al. (2004) described several key components of PI that are general and can be associated with any instructional intervention. They listed the key components as mode of presentation, overt versus covert responses, prompting, confirmation, sequence, size of step, error rate, program influence by age level, type of response (constructed vs. multiple choice), and individual versus group uses. Three of these components are worth further description because they have significant implications for current instructional practices.

Overt Versus Covert Responses

The issue of overt versus covert responding is a central issue in instruction. Is an observable response necessary for meaningful instruction? Lockee et al. (2004) cited various studies where researchers found no significant differences in overt versus covert responding and several studies where significant differences were found. They cited two studies that were done at Western Michigan University, a doctoral dissertation done by S. Shimamune in 1992, and a master's thesis by P. Vunovick in 1995. Those two studies found no significant differences in the effectiveness of instruction requiring overt responding compared to instruction that used covert responses. Lockee et al. also cited Tudor and Bostow (1991), and Tudor (1995), and these found differences where overt responding significantly outperformed covert responding. The significance of the above two pairs of studies is that they were all used as the basis for a study by M. L. Miller and
Malott (1997). M. L. Miller and Malott hypothesized that the performance-based incentives that were not a part of the Tudor studies could explain the difference between the previously mentioned two pairs of studies and other studies that were grouped respectively. M. L. Miller and Malott concluded that "the superiority of overt responding is a robust enough phenomenon to occur even when an incentive is provided for performance on the posttest" (p. 500). Although the M. L. Miller and Malott study validated the need for overt responses in instruction, closer inspection reveals that the study could be criticized for several reasons. Firstly, the sample was self-selected, thus exposing the study to self-selection bias. Secondly, the final sample sizes were small. Finally, it appears that both groups received some form of incentive, but the exact distribution and criteria of the incentives were not clearly described in the journal article. These cautions do not invalidate the results, but they certainly reduce the authority of the study.

Kritch and Bostow (1998) examined the issue of constructed responses in an effort to investigate the importance of a high rate of response construction. They found that high-density responding (overt) significantly outperformed low-density responding (covert), and the performance gains were also reflected in the applied task that was assigned. Kritch and Bostow observed that there was no statistical difference between high-ability and low-ability learners, where the ability measure was based on self-reported grade point averages. In addition, they found that higher learner gains occurred when instructional time was greatest, and carefully noted that the results were expected to be generalizable. None of the criticisms of M. L. Miller and Malott (1997) can be applied to the Kritch and Bostow study.
The example sequence presented above highlights the fact that research on this topic remains inconclusive, and the general recommendation from authors is that more work needs to be done in this area.

Constructed Response vs. Multiple-Choice

The nature of responses is an important consideration in the design of instruction. Several studies in this area found no significant difference between instruction requiring constructed responses and instruction using multiple-choice selections. The theoretical issue in this area is whether the distracters in multiple-choice questions have an adverse effect on shaping. Currently, most instructional interventions use a combination of the two, using each where applicable. Lockee et al. (2004) did not mention the Kritch and Bostow (1998) study in this section, but the latter does highlight the benefits of constructed responses. Although they did not make comparisons with multiple-choice, Kritch and Bostow asserted that "…frequent overt constructed responding within instructional contingencies is a critical design feature for effective computer-based instruction" (p. 395). The construction rather than selection of responses would, in this case, be more appropriate if the desired outcome was the production of acquired skills or knowledge.

Confirmation

Lockee et al. (2004) acknowledged the vagueness of the term confirmation. The differences between feedback, confirmation, and reinforcement are not only philosophical, because the implementation of each implies a different relationship between the learner and the instruction. As an example, Lockee et al. noted that Skinner considered confirmation to be positive reinforcement in the operant conditioning process, while others disagreed with this position. The underlying issue is that the terms are
sometimes erringly used interchangeably in the literature, and this fact might affect research results.

In the research they examined, Lockee et al. (2004) found that most studies reported no significant difference. These findings were viewed with caution because the research was labeled as incomplete and lacking information on issues like feedback delay and feedback amount. In addition, some fundamental questions remain unanswered, for example, are signs of progress (correct answers) a significant reinforcer?

Current and Future Research

Research in Educational Technology has shifted from behavioral principles to cognitive and constructivist approaches. This shift has consequently led to a sharp decrease in research on PI. The 1963 PLATO (Programmed Logic for Automatic Teaching Operations) project at the University of Illinois is an example of the paradigm shift in research. Originally rooted in the behavioral approach and PI, it has changed to its current, more constructivist form (Lockee et al., 2004). Even in its new form, however, PLATO still incorporates behavioral principles, for example, the use of immediate feedback, learner assessment, and performance evaluation.

Although current research is scarce, it does occur. One example of current PI research is Davis et al. (2005). They investigated incremental prompting as a feature of online PI and found that, when compared to standard PI and simple prose presentation, using incremental prompting produced significantly better scores on a subsequent applied task that required the learners to write three essays based on the instruction. Davis et al. showed that PI was a viable instructional method reaching higher levels of Bloom's
taxonomy, but the results were tempered by the fact that incremental prompting takes a significantly longer time for learners to complete.

In their paper, McDonald et al. (2005) discussed the implications of PI and proposed that certain assumptions led to its decline and, if unaddressed, will adversely affect the future of PI:

Ontological determinism. The student's behavior and learning are governed by natural laws.

Materialism. Only the overt, measurable behaviors are important.

Social efficiency. The imperative to reduce cost and deliver instruction to wider audiences significantly affected educational practices.

Technological determinism. The view of technology as the most important force in change significantly affected educational practices.

These assumptions, according to McDonald et al., created an environment where PI materials were rigid and could not be individualized for a particular setting or learner. The prepackaged materials were consequently only useful under specific circumstances, but they were widely distributed and expected to be productive under all conditions. PI thus fell into disfavor because it could not work under all conditions.

If current and future researchers are to learn from PI, they must carefully consider the assumptions to avoid the pitfalls of each. Online learning, which is currently the leading example of instructional technology, assumes both social efficiency and
technological determinism (McDonald et al., 2005). It is now frequently designed for the "lowest common denominator" and as cheaply as possible, to the detriment of individualized instruction. To curb this trend, it is important to address these assumptions and adopt creative strategies, for example, using multiple instructional methods within individual instructional units.

McDonald et al. (2005) drew parallels between PI and current forms of instructional technology. It is clear that PI is deeply rooted in the history of instructional and educational technology, and it is reasonable to suggest that the future of both is interconnected. The principles of PI can inform current and future instructional development, but they must be examined and adapted to meet the needs of individual learners. The examination of fundamental assumptions thus becomes paramount. This, however, is not particular to PI; all instructional methods should be carefully examined.

Adaptive Instruction

Park and Lee (2004) defined adaptive instruction as "…educational interventions aimed at effectively accommodating individual differences in students while helping each student develop the knowledge and skills required to learn a task" (p. 651). They presented three ingredients of adaptive instruction: first, the availability of many goals from which to choose; second, the ability to adapt to the learner's initial abilities, then adjust to the particular strengths and weaknesses of the learner; finally, the ability to help the learner master the instruction and then apply that mastery to real-world situations.

Papanikolaou, Grigoriadou, Kornilakis, and Magoulas (2003), in their article on Adaptive Educational Hypermedia systems, further clarified the concept of adaptation. They surmised that in order for instruction to be adaptive, the educational environment
must make adjustments to accommodate the learner's needs, maintain the appropriate interaction context, and increase the functionality of the educational environment by personalizing the system. They noted two forms of adaptation: adaptivity and adaptability. Adaptivity, in their description, refers to interactions where the system uses data provided by the learner to modify its controls or functions. Adaptability refers to the ability of the system to support learner-initiated and learner-controlled modifications. The basic premise is that adaptation involves changes that are either system determined or learner determined.

Park and Lee (2004) listed three approaches to adaptive instruction:

Macro-adaptive instructional systems. Functions such as instructional goals and the delivery system are based on the learner's ability and achievement level.

Aptitude-treatment interaction models. Specific instructional strategies are delivered based on specific characteristics of the learner. This model relies on the accurate identification of the most relevant learner traits in selecting the best instructional strategy.

Micro-adaptive instructional models. Instruction is adapted based on the learner's progress through the instruction. The system continuously monitors the learner's progress and adapts the instruction based on the current performance.

There are advantages and disadvantages to each approach, but for the purposes of the current study, the focus will be on micro-adaptive instructional models. These
models are the most relevant because PI fits comfortably within their scope. PI, as previously mentioned, can be developed using branching methods. Branching PI is inherently micro-adaptive in nature because an a priori logic scheme can be used to determine the behavior of the adaptation, that is, the branch. The constant monitoring of the learner's performance, which is a trademark of PI, easily satisfies the requirement that micro-adaptive instruction diagnose the learner's current performance and adapt the instruction based on that diagnosis.

Micro-Adaptive Instructional Models

Micro-adaptive instructional systems rely on the ongoing process of learner diagnosis to determine an optimal instructional path (Park & Lee, 2004). Although the fundamental issue of continuous diagnosis is prevalent in historical attempts at this model, several schools of thought have emerged. Park and Lee (2004) discussed several views, noting subtle differences between the implementations of micro-adaptive models. For example, perspectives that adapt the sequence of instruction can be compared with those that adapt the volume of instructional content delivered. The implementations have similarities, but they are conceptually different and have important implications.

Several micro-adaptive models have been developed and researched over the years. Park and Lee (2004) discussed several models, but the Trajectory and the Bayesian models are of particular importance. Specific features from these models can be adapted or discarded, while others can be modified to produce a tailor-made foundation for the current study.

The Trajectory model as described by Park and Lee (2004) uses numerous learner traits to predict the next step in instructional sequences. In theory, the Trajectory model
uses the learner traits (group variables) to determine an optimal instructional approach, although individual learner attributes are included during diagnosis and prescription of instructional steps. This model is not natively compatible with PI. Firstly, the Trajectory model accommodates group properties and descriptions of individual internal states to predict, at minimum, the sequence of instruction. Secondly, the large number of learners necessary to generate an effective predictive database is also a limiting factor in the implementation of this model. Finally, this model uses only a few variables because accounting for a large number of variables is developmentally unrealistic. Those three examples taken in isolation make the case that this model is irrelevant to PI, but it was based on this model that Ross and Rakow developed an adaptive system that can be modified for use with PI. The Ross and Rakow model that is cited in Park and Lee (2004, p. 664) uses the Trajectory model to determine the best strategy for selecting the appropriate number of examples in a given instructional sequence.

The core tenets of the Ross and Rakow model can be used within the PI context because it is possible to add examples to an instructional sequence without altering the overall direction and linearity of the content. The current study adapts the underlying Ross and Rakow principles and delivers examples and non-examples based on an evaluation of the learner's current response. The overall sequence of instruction is not altered, except that examples and non-examples are inserted into the instructional sequence as necessary.

The Bayesian model presented in Park and Lee (2004) uses a two-step approach for adapting instruction. First, a pre-instructional measure is delivered and its results are used to determine the treatment that the learner will experience. Second, the instruction
is adjusted based on the continuous monitoring of the learner's performance. Bayes' theorem of conditional probability is then used to predict mastery level based on the pre-instructional measure and the on-task performance.
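The prediction step can be written with Bayes' theorem; the symbols below (a mastery state M and observed on-task evidence E) are generic illustrations rather than the notation used by Park and Lee.

```latex
% Bayes' theorem applied to predicting mastery (M) from observed evidence (E):
% the prior P(M) comes from the pre-instructional measure, and the likelihoods
% come from the learner's on-task responses.
P(M \mid E) = \frac{P(E \mid M)\, P(M)}{P(E \mid M)\, P(M) + P(E \mid \neg M)\, P(\neg M)}
```

Estimating those likelihoods reliably is precisely what demands the large body of historical learner data discussed next.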
Like the Trajectory model, the practical implementation of the Bayesian model requires a large sample before predictions are reliable. The sample size requirement is a serious limiting factor, and, in addition, generating a reliable distribution based on prior learners' scores and historical data is not a trivial task. The complexities of Bayes' theorem need not be applied if simpler algorithms can be used to accomplish the same task.

The key concept from the Bayesian model that is usable within the scope of the current study is the use of a pre-instructional measure as a means of determining a starting position for instruction. The only caveat to delivering a pre-instructional measure is that the purpose of the measure must be made clear in advance. Papanikolaou et al. (2003) justified the use of a self-report pre-instructional measure as the basis for their INSPIRE system. In this system, learners reported their learning style preference and the subsequent instruction was based upon that report. An interesting observation is that at any point during the treatment, learners could change their learning style preference. While it is puzzling that the system was designed to allow learners to change their learning style preference in the middle of instruction, it can be argued that the inclusion of this function indicates that the developers were confident in the validity and dependability of self-reported data, and also that they were comfortable with premising their design on a purely theoretical knowledge base – learning styles.

Self-reported data can be very useful, but it should not be a fundamental and critical part of the instruction. The argument can be made that most learners are not aware of how they learn best, so they might not be in the best position to comment on their own learning style. Gilbert and Han (1999) developed an adaptive system, ARTHUR, that was sensitive to learning styles without using self-reports. In their system, learners were exposed to instruction presented in audio, video, and text formats. Learners' performance on each item from each format would form the basis for the branching of future learners; thus a robust classification matching learner to learning style would evolve as more and more cases populated the system. Clearly the need for a large user population is a disadvantage of the system, but the accommodation of individual learning styles without the use of self-reports is a feature worthy of comment.

The Trajectory and Bayesian models provide key features that can be applied to the current study. The idea of a pre-instructional measure to determine starting position, and constant monitoring and adaptation to refine instruction, are complementary concepts that can coexist in any well-designed system. This idea is supported by Stoyanov and Kirschner (2004) in their statement:

An adaptive e-learning environment should propose opportunities for pre-assessment of different personal constructs, for example, through tests or checklists for knowledge, and questionnaires for learning styles and learning locus of control. Based on a personal profile, the user receives suggestions of what and how to study. This pre-specified adaptation could coexist with dynamically adapting instruction while tracking learner behavior. Most current theory and


The Trajectory and Bayesian Models provide key features that can be applied to the current study. The idea of a pre-instructional measure to determine starting position, and constant monitoring and adaptation to refine instruction, are complementary concepts that can coexist in any well-designed system. This idea is supported by Stoyanov and Kirchner (2004) in their statement:

An adaptive e-learning environment should propose opportunities for preassessment of different personal constructs, for example, through tests or checklists for knowledge, and questionnaires for learning styles and learning locus of control. Based on a personal profile, the user receives suggestions of what and how to study. This pre-specified adaptation could coexist with dynamically adapting instruction while tracking learner behavior. Most current theory and practice in e-learning suggest either pre-assessment or monitoring adaptation, but rarely a combination. (p. 50)

The current research greatly benefits from the clarity that a foundation in PI affords. As discussed earlier, the programming of instruction is paramount in PI. Given this fact, it is clear that the ability to alter the sequence of instruction (frames) would not be of any value because shaping cannot occur in the presence of (possibly) disjoint frames. The sequencing of frames, which is the essence of programming, would be broken if the system were to either support the learner's ability to alter the instruction sequence – adaptability, or determine the instruction sequence via an algorithm – adaptivity. In this case, the best solution, conforming to the requirements of PI, would be a system that adapts the volume of instruction that it presents to the learner. Within the PI construct, this can be implemented in terms of how many examples and non-examples the learner experiences, and the conditions under which the adaptivity occurs.

Ray (1995b) described a system called MediaMatrix that facilitated adaptive instruction. In his system, tutorials can be built such that learners are exposed to alternate forms of the current instruction based on their current performance. This system was built on the behavioral principles of shaping and successive approximation; thus the branching to an alternate option would not violate the behavioral principle of a systematic linear progression to the terminal behavior. It is interesting to note that the system does not itself shape performance, but instead, it provides the environment through which a programmer could produce instruction that shapes appropriate behavior.

In addition to dynamically branching to alternate instruction, Ray (1995a) also described the use of pop quizzes as a means of branching control. In this case, the learner


is presented with a pop quiz selected randomly from any previously completed topic. An incorrect response branches the learner to a lower-level programming density, assuming that the learner has not mastered the concept and needs additional help. The lower-level programming density refers to instruction containing more prompts and extra stimuli for the benefit of the learner. If the learner's performance does not improve as the levels decrease, thus increasing the programming density, the learner is presented with alternate forms of the instruction. The current study did not implement the pop quiz mechanism described in Ray (1995a), but it implemented branching to alternate forms of instruction, where the presentation of additional examples or non-examples can be described as an alternate form of instruction.
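The following minimal sketch shows one way such density-based branching could be expressed; the level definitions and the decision to branch to an alternate form after the densest level are hypothetical and are not taken from Ray (1995a, 1995b) or from the LOGIS implementation.

    # Illustrative sketch of branching by programming density: incorrect responses move
    # the learner to denser levels (more prompts and examples); if the densest level
    # still does not help, an alternate form of the instruction is presented.
    LEVELS = [
        {"prompts": 1, "examples": 2},  # leanest programming
        {"prompts": 2, "examples": 4},  # denser programming
        {"prompts": 4, "examples": 6},  # densest programming before an alternate form
    ]

    def next_step(level, answered_correctly):
        """Return (new_level, use_alternate_form) after a response at the given level."""
        if answered_correctly:
            return max(level - 1, 0), False   # thin the programming back out
        if level + 1 < len(LEVELS):
            return level + 1, False           # add prompts, examples, and non-examples
        return level, True                    # densest level failed: use an alternate form

    level, use_alternate = 0, False
    for correct in [False, False, False]:
        level, use_alternate = next_step(level, correct)
    print(level, use_alternate)  # 2 True

In the terms used above, moving to a denser level corresponds to presenting more examples and non-examples, which is the form of adaptivity the current study implements.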


Research on adaptive instruction remains unclear. Park and Lee (2004) reviewed studies where adaptive instruction was demonstrated to be effective, and also studies where no significant difference existed. This fact is highlighted by Song and Keller (2001), who investigated motivationally adaptive instruction. They found both significant and non-significant differences among conditions, for example, motivationally adaptive instruction, motivationally non-adaptive instruction, and minimal non-adaptive instruction. In certain circumstances, adaptive instruction was superior, but in others, no significant difference was observed. The study concluded that adaptive instruction is feasible and can be effective, but the authors conceded that additional research would be needed to verify their inferences as to why the differences, or lack thereof, occurred.

The inconclusiveness of the research does not undermine the importance of adaptive instruction. On the contrary, adaptive instruction is almost a requirement in any good instructional system. Park and Lee (2004) suggested that "A central and persisting issue in educational technology is the provision of instructional environments and conditions that can comply with individually different educational goals and learning abilities" (p. 1). The literature emphasizes that although adaptive instruction is viewed as important, the tools and methods required to successfully implement effective systems are still in their infancy (Park & Lee, 2004).

Intelligent Tutoring Systems

Intelligent Tutoring Systems (ITS) must "…(a) accurately diagnose students' knowledge structures, skills and/or styles using principles, rather than preprogrammed responses, to decide what to do next; and then (b) adapt instruction accordingly" (Shute & Psotka, 1996, p. 576). An ITS is designed to mechanically foster a dialogue resembling the interaction between a teacher and a learner (Park & Lee, 2004), and this interaction is mediated by complex algorithms and artificial intelligence (AI) methods for describing the teaching and learning process.

ITS are designed to expertly adjust to the learner's performance, thus maximizing the instructional value of the content. The emphasis on expert behavior is highlighted by Shute and Psotka (1996) when they suggested that the system must behave intelligently, not actually be intelligent as in human intelligence. This inherent complexity adds to the flexibility of an ITS. Pre-determined branching rules that are normally a part of adaptive systems are replaced by more dynamic methods that seek to abstractly represent knowledge.

ITS are conceptually very powerful and promising, but their complexity is a deterrent to practical application. AI methods of representing knowledge are not only difficult to invent, but "…using fancy programming techniques may be like using a


shotgun to kill a fly. If a drill-and-practice environment is all that is required to attain a particular instructional goal, then that's what should be used" (Shute & Psotka, 1996, p. 571). This review does not consider ITS in any depth because, given the nature of the current study, complex algorithms and AI methods might be considered shotguns.

Simulations

What is a Simulation?

There is considerable debate over the definition of simulations (Alessi & Trollip, 2001). Current perspectives view simulations as complex visual environments involving many user-accessible variables. Gredler (2004) characterized simulations as open-ended, evolving, and having many interacting variables. She defined a simulation as "an evolving case study of a particular social or physical reality in which the participants take on bona fide roles with well-defined responsibilities and constraints" (p. 571). She subsequently presented several important attributes that delineate a simulation from other forms of instruction:

1. A complex real-world situation modeled to an adequate degree of fidelity.
2. A defined role for participants, with constraints.
3. A data-rich environment.
4. Feedback in the form of changes to the modeled situation.

In her estimation, solving well-defined problems does not constitute a simulation; instead, a simulation is an ill-defined problem with many variables and many possible solution paths. She used the term "Deep Structure" (Gredler, 2004, p. 573) to describe a central feature of simulations. Deep structure does not only suggest multiple solution paths, but includes the fact that a participant must be a part of the experience such that each action


58 has valid consequences associated with that action. Gredler (2004) viewed complexity as a necessary tra it of simulations, and while this is true in some cases, simulations purposed at learning need not contain many variables and multiple solution paths. Alessi and T rollip (2001) supported the broader characterization of simulations when they not only described various different types of simulations but defined educational simulations as “a model of some phenomenon or activity that users learn about through interaction with the simulation” (p. 213). They also noted that “A simulation doesn’t just replicat e a phenomenon; it also simplifies it by omitting, changing, or adding details or features” (p. 214). Alessi and Trollip’s definition is significant in that it does not comment on the c omplexity or the degree of fidelity required before an application can be called a simu lation. This difference is not trivial because the current study assumes that the prevaili ng perception of simulations as complex environment akin to flight simulators is to o narrow and thus lacks the ability to support applications geared for educational purpose s. The idea of complexity also affects the fidelity of simulations. High degrees of fidelity are not necessarily a prerequisite for sim ulations. The relationship between fidelity and performance is not linear (Alessi & Tr ollip, 2001), thus more visually realistic simulations are not necessarily better fo r learners. Couture (2004) found that some high fidelity characteristics of the simulated environment resulted in higher credibility but other high fidelity characteristics had the opposite effect. Couture (2004) attributed the results to particular learner charac teristics.


59 Games, Virtual Reality and Microworlds One reason the definition and character of simulati ons remain unclear is the fact that similar but different technologies are often d efined as and used synonymously with simulations. Games, Virtual Reality and Microworld s can all have simulation components, but they differ on at least an abstract level from a simulation. A game can contain simulation elements and likewise a simulation can contain gaming components. Educational games are defined a s competitive exercises where the goal is to win (Gredler, 2004). The key features i nclude competition with other players or the environment, reinforcement for correct actio ns in the form of advancement, and actions that are governed by rules that may be imag inative. Gredler (2004) listed four reasons for using educat ional games: practice, identification of weaknesses, revision, and develop ment of new knowledge and/or skills. The primary difference between an education game an d a simulation is competition. Games use competition primarily as a motivator, but this is absent from simulations. There are however educational simulation games, tha t combine both concepts in an effort to facilitate learning. While competition is the d ominant aspect of simulation games, for example SimCity, the simulation also provides an op portunity to learn the underlying model. McLellan (2004) defined Virtual Realities as “as a class of computer-controlled multi-sensory communication technologies that allow more intuitive interactions with data and involve human senses in new ways” (p. 461) Virtual reality applications allow very high degrees of interaction, allowing the user to have experiences from many viewpoints using multiple senses. The enhanced int eraction is usually experienced using


60 specialized equipment such as head mounted monitors and gloves that provide tactile feedback. Although virtual reality has different classificati ons depending on the author involved, a few common threads exist amongst the co mpeting views. For example, virtual realties model environments that may be phy sical or abstract, and the user is able to access and manipulate aspects of the model and r eceive action-dependent feedback (Reiber, 2004). Microworlds are environments where learners explore and build knowledge. Reiber (2004) explained that the learner must “get it” (p. 587) before an environment is considered a microworld, thus the definition of mic roworlds is tied to their function. According to Reiber, microworlds are: domain specif ic, facilitate the easy acquisition of the domain-required skills, are derived from a cons tructivist perspective, and afford immersive activities that are intrinsically motivat ing. Reiber (2004) distinguished microworlds from simula tions and other tools by discriminating between using models and building mo dels. Simulations allow learners to work with a pre-existing model, thus manipulating r eady-made tools presented on the interface. Contrary to simulations, microworlds al low learners to build the tools they will use. In this case, the learner is not limited to t he variables and parameters described by the interface. Games, virtual reality and microworlds each exist a long a continuum but it is not immediately clear where one begins and the others e nd. Although they can be defined and described separately, each can at some level su bsume or be subsumed by simulations. This reality does not undermine the credibility or usefulness of simulations, instead it


highlights the fact that good instruction can and sometimes should contain various methodologies.

Types of Simulations

The cited works, Gredler (2004) and Alessi and Trollip (2001), have slightly differing definitions of simulations. Although there is considerable overlap, the differences in the conceptualization of what exactly is a simulation also lead to similar but different classifications of simulations. Gredler (2004) defined two broad types of simulations. First, experiential simulations provide a means for learners to interact with systems that may be too costly or dangerous for real-world experimentation. This type is described as a "social microcosm" (p. 573) where individuals have different responsibilities in the complex evolving scenario. It is further divided into:

Social process: Consequences are embedded in the scenario.
Diagnostic: Consequences are based on optimal responses.
Data management: Consequences are embedded in the relationship between associated variables in the system.

Second, symbolic simulations are models of specific functions or phenomena of another system. In this case, the learners play the role of researchers investigating an external system. The type is further subdivided into:

Laboratory-research simulations: Individuals interact with a complex system to solve a problem.


System simulations: Individuals interact with a complex system to diagnose a problem with that system.

Alessi and Trollip (2001) categorized simulations in a significantly simpler manner. They used two categories, "About something" simulations and "How to do something" (p. 214) simulations. These were further subdivided as follows:

"About something" simulations
Physical: Learners interact with the representation of a physical object or phenomenon.
Iterative: Learners interact with the representation of a physical object or phenomenon, but do so at discrete points where they vary inputs and observe the new simulated result.

"How to do something" simulations
Procedural: Learners interact with a system designed to teach a sequence of steps to accomplish a task.
Situational: Attitudes of people or organizations under differing circumstances are investigated.

The current study is best described in terms of the simple and clear definition proposed by Alessi and Trollip (2001). The instructional application developed in the


current study can be described as a procedural simulation, but both Alessi and Trollip (2001) and Gredler (2004) admitted that the categories are not mutually exclusive, hence considerable overlap is possible. The current study, although procedural, contains components that are physical simulations and iterative simulations. This is understandable when the nature of the study is considered; learning how to graph is only successful if the learner also knows about graphs.

Advantages and Disadvantages of Simulations

Simulation as a method of instruction and learning is predicated on the assumption that it is inherently interesting and motivating. While disadvantages are freely acknowledged, the purported advantages are the driving force behind the continued interest in simulations. While simulations have obvious advantages, such as cost and safety, over real-world learning environments, they are also described as having particular benefits over other instructional methodologies. Alessi and Trollip (2001) listed four main advantages of simulations: motivation, transfer of learning, efficiency, and flexibility.

Motivation: Most simulations contain graphical elements that are at minimum reinforcing for learners. Coupled with active participation and feedback, a graphically rich simulation presents an environment where learners are more likely to enjoy the learning process.

Transfer of Learning: Simulations tend to facilitate transfer because of their ability to engage various solution paths for a problem.


Efficiency: The time and effort required for initial learning can be reduced by well-designed simulations.

Flexibility: Simulations can be designed to present various components of instruction. They may present information, guide learning, facilitate practice, or assess knowledge. In addition, simulations are unique in that they are applicable across learning philosophies.

Gredler (2004) listed three other key advantages of simulations and asserted that:

1. They can reveal learner misconceptions about the content.
2. Simulations present complex environments that bridge the gap between the classroom and the real world.
3. They can provide insight into the problem-solving strategies that learners use.

Despite their appeal and significant advantages, simulations are by no means the perfect methodology (Alessi & Trollip, 2001; Gredler, 2004). Several disadvantages were identified:

Simulations can be prohibitively costly to produce. The financial and temporal costs involved in developing a simulation might not be worth the expected educational benefits.

The benefits of a simulation might be overly dependent on prerequisite knowledge; thus learners might need prior instruction before they can engage in open-ended learning.

Learners may experience cognitive overload in complex environments.


65 Other general disadvantages include the inability o f simulations to accommodate sub-goals, empathy, or differing learner levels. T hese criticisms, however, can be leveled against any instructional software that is not spec ifically designed to deal with those specific issues. Are Simulations Effective? The effectiveness of simulations remains unclear. Some studies demonstrate significant differences in favor of simulation whil e others report no significant differences. Most studies, however, acknowledge th e potential benefits of simulations and discuss specific ways to help maximize the lear ning experience. The thematically similar studies chosen for this section not only re flect the uncertainty of the effectiveness of simulations in general, but the controversy that exists within the selected content areas. Simulations have been studied using many different content areas, but science seems to be most suitable for simulations (Lee, 199 9). Although science is particularly suited for simulations, the results are not always consistent. Steinberg (2000) investigated the differences betwe en a computer simulated interactive learning environment and a traditional pen-and-pencil interactive environment. He used calculus-based physics as the primary content area, focusing specifically on air resistance. Of the three class es involved in this study, two classes (n=79 and n= 67) were administered the simulation b ased content and the other class (n=83) experienced a pen-and-pencil version of the same content. Steinberg found that although there were differences in the learners’ ap proach to learning, there was no significant difference in posttest scores. Althoug h anecdotal observations regarding


66 classroom events and study artifacts were mentioned no rationale as to why the study was conducted after seven weeks, or the way the gro ups were divided, was provided. Campbell, Bourne, Mosterman, and Brodersen (2002) i nvestigated the effectiveness of an electronic laboratory simulator versus physical laboratory exercises. The content was primarily focused on building elect ronic circuits. The study had two groups, the combined laboratory group (n=22) used e lectronic simulated labs then two subsequent physical labs for practice. The physica l laboratory group (n=18) did not use any simulations. Campbell et al. found no signific ant differences between the groups on a pretest measure, but found significant difference s on the written posttest. There was no significant difference in the time it took to compl ete the final physical laboratory that was a group task, thus the combined group did at least as good as the physical group. Lane and Tang (2000) investigated the use of simula tion within a statistics context. Their study involved a group that receive d simulation-based instructions, a group that received textbook-based instruction, and a no-treatment group. They found that the simulation group performed significantly b etter on the posttest and were able to better apply statistical concepts to real world sit uations. Lane and Tang admitted that the implemented simulation might not be considered a si mulation by some, for example (Gredler, 2004), because it was merely a video pres entation simulating the concepts. According to the authors, “it is likely that the ad vantage of simulation in the present experiment would have been greater if learners had had the opportunity to interact with the simulation” (Lane & Tang, 2000, p. 351). The use of simulations in statistics was also exami ned by Mills (2004). This study considered whether or not computer simulated methods enhanced learners’


67 comprehension of abstract statistics concepts. The two instructional units used for the study can be described as interactive versus non-in teractive, thus it is unclear if factors such as motivation can be considered confounding. Although the sample size was small, Mills concluded that learners can learn abstract st atistical concepts using simulations. Spelman (2002) examined simulations under unique co nditions. Unlike the previously discussed studies, Spelman examined the GLOBECORP business simulation and its effect on writing proficiency over the cour se of a semester. Based on the data gathered, Spelman found that the learners who used the simulation did at least as well as the learners who used the traditional format based on posttest results and a writing proficiency test. The learners who experienced the simulation reported significantly less anxiety and significantly greater satisfaction with the instruction. These results were used to assert that “instructors who wish to enlive n their classrooms by changing to approaches that include simulation should do so wit h confidence” (Spelman, 2002, p. 393) de Jong and van Joolingen (1998) extensively review ed the literature on discovery learning with simulations, focusing on learner prob lems. Based on a selected subset of their reviewed literature, they concluded that “The general conclusion that emerges from these studies is that there is no clear and univoca l outcome in favor of simulations” (p. 182). The review did not entirely focus on whether or not simulations were effective, but they found significant findings both in favor of an d against simulations. According to the authors, if a theory of discovery learning using si mulations is to be developed, more research should be done in identifying the problems learners have in this environment, and in developing tools to address those problems.


A meta-analysis conducted by Lee (1999) reviewed literature based on two types of simulations, pure and hybrid, and two instructional modes, presentation and practice. Lee concluded that:

1. Within the presentation mode, the hybrid simulation is much more effective than the pure simulation.
2. Simulations are almost equally effective for both the presentation and the practice mode if hybrid simulation is used.
3. Specific guidance in simulations seems to help learners perform better.
4. When learners learned in the presentation mode with the pure simulation, they showed a negative attitude toward simulation.
5. Even in the practice mode, learners showed very little preference for simulations.
6. Science seems to be a subject fit for simulation-based learning.

Lee (1999) cautioned that no definite conclusions should be attempted due to the small number of studies in the meta-analysis and the possible existence of other well-designed studies.

Although the effects of simulation are inconsistent, the reviewed studies reveal common threads. Simulations were reportedly more motivating and at least as good as traditional instruction; thus their inclusion in instruction would not be harmful. Although the need for more research was emphasized, simulations were thought to be excellent as a complement to other forms of instruction. This is most clearly demonstrated by Wiesner and Lan (2004), who found that of 12 oral presentation teams, 9 recommended a combination of simulation and physical labs, 3 recommended only physical labs, and none recommended only simulations.


69 Why use a simulation “Simulation is one of the few methodologies embrace d equally by behavioral versus cognitive psychologists, and by instructivis t versus constructivist educators. Simulations can be designed in accordance with any of these approaches” (Alessi & Trollip, 2001, p. 231). The instructional application developed in the curr ent study focused on graphing instruction, and as such, the nature of the task ma kes it ideally suited for a simulation. Although the effects of simulations on learning are inconsistent, the research has shown that under the right conditions, simulations are a viable option and can return significant learning gains. Combining direct instruction, disc rete content instruction, and simulation exercises might be the conditions for significant l earner gains. The Design of Instruction The design of instruction has been described as an art, a science, a technology, and a craft (Wilson, 2005). Although there are man y ways to produce instruction, there are generally accepted methods to do it effectively This section will focus on the systematic design of instruction, thus concentratin g on the viewpoint: the design of instruction as a science. What are Instruction and Design? Instruction is generally defined within the scope o f learning. Alessi and Trollip (2001) defined instruction as “the creation and use of environments in which learning is facilitated ” (p. 7). Gagne et al. (2005) proposed that instru ction is a “set of events embedded in purposeful activities that facilitate l earning” (p. 1). Behaviorally, the terms environments and events which were used in the previous definitions, can be viewed as


70 arrangements of contingencies. This perspective is important because it not only accounts for the environment and events themselves, but also the interaction between different properties, for example, sequences or len gths. P. L. Smith and Tillman (2005) supported the behavi oral perspective and proposed that instruction is “the intentional arran gement of experiences, leading to learners acquiring particular capabilities” (p. 5). They further clarified their definition by discriminating between instruction, education, trai ning, and teaching. Education, in their context, broadly describes all experiences where pe ople learn, including experiences that are non-intentional, unplanned, or informal. Train ing refers to instructional experiences where the primary focus is the acquisition of a spe cific skill that will be immediately useful. The immediacy of the skill’s application i s the defining feature of training. Teaching refers to learning experiences that are me diated by humans. Although instruction, education, training, and teac hing are often used interchangeably, they have subtle differences. The se distinctions provide focus and allow the current study to be validly grounded in the fra mework of instruction Smith and Tillman (2005) defined design as “an acti vity or process that people engage in that improves the quality of their subseq uent creations” (p. 6). Although design and planning are sometimes used synonymously the design process involves a higher degree of preparation, precision, and expert ise. The implication is that the design process should move the entire production process c loser to the realm of scientific. Gagne et al. (2005) adopted six assumptions regardi ng the process of design: 1. Instructional design is focused on the process of l earning rather than the process of teaching, and the aim is intentional learning as opposed to accidental learning.


2. The learning process is complex and involves many variables that may be related.
3. Instructional design models can be applied at different levels, for example, at the lesson module level or at the course level.
4. The design process is iterative, and successive refinements are required to produce effective instruction.
5. The design process is itself comprised of sub-processes.
6. Different outcomes call for different designs.

Instructional Systems Design

Instructional Systems Design (ISD) and its relation to Instructional Design (ID) remain unclear in the literature. The lack of clarity begins with the acronym ISD. The terms Instructional Systems Development (Dick et al., 2005; Wilson, 2005) and Instructional Systems Design (Gagne et al., 2005; P. L. Smith & Tillman, 2005) are both used for ISD. The current study will use Instructional Systems Design, supporting the definition "the process for creating instructional systems" (Gagne et al., 2005, p. 18), where instructional systems include resources and procedures used to facilitate learning.

The relationship between ISD and ID differs depending on the author. Morrison et al. (2004) defined ID as "The systematic method of implementing the instructional design process" (p. 5). In that definition, ID is systematic, thus negating the need for a separate ISD. Other authors distinguish between the two, viewing ID as an overarching term and ISD as the systematic and scientific way to do ID (Dick et al., 2005; Gagne et al., 2005; Wilson, 2005). The current study will use ID as an umbrella term, and ISD as a systematic implementation of ID.


72 The final relevant issue regarding the clarity of I SD is its description. Some authors describe ISD as a model (Wilson, 2005), oth ers describe it as a process (Gagne et al., 2005), and yet others describe it as both mode l and process (Dick et al., 2005). The issue is not trivial because the current study has different components that must be clearly defined before they can be validly used. The curre nt study will use ISD as a term to describe a general process. The transition will th en be made to a specific exemplar model describing that process. A Systematic Approach For more than 40 years, ISD has been taught as the primary framework for instructional design. Although it has been in use for many years, an empirically based body of research supporting ISD does not exist. Wil son (2005) suggested that along with the fact that it is difficult to scientifically tes t comprehensive processes, the axiomatic status of ISD within the field has led to the curre nt scenario where ISD is accepted without question. Dick et al. (2005) suggested tha t ISD is valid because the component parts of ISD are based on validly researched learni ng principles. In essence if the parts are valid, the whole must also be valid. There are many logical and conspicuous reasons why a systematic approach to the design of instruction is beneficial. Smith and Til lman (2005) provided seven advantages of a systematic approach: 1. Encourages advocacy of the learner by making the le arner focus of the instruction. 2. Supports effective, efficient, and appealing instru ction. 3. Supports coordination among designer, developers, a nd those who will implement


the instruction.
4. Facilitates diffusion, dissemination, and adoption because the products are more likely to be practical and easily duplicated.
5. Supports development for alternate forms or delivery systems. This is particularly useful because modularity facilitates future development.
6. Facilitates congruence and consistency among objectives, activities, and assessment.
7. Provides a systematic framework for dealing with issues, for example, learning problems.

The systematic design of instruction, though very useful, has several disadvantages. Smith and Tillman (2005) listed some of the limitations of ISD, placing particular emphasis on the inability of ISD to facilitate goal-free learning, or situations where the learning goal cannot be determined in advance.

Gordon and Zemke (2000) not only considered the limitations of ISD, but also questioned the usefulness and relevance of ISD. They spoke to six experts and compiled the criticisms into four major categories:

They argued that ISD is too slow and clumsy, citing the fact that real business cannot afford the time investment needed to complete each component of ISD.

There's no "there" there. This is a criticism of the rigidity of ISD and the pursuit to make it scientific.

They posited that if ISD is used as directed, it produces bad solutions. They supported this criticism by citing designers' preoccupation with learning styles and best practices, neglecting the purpose of the training.


According to the authors, ISD clings to the worldview of "weak learners and all-knowing experts". In their estimation, this flawed worldview results in the production of training that caters to the least common denominator.

These criticisms are based on ISD as a framework for training. This review has previously discriminated between training and instruction, and although they are not necessarily the same, the criticism appears to be valid in both areas.

Critics of ISD probe the failures of ISD, trying to determine if the process, the practice, or the definition is flawed (Zemke & Rossett, 2002). Proponents of ISD continue to implement new techniques within the ISD framework, and for them, ISD remains significant and relevant. "Suggesting that the ISD process is no longer relevant to 21st-century training is the equivalent to suggesting that engineers forget about data flow and process diagrams" (R. C. Clark, 2002, p. 9). The reviewed literature, however segmented on many issues, concluded that ISD is not perfect, but it can be very useful if properly applied.

ADDIE

The Analysis, Design, Development, Implementation, and Evaluation (ADDIE) model describes the ISD process and forms a framework for effective instructional design. While the exact origin is unknown, the model has served as both a stand-alone framework and as a foundation for other more specialized models (Molenda, 2003). Gagne et al. (2005, p. 21) provided a pictorial view (see Figure 3) outlining the interactions among the various phases:


Figure 3. Gagne's ADDIE model (phases: Analyze, Design, Develop, Implement, Evaluate).

The layout of the model implies an output/input relationship where the output of one phase becomes the input for the next phase. This linear progression leads to organized development and provides the opportunity for accountability at each phase. Accountability is possible because each phase can be evaluated and repeated if necessary.

The ADDIE model can be viewed as a generic prototypical representation of the ISD process. It does not facilitate the development of all possible types of instruction, but all other ISD models can be reduced to at least a subset of the ADDIE phases (Gagne et al., 2005).

The ADDIE Phases

Each phase of the ADDIE model is purposed at accomplishing defined goals. There is no generally accepted list of sub-components for each phase; thus the specific sub-components of each phase can be viewed as a set of best practices proposed by individual authors. Gagne et al. (2005) provided a summary that formed the foundation for the current study. This summary was selected because it is both comprehensive and consistent with other summaries reviewed.

Gagne et al. (2005, p. 22) summarized the five ADDIE phases and sub-components as follows:


76 1. Analysis a. First determine the needs for which instruction is the solution. b. Conduct an instructional analysis to determine the target cognitive, affective, and motor skill goals for the course. c. Determine what skills the entering learners are exp ected to have, and which will impact learning in the course. d. Analyze the time available and how much might be ac complished in that period of time. Some authors also recommend an anal ysis of the context and the resources available. 2. Design a. Translate course goals into overall performance out comes, and major objectives for each unit of the course. b. Determine the instructional topics or units to be c overed, and how much time will be spent on each. c. Sequence the units with regard to the course object ives. d. Flesh out the units of instruction, identifying the major objectives to be achieved during each unit. e. Define lessons and learning activities for each uni t. f. Develop specifications for assessment of what stude nts have learned. 3. Development a. Make decisions regarding the types of learning acti vities and materials. b. Prepare draft materials and/or activities. c. Try out materials and activities with target audien ce members.


77 d. Revise, refine, and produce materials and activitie s. e. Produce instructor training or adjunct materials. 4. Implement a. Market materials for adoption by teachers or studen ts. b. Provide help or support as needed. 5. Evaluate a. Implement plans for student evaluation. b. Implement plans for program evaluation. c. Implement plans for course maintenance and revision The current study could not implement each sub-comp onent as they are, instead each phase and sub-component was assessed and modified f or the current task. Why use ADDIE? The unique feature of ADDIE, that it is both specif ic and general, is the source of both its greatest praise and criticism. The adapta bility of the model makes it perfect for novice designers and environments where goals and o bjectives are not initially fully defined, as is the case of the current study. Whil e other ADDIE-based models, for example Dick et al. (2005), appear complete, the ge neric ADDIE model provides sufficient leeway affording a basic development str ucture and simultaneously facilitating experimentation and encouraging adaptation. Graphing Teaching learners to represent experimental data in graphical form and interpreting already existing data has traditionall y been a difficult task. Techniques such


78 as task modeling with instructor guidance have gene rally been used to get learners to apply rules and generate propositions that eventual ly lead to the completion of tasks. Graphs and the process of graphing are important no t only in scientific dialogue where data relationships are pursued, but also with in the lay context where consumers of data must discriminate between available choices. Monteiro and Ainley (2002) posited that “As a data handling activity, graphing might b e conceptualized as a process by which people can establish relationships between data, an d infer information through the construction and interpretation of graphs” (p. 61). Making valued judgments in the presence of erroneous or purposefully skewed data i s a requirement of the current consumer of data (Gal, 2002). In summarizing Bruno Latour’s Essay “Drawing Things Together”, L. D. Smith, Best, Stubbs, Johnston, and Archibald (2000) presen ted the five features of graphs that make them powerful and persuasive: First, they are able to transcend scales of time an d place, rendering invisible phenomena (such as quarks, ion pumps, gross nationa l products) into easily apprehended icons. Second, they are manipulable, a nd can be superimposed and compared in ways that lead to seeing new connection s between seemingly unrelated phenomena, discerning similarities betwee n events vastly separated in time and place, and assessing the congruence of emp irical and theoretical curves. As such, they encourage the sort of abstraction fro m detail to generalities that is characteristic of theoretical science. Third, graph s are ‘mobile’ or transportable: they can be carried from laboratory to laboratory, or from laboratories to scientific conferences, or from research sites to sites of app lication. Fourth, they are


79 ‘immutable’, both in the sense of fixing the flux o f phenomena – and thus stabilizing what may be only ephemeral in nature or the laboratory – and in the sense of remaining stable as they are transported a cross contexts. Fifth, as ‘immutable mobiles’, graphs can be enlisted in the task of persuading scientists in competing camps of the validity of one’s evidence. As Latour puts it, a wellconstructed graph raises the cost of dissenting fro m one’s own favoured viewpoint, forcing scientific adversaries to muster their own evidence in the form of even better graphs. To the extent that scientist s are able to mobilize consensus on data and evidence, it is through competition and negotiation over graphical representations (hence Latour’s motto that ‘inscrip tions allow conscription ’). The centrality and pervasiveness of graphs in science l ed Latour to conclude that scientists exhibit a ‘graphical obsession’, and to suggest that, in fact, the use of graphs is what distinguishes science from nonscienc e .15 Others who analyze the representational practices of scientists share Lato ur’s conviction that graphical displays of data play a central rather than periphe ral rle in the process of constructing and communicating scientific knowledge .16 (p. 75) Although emphasis is placed on the interpretation o f graphs, the experiential factors involved in graphing requires that graphing be viewed as a collection of complex tasks that cannot exist in a vacuum (Brasseur, 1999 ). It is important to understand that interpreting graphs requires a certain skill set th at is most easily obtained via the process of collecting, parsing, and presenting the data. T he process of creating graphs thus affects the ability to interpret graphs.


The literature on graphing confirms that the task of graphing is complex and consequently more difficult than traditionally believed (Brasseur, 2001). Although Roth and McGinn (1997) suggested that learner performance may differ by age or other characteristics, in general graphing is understandably paired with general mathematics and science, and it thus assumes a supposedly inherent high level of difficulty. This phenomenon has been confirmed by the researcher during preliminary course improvement research, where it was observed that learners performed poorly on graphing tasks even after they had successfully completed graphing instruction. Although the learners could discriminate between good and bad graphing, and to some degree interpret sample graphs, they failed to independently demonstrate mastery of basic graph production skills. Learners also reacted negatively to the task, questioning the value of actually creating graphs. This result supports Barton (1997), who suggested that "At this level the difficulties of producing manual plots is not as significant" (p. 70). On the surface it would appear that analyzing a graph is only loosely connected to the ability to create a graph. The preliminary course improvement research concluded that interactive instruction of any sort should contain required graphing responses and provide immediate correction and confirmation, thus aiding learners in developing their graphing skills. Although the preliminary data are mostly anecdotal, they are consistent with the overall picture that the literature paints: graphing is a complex task whose perceived worth and value differ depending on the viewer's lens.
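The preliminary conclusion that practice must include required graphing responses with immediate correction and confirmation can be made concrete with a small sketch. The data format, tolerance, and feedback messages below are hypothetical and do not represent the actual LOGIS implementation; the sketch only illustrates the kind of contingent check such an environment performs.

    # Illustrative sketch: compare learner-plotted (x, y) points against the target
    # dataset and return a confirmation or a corrective prompt for the first error.
    def check_plotted_points(plotted, dataset, tolerance=0.25):
        for session, target_y in dataset:
            plotted_y = next((y for x, y in plotted if x == session), None)
            if plotted_y is None:
                return False, f"Session {session} has not been plotted yet."
            if abs(plotted_y - target_y) > tolerance:
                return False, (f"Check session {session}: the data point should be at "
                               f"{target_y}, not {plotted_y}.")
        return True, "All data points are plotted correctly."

    dataset = [(1, 3), (2, 5), (3, 4)]       # session number, responses observed
    learner_plot = [(1, 3), (2, 6), (3, 4)]  # the learner's attempt
    correct, feedback = check_plotted_points(learner_plot, dataset)
    print(correct, feedback)  # False Check session 2: the data point should be at 5, not 6.

Because the check runs after every response, confirmation or correction is contingent on the learner's own plotting behavior rather than on passive review of a finished graph.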


81 graph paper that they then use to produce appropria te figures, all under the watchful eye of the teacher. Thus, they have a "sample work spa ce" at their desks where they can actively practice the graphing behavior that is req uired. In more advanced classes, older learners rarely practice the task of graphing, but are instead expected to know how to graph. In most cases, the teacher or textbook pres ents a graph and learners can optionally practice on their own if they wish to reproduce the graph. This method is very troublesome in that it makes several assumptions. It assumes that learners have already mastered the “art” of graphing, unconcerned with th e notion that not all graphing is the same. It also minimizes the relevance of producing graphs by placing emphasis on the evaluation and interpretation of graphs. While it is important to interpret graphs correctly, the production of accurate graphs cannot be overlooked. More importantly, the production of graphs and the interpretation of grap hs are not necessarily mutually exclusive events (Brasseur, 2001). This is especia lly true in the education environment where teachers can and should analyze the learner a nd the class such that behavior patterns can be detected and addressed. Philosophically, the task of graphing is very inter esting because of what it represents. L. D. Smith et al. (2000) cited Bruno L atour’s essay “Drawing Things Together” and emphasized the “centrality and pervas iveness of graphs” when they concluded that “graphs constitute the lifeblood of science” (p. 75). This scientific perspective on the importance of graphing is furthe r complicated when the social aspects of graphing are considered. Roth and McGinn (1997 ) investigated graphs as inscription and conscription devices, placing graphs as mediato rs of messages and meaning. They presented different lenses though which graphing ca n be perceived and it becomes clear


that the creation and manipulation of graphs are tied to personal prejudices. As noted in Brasseur (1999), "Thus, context and experience with a culture is a key factor for graph designers and interpreters and is just as important as their perceptual skills and their understanding of the technology" (p. 11).

Given the complexity of graphing, the question becomes whether graphing is a mental event or a task rooted in practice (Roth & McGinn, 1997). The implications are serious, ranging from expectations to assessment. An obvious conclusion would be that it has requirements in both areas, but the fact remains that the interplay between thought, practice, and result is still unclear. The dilemma between graphing as a covert mental event as opposed to graphing as an overt physical practice can be extended to include almost every aspect of learning. Can a student learn anything without actually doing it? While the philosophical considerations might seem displaced or overly cerebral, most instruction today assumes that knowledge can be delivered, and thus gained, by incidental contact with the content. The fact that reading is still the predominant form of learning (and teaching) is a testament to the current norm, where it is assumed that learners can glean purposeful information from text. Research into experiential learning and learning as an active process (Hansen, 2000) has highlighted the importance of placing the learner in meaningful contact with the content, thus fostering more interaction between learner and content. Unfortunately, even with research to the contrary, the real-world learning environment still relies on instruction via observation. This reliance is probably due to the fact that it is programmatically very difficult to create environments, online in this case, that mimic the personal interaction that offline human environments possess.


To maximize the online potential, environments must be created such that meaningful interaction and feedback are overt and at the forefront of instruction. This will undoubtedly increase the discourse between the practical and the philosophical, and hopefully lessen the distance between the two.

Summary

This chapter reviewed pertinent literature and set the context for the current study. The literature made a strong case for Developmental Research, and studying the development of an environment that pairs direct instruction and simulation should add valuable insight to the research literature.

The literature recognizes the need for diversity in the creation of effective instruction. Different strategies and techniques can be used to increase the richness of the instruction. Adaptive instructional techniques and contingent guided practice are only two of the tools that can be used to make instruction effective and meaningful for learners. The techniques reviewed complement the direct instruction component and the simulation component, resulting in an environment where multiple senses are engaged to facilitate learning.

The model-based systematic approach to instruction was used as a framework for developing LOGIS. The reviewed literature makes a strong case for the use of a systems approach to design. The use of the ADDIE model should provide structure, accountability, and transparency to the development process, increasing the probability that the end result will be both effective and repeatable.


84 The current study is rooted in both behavioral and cognitive doctrines. Behavioral requirements such as observability are e mphasized within both behavioral and cognitive perspectives revealing the importance of those features. Likewise, cognitive principles related to the process of learning and a ttitudes are also accommodated within the context of this study. The literature admits t he differences between the behavioral and cognitive perspectives but reconciles them such that they are both functional aspects of the same instruction. It is clear that both per spectives have advantages that can be exploited to produce optimal instruction, where the resulting instruction has the best features of both worlds.


CHAPTER THREE

PROPOSED DEVELOPMENT

Chapter Map

This chapter fully describes the proposed development of LOGIS using the ADDIE model. The chapter presents the initial stages of development and is consistent with the requirements of Developmental Research. The proposed development can be compared to the actual development (chapter 4) to determine the significant elements in the development process, and to highlight important factors in the creation of effective instruction. This chapter, coupled with chapter 4 (Actual Development), forms a complete picture of the LOGIS experience from start to finish.

The ADDIE phases and components are based on Gagne et al. (2005) and are followed verbatim from the model summary presented in Gagne et al. (2005, p. 22). The prescriptiveness with which the phases are followed should provide some insulation from critiques concerning the theoretical and practical adaptation of the model. By following the suggested path, the effect of the model as it pertains to the development of LOGIS can be examined critically and without bias. The following map describes the organization of the chapter:

Proposed development
Chapter map


Research questions
The ADDIE model
Analysis
Design
Development
Implement
Evaluate
Summary

Research Questions

1. How does the use of the ADDIE model influence the creation of the LOGIS instructional application?

To evaluate the ADDIE process, each phase will be subject to the following five questions based on Gagne et al. (2005, p. 355):

1. Was the rationale for including or omitting this phase clear?
2. Are all required elements of this phase accounted for?
3. Was this phase temporally cost effective to complete?
4. Are the products from this phase observable and measurable?
5. Are the products from this phase consistent with the overall goal of the instructional application?

These questions will be concisely answered at the end of each phase, and the results will be reported in chapter four, "Actual Development".


87 2. Is LOGIS an effective form of instruction? The effectiveness of LOGIS will not be determined e xperimentally. This research question will be addressed in chapter four “Actual Development” and also reported in chapter five “Conclusions”. Data from the one-to-o ne evaluation, the small-group evaluation, and the field trial will be compiled in an effort to determine the effectiveness or value of the instructional application. The ADDIE Model This study investigated the model-based creation of instruction and ultimately sought to determine the effectiveness and merit of the instruction. Because of the complexity involved in describing and subsequently using the model, organization became a paramount issue. Gagne et al. (2005) desc ribed the ADDIE model as “an organizing framework”. Not only did the model help in the creation of the instruction, but the model also aided in documentation process. The properties of the model thus forced the researcher to examine the organization o f all aspects of the study, including documentation, before actually beginning the study. This chapter presents a detailed description of the phases, sub-components and workflow of the study. This road map will provide an opportunity to maintain contact with both high and low level design requirements. Each phase will be described and each sub-component explained, and these will be used as a guide to create a logical path for the current study. Not all phases and sub-componen ts will be applicable to this study, but where differences occur, a rationale for exclusion will be provided. In cases where


88 modifications are made to the basic structure of th e phases or sub-components, there will be ample explanation for the changes. Gagne et al. (2005) based their description of the model on the development of a course. This study is purposed at developing an in structional application covering one unit that is comprised of several tutorials. Withi n this chapter, course and unit related references will be demarcated with parentheses foll owed by the appropriate replacement word or phrase if necessary. This will result in a visible exposition of the amount of changes required to complete this study. Analysis Phase Analysis Component 1 (First determine the needs for which instruction is the solution.) Graphing is an important component of a course at a large southeastern university. The course is entirely online and read ing has proven to be an ineffective form of graphing instruction in the course. Previous ef forts by the instructor to create graphing instruction resulted in only modest educational gai ns, thus there is a need for effective graphing instruction. Learners are required to learn general graphing con cepts and how to create simple, cumulative, bar, and multiple baseline grap hs. Based on these parameters, carefully designed instruction paired with the oppo rtunity for practice might fulfill the course requirements and at the same time provide le arners with a meaningful learning experience.


89 Analysis Component 2 (Conduct an instructional analysis to determine the target cognitive, affective, and motor skill goals for the [course] unit.) The instructional analysis will highlight the knowl edge, skills, and attitudes (KSA) that learners are expected to acquire. It is important to note that because new instruction will not be created, generating high-le vel goals will be a step backwards. Instruction for this particular task currently exis ts, thus the goals and objectives presented in this section are more akin to reorganization rat her than creation. Given that the current study was focused on a unit for which goals existed the goals presented in this section can be viewed as too specific to be goals, but too general to be objectives. These goals are nonetheless very applicable because they state more than “learner will graph”, thus accelerating the study by eliminating some of the i ntermediate steps between goals formation and the derivation of objectives. The instructional goals for the unit were classifie d based on Gagne et al. (2005). The purpose of this classification is to help the i nstruction sequencing and organization, and also to help provide focus for the assessment. Although each goal can be placed in multiple classifications, they will be placed in th e single class that is best suited for that goal. Table 3 is a representation of both the anal ysis and the classification of the goals.


Table 3
Initial Goals and Task Classifications
(Each goal is followed by its task classification in parentheses.)

Knowledge
After successfully completing this topic, learners will be able to discriminate among common graphing terms and concepts. (Intellectual Skills – Discrimination)
After successfully completing this topic, learners will be able to identify the parts of a graph. (Verbal Information)
After successfully completing this topic, learners will be able to describe concepts involved in the control and measurement of behavior. (Verbal Information)
After successfully completing this topic, learners will be able to state why graphing data is important. (Verbal Information)

Skills
After successfully completing this topic, learners will be able to construct a multiple baseline graph based on a given dataset. (Intellectual Skills – Problem Solving)
After successfully completing this topic, learners will be able to construct a cumulative graph based on a given dataset. (Intellectual Skills – Problem Solving)
After successfully completing this topic, learners will be able to construct a semi-logarithmic chart. (Intellectual Skills – Problem Solving)
After successfully completing this topic, learners will be able to construct a simple bar graph. (Intellectual Skills – Problem Solving)
After successfully completing this topic, learners will be able to construct a simple line graph. (Intellectual Skills – Problem Solving)

Attitudes
After successfully completing this topic, learners will choose to graph data when it is the optimal solution. (Attitude)


Analysis Component 3
(Determine what skills the entering learners are expected to have, and which will impact learning in the [course] unit.)

This unit will be used in a course where behavioral principles are used as a foundation for coursework, thus learners are expected to have a fundamental grasp of the environment's role in the shaping and maintaining of behavior. The graphing unit is chapter four in the Alberto and Troutman (2006) textbook, and after progressing through chapters one, two, and three, learners are expected to be familiar with PI as an instructional method and with the online nature of the course.

Learners in the course are predominantly juniors and seniors, and they are expected to have at least a fundamental idea about graphing, but this is not a requirement.


It is reasonable to suggest, given the educational level of the students in the course, that they are at least familiar with the rudiments of simple graphing. It is not necessary that they have in-depth knowledge, but a certain assumed proficiency allows the current study to eliminate, for example, instruction on how to draw a straight line.

Learners are not expected to have any particular disposition or motivation towards graphing. The course is open to students of varying backgrounds, intellectual levels, and preparedness. It is expected, however, that students will be sufficiently motivated to perform well on the unit because performance on the unit's subsequent quiz affects the final course grade.

Analysis Component 4
(Analyze the time available and how much might be accomplished in that period of time. Some authors also recommend an analysis of the context and the resources available.)

This instructional unit has several existing parameters. The total number of tutorials must be kept to a minimum, and the total tutorial time must not exceed 2 hours. The 2-hour time limit was determined based on the results and reactions of learners who completed graphing instruction in prior semesters. The instructional unit will eventually be delivered such that learners can work at their own pace, but for the purposes of this study, instructional time must be kept at or around 2 hours.

The course does not cover bar graphs and semi-logarithmic graphs in detail. Based on the lack of emphasis on these types of graphs, constructing bar graphs and semi-logarithmic charts will be removed from the goals. It must also be noted that bar graphs and semi-logarithmic charts pose a significant foreseeable programming problem.


Under normal circumstances, these graph types would not be removed, but based on the lack of course emphasis and in the interest of time and complexity, the current study will not include bar or semi-logarithmic graphs. The goals are thus restated in Table 4.

Table 4
Revised Goals and Task Classifications
(Each goal is followed by its task classification in parentheses.)

Knowledge
After successfully completing this topic, learners will be able to discriminate among common graphing terms and concepts. (Intellectual Skills – Discrimination)
After successfully completing this topic, learners will be able to identify the parts of a graph. (Verbal Information)
After successfully completing this topic, learners will be able to describe concepts involved in the control and measurement of behavior. (Verbal Information)
After successfully completing this topic, learners will be able to state why graphing data is important. (Verbal Information)

Skills
After successfully completing this topic, learners will be able to construct a multiple baseline graph based on a given dataset. (Intellectual Skills – Problem Solving)
After successfully completing this topic, learners will be able to construct a cumulative graph based on a given dataset. (Intellectual Skills – Problem Solving)
After successfully completing this topic, learners will be able to construct a simple line graph. (Intellectual Skills – Problem Solving)

Attitudes
After successfully completing this topic, learners will choose to graph data when it is the optimal solution. (Attitude)


Design Phase

Design Component 1
(Translate (course) unit goals into overall performance outcomes, and major objectives [for each unit of the course] for the unit.)

The goals outlined in Table 4 are sufficient for this step, because the emphasis is on the instructional unit, not the course.

Design Component 2
(Determine the instructional topics [or units] to be covered, and how much time will be spent on each.)

The topics for the tutorials will be based on the instructional goals established in the Analysis phase of this chapter, where each goal can be considered a topic.


It is important to remember that because some graphing instruction already exists, the instructional topics generated are in part a reorganizing and re-labeling of current instruction. Each tutorial will be based on one or more topics in the following manner:

The Control And Measurement Of Behavior
After successfully completing this topic, learners will be able to describe concepts involved in the control and measurement of behavior.

The Importance Of Graphing
After successfully completing this topic, learners will be able to state why graphing data is important.

Basic Graphing
After successfully completing this topic, learners will be able to identify the important parts of a graph.
After successfully completing this topic, learners will be able to construct a simple line graph.

Behavioral Graphing Concepts
After successfully completing this topic, learners will be able to discriminate among graphing terms and concepts.

The Cumulative Graph
After successfully completing this topic, learners will be able to construct a cumulative graph based on a given dataset.

The Multiple Baseline Graph
After successfully completing this topic, learners will be able to construct a multiple baseline graph based on a given dataset.


The unit composition is already established, but not the time frame for each tutorial. Although there is an overall time limit of 2 hours for completing the unit, the time limit for each tutorial cannot be reliably determined in advance. It is important to note that the content and subsequent duration of each tutorial will not be based on random estimates. This study will employ a test/revise strategy, the outcome of which will be an optimized set of tutorials. This strategy is detailed in the Development section of this chapter.

Design Component 3
(Sequence the [units] tutorials with regard to the [course] unit objectives.)

The previous Design Component section produced six tutorials, and they are logically arranged as follows:
1. The Control And Measurement Of Behavior
2. The Importance Of Graphing
3. Basic Graphing
4. Behavioral Graphing Concepts
5. The Cumulative Graph
6. The Multiple Baseline Graph

The Basic Graphing tutorial will contain general graphing concepts not related to behavioral graphing. It will be unique in that its delivery will be dependent on the learners' performance on a pre-assessment measure. Based on the principles of the Bayesian model discussed in the literature review section of this study, a simple pre-assessment will be used to determine if the learner needs to complete the Basic Graphing tutorial.


It is reasonable to assume that a significant number of learners are already versed in basic graphing techniques, thus an extra tutorial based on content that is already mastered would probably be aversive to a majority of the learners. There is a possibility that some learners will have an inadequate prerequisite skill base. These learners will be accommodated because the Basic Graphing tutorial will be a requirement for them. If the Basic Graphing tutorial is a requirement for a learner, it will be delivered at the appropriate time, after The Importance of Graphing tutorial.

Design Component 4
(Flesh out the units of instruction, identifying the major objectives to be achieved during each [unit] tutorial.)

Important concepts or rules are presented for each tutorial in the form of key words or short statements. These concepts will be adjusted based on feedback from the content expert and trials of the tutorials.

1. The Control And Measurement Of Behavior
   a. Experimentation
   b. Measurement is an important aspect of the science of behavior
   c. Controlling behavior involves the manipulation of environmental variables
   d. It is important to remain in contact with behavior
2. The Importance Of Graphing
   a. Data and graphing
   b. The advantages of visual displays of data
   c. Statistical procedures versus visual presentations
   d. Feedback and its importance
   e. Variations in data
   f. Interpretation of data


3. Basic Graphing
   a. Axes
   b. Coordinates
   c. Point
   d. The origin
   e. Scale of axes
   f. Hatch marks
   g. Slope of a line
   h. The title and legend
   i. Graphing data
4. Behavioral Graphing Concepts
   a. Data scale and path
   b. Scale breaks
   c. Dependent and independent variables
   d. Functional relationship
   e. Trends
   f. The importance of time as a unit of measure
5. The Cumulative Graph
   a. The cumulative record
   b. Upward slope of the depth
   c. Difficulty in reading
   d. Rate of responding and its effect on the graph


6. The Multiple Baseline Graph
   a. Graphing multiple datasets
   b. Starting at the origin
   c. Phases and their indications
   d. The indication of special events

Design Component 5
(Define lessons and learning activities for each [unit] tutorial.)

Each tutorial will follow the Programmed Instruction methodology. This means that the content for each tutorial will be a logical linear presentation of frames addressing each objective. The content programming will adhere to the principles highlighted in the review of the literature, and will be accomplished under the supervision of the content expert.

Each appropriate tutorial will have an accompanying practice task where the learner can practice the content presented in the tutorial. Tutorials 1, 2, and 4 will not have any accompanying practice tasks because they primarily deal with abstract concepts or rules. The practice tasks will be delivered using a simulation. Consistent with the reviewed literature, the simulation will be paired with other forms of instruction, in this case PI, in an effort to increase learning and subsequent performance. The fundamental premise is that good instruction paired with the opportunity to practice will produce improved learning and performance.

The design of the simulation is discussed in the Development phase of this chapter, but there are several parameters that must be defined in this subsection. Based on the reviewed literature, the simulation must:


1. be consistent with and a logical extension of its paired tutorial.
2. resemble the physical graphing environment, thus it must display a graphing grid and have drawing tools.
3. present stimuli requiring the learner to respond, and forward progress must be contingent upon a correct response.
4. provide the learner with sufficient graphing tools such that the learner can edit previous responses.
5. continuously monitor the learners' performance.

The practice tasks for each appropriate topic are as follows:
1. Basic Graphing (construct a simple line graph based on data provided)
2. The Cumulative Graph (construct a cumulative graph based on the data provided)
3. The Multiple Baseline Graph (construct a multiple baseline graph based on data)

Design Component 6
(Develop specifications for assessment of what students have learned.)

Learners will be assessed based on the 8 general goals created in the Analysis phase. Verbal information and discrimination (Knowledge) achievement will be assessed using a posttest containing multiple-choice, alternative-choice, and short-answer items. Problem-solving skills (Skills) acquisition will be assessed by requiring the learner to construct paper-based graphs using provided pencil, graph paper, and data. The learners' attitude towards graphing (Attitudes) will be obtained using a survey. Each assessment will be developed and validated in consultation with content and measurement experts.
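Before moving to the Development phase, the contingent-progress behavior required of the simulation in Design Component 5 (parameters 3 and 5 above) can be illustrated with a brief sketch. The following is a minimal, present-day Java sketch rather than LOGIS code; the class, method, and step names are hypothetical.

```java
import java.util.Arrays;
import java.util.List;

/**
 * Minimal sketch of guided contingent practice: forward progress is
 * contingent on a correct response to the current step, and overall
 * accuracy is monitored continuously. All names are illustrative only.
 */
public class ContingentPractice {

    /** One discrete practice step: an instruction and its expected response. */
    static class Step {
        final String instruction;
        final String expected;
        Step(String instruction, String expected) {
            this.instruction = instruction;
            this.expected = expected;
        }
    }

    private final List<Step> steps;
    private int current = 0;   // step the learner is currently on
    private int attempts = 0;  // responses submitted so far
    private int correct = 0;   // responses judged correct

    public ContingentPractice(List<Step> steps) { this.steps = steps; }

    public String currentInstruction() { return steps.get(current).instruction; }

    /** Returns true and advances only when the response matches the expected answer. */
    public boolean submit(String response) {
        attempts++;
        if (response.trim().equalsIgnoreCase(steps.get(current).expected)) {
            correct++;
            current++;          // advance; otherwise the learner must retry this step
            return true;
        }
        return false;
    }

    public boolean finished() { return current >= steps.size(); }

    /** Running percent correct, as shown in the current-performance panel. */
    public double percentCorrect() {
        return attempts == 0 ? 100.0 : 100.0 * correct / attempts;
    }

    public static void main(String[] args) {
        ContingentPractice p = new ContingentPractice(Arrays.asList(
                new Step("Label the ordinate axis", "cumulative responses"),
                new Step("Label the abscissa axis", "sessions")));
        p.submit("sessions");               // incorrect: stays on step 1
        p.submit("cumulative responses");   // correct: advances to step 2
        System.out.printf("%.0f%% correct so far%n", p.percentCorrect());
    }
}
```

The essential design choice is that an incorrect response leaves the learner on the same step, so the only path to the end of a practice task is a correct response at every step.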


Development Phase

Development Component 1
(Make decisions regarding the types of learning activities and materials.)

Consistent with the idea that the development of instruction is not a one-time endeavor, accommodations must be made to continually assess and revise the instruction. LOGIS will contain instructional, practice, assessment, and survey components. The lifecycle of the instruction can almost be considered infinite, thus it is important to facilitate the collection and analysis of data such that the instruction can be continually improved. The basic decision here is that the survey instrument and the pretest and posttest measures will not be separate attachments; rather, these components will be an integral part of the instruction process.

Development Component 2
(Prepare draft materials and/or activities.)

The development of the interface and each instrument and measure will be fully described in this section.

The LOGIS interface. LOGIS will be able to deliver tutorials, practice tasks, tests, and surveys within a single user-friendly environment. It will be created as an applet using the Java programming language. The Java programming language is powerful, free, highly supported, and browser independent. A backend SQL Server database will be used to house data collected by the application.
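As a rough illustration of how learner data could flow from the applet to the backend database, here is a minimal JDBC sketch. It is not the actual LOGIS schema; the connection string, table name, and column names are hypothetical, and the application may persist different fields.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

/**
 * Minimal sketch of writing LOGIS-style response data to a backend SQL
 * database over JDBC. The connection string, table, and column names are
 * hypothetical; they are not taken from the actual LOGIS schema.
 */
public class ResponseLogger {

    private final String url;      // e.g. "jdbc:sqlserver://host;databaseName=logis" (driver-dependent)
    private final String user;
    private final String password;

    public ResponseLogger(String url, String user, String password) {
        this.url = url;
        this.user = user;
        this.password = password;
    }

    /** Store one learner response for one tutorial frame. */
    public void logResponse(int learnerId, String tutorial, int frame,
                            String response, boolean correct) throws SQLException {
        String sql = "INSERT INTO frame_responses "
                   + "(learner_id, tutorial, frame, response, correct, answered_at) "
                   + "VALUES (?, ?, ?, ?, ?, ?)";
        try (Connection con = DriverManager.getConnection(url, user, password);
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, learnerId);
            ps.setString(2, tutorial);
            ps.setInt(3, frame);
            ps.setString(4, response);
            ps.setBoolean(5, correct);
            ps.setTimestamp(6, new Timestamp(System.currentTimeMillis()));
            ps.executeUpdate();
        }
    }
}
```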


Figure 4 is an initial conception of LOGIS and describes some of its proposed components. This is a proposed view of the tutorial screen, where learners will complete tutorials based on behavioral graphing. The application will be storyboarded to optimize the look-and-feel and to ensure the correct interaction of the components.

[Figure 4 appears here: a prototype tutorial screen labeling the current performance indicator, the current task, the task area, the participant response box, the panel for the current task (with Basic Graphing skipped based on pretest performance), the current tutorial, the current view, and the options for the current view.]

Figure 4. A prototype of the interface for LOGIS showing the tutorial window.

All learners must complete the LOGIS Primer. The primer will acquaint the learner with the interface and explain the function of all available tools. The primer will be refined after the second beta test, based on feedback from the participant interviews.

Learners will complete all available tutorials listed in the Panel for current task. If the Basic Graphing tutorial label is gray, the learners will be exempt from taking that tutorial.


The criterion for exemption from this specific and unique tutorial is perfect performance on the pretest items designed to determine whether or not the learner has already mastered basic graphing concepts. Learners may choose to complete the Basic Graphing tutorial, but they are not obligated to do so if they are exempt.

Learners will read the textual content presented in the Task area and respond in the designated Participant response box. Occasionally, exhibits (see Figure 5) will accompany the frame content. Exhibits are supplemental information that provide examples for increased content clarity. After viewing an exhibit, the learner will be transferred back to the tutorial to continue with the instruction.

[Figure 5 appears here: the prototype tutorial screen with an exhibit overlay displayed for The Cumulative Graph tutorial.]

Figure 5. A prototype of the interface for LOGIS showing the exhibit window.

After completing select tutorials, learners will be transferred to the practice screen (see Figure 6) where they will complete a practice task designed to supplement the previously completed tutorial.
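Before turning to the practice screen, the exemption rule described above can be summarized in a short sketch: the Basic Graphing tutorial becomes optional only when every basic-graphing pretest item was answered correctly. This is an illustrative reconstruction, not LOGIS code, and the names are hypothetical.

```java
import java.util.List;

/**
 * Minimal sketch of the exemption rule described above: the Basic Graphing
 * tutorial is grayed out (optional) only when the learner answered every
 * basic-graphing pretest item correctly. Names are illustrative.
 */
public class BasicGraphingGate {

    /** Each entry is true when the corresponding pretest item was answered correctly. */
    public static boolean isExempt(List<Boolean> basicGraphingPretestItems) {
        // Exemption requires perfect performance on this subset of pretest items.
        return !basicGraphingPretestItems.isEmpty()
                && basicGraphingPretestItems.stream().allMatch(Boolean::booleanValue);
    }

    public static void main(String[] args) {
        System.out.println(isExempt(List.of(true, true, true)));   // true: tutorial optional
        System.out.println(isExempt(List.of(true, false, true)));  // false: tutorial required
    }
}
```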


[Figure 6 appears here: a prototype practice screen for The Cumulative Graph showing the graphing instruction ("Label the ordinate axis"), the graphing area, the tools for the current view, the current performance indicator, and the options for the current view.]

Figure 6. A prototype of the interface for LOGIS showing the practice window.

During the practice task, learners will perform discrete tasks, and forward stepwise progress will be contingent upon the accurate performance of each step. Practice tasks will involve the step-by-step creation of a specific graph using the tools provided in the Tools for the current view panel, where each step must be successfully completed before the practice continues. The directions for each step will be displayed in the Graphing instruction box, and learners will complete the step inside the Graphing area while receiving feedback on the correctness of their action.

At any point during the tutorial or the practice, learners will be able to see their current progress in the Current performance panel. In addition, they will be able to adjust the look-and-feel of the interface using the Options button.

The survey component of LOGIS will deliver survey items (see Figure 7) requiring learners' responses.


The Likert scale, which uses a 5-point response scale from strongly agree to strongly disagree, will be replaced by a sliding scale with the same terminal parameters. The slide's position will represent a corresponding number between 1.000 and 5.000. This implementation should increase the precision of the survey instrument because learners will have greater leverage in reporting their opinions.

[Figure 7 appears here: a prototype survey screen for the Graphing Attitude Survey showing a survey item and a sliding response scale anchored by Strongly Agree and Strongly Disagree.]

Figure 7. A prototype of the interface for LOGIS showing the survey window.
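A minimal sketch of the sliding response scale described above follows; the slider resolution and method names are assumptions, but the mapping from slider position to a value between 1.000 and 5.000, reported to three decimal places, matches the description.

```java
/**
 * Minimal sketch of the sliding response scale: a slider position
 * (here 0..range ticks) is mapped onto a value between 1.000 and 5.000,
 * reported to three decimal places. Names are illustrative.
 */
public class SlidingScale {

    private static final double MIN = 1.000;  // one terminal of the scale
    private static final double MAX = 5.000;  // the other terminal of the scale

    /** Convert a slider position in [0, range] to a survey value in [1.000, 5.000]. */
    public static double toSurveyValue(int position, int range) {
        double proportion = (double) position / range;
        double value = MIN + proportion * (MAX - MIN);
        return Math.round(value * 1000.0) / 1000.0;  // three-decimal precision
    }

    public static void main(String[] args) {
        System.out.println(toSurveyValue(0, 1000));    // 1.0   (one terminal)
        System.out.println(toSurveyValue(500, 1000));  // 3.0   (neutral midpoint)
        System.out.println(toSurveyValue(733, 1000));  // 3.932
    }
}
```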


The assessment component (see Figure 8) will allow LOGIS to collect pretest and posttest data. Learners will read test items in the Test item area and then respond in the Response box.

[Figure 8 appears here: a prototype assessment screen showing a test item ("The slope of a ________ graph can be negative.") and an answer box.]

Figure 8. A prototype of the interface for LOGIS showing the knowledge assessment window.

The LOGIS interface is expected to evolve based on expert opinion and advice, and on results from the alpha and beta tests.

The LOGIS development process. LOGIS will be developed using strict guidelines. Firstly, initial content will be created for the tutorials, survey, and assessment. These items will be created under the direction of the subject matter and measurement experts. The items will then be alpha tested by the researcher and select individuals who are both knowledgeable about the content and familiar with the study. The content for the tutorials, survey, and assessment will be revised based on the alpha test results and reactions from the experts and peers.


107 The content will then be beta tested by students en rolled in one of the behavioral courses. These students will have experienced PI, thus the instructional methodology should be familiar to them. Students will voluntar ily complete the content and participation will not affect their course grade in any way. All content items will then be examined and revised, based on feedback from the be ta test group. After the first beta test and subsequent revisions, an application interface will be created. This will be done under the guidance of a subject matter expert and an instructional design expert. Additional survey ite ms will then be created under the guidance of the measurement expert. These new surv ey items will solicit data on the learners’ attitudes towards the interface and overa ll attitudes towards the application. Next, the practice task will be created under the g uidance of the subject matter expert. The revised content, including the new survey items and practice task will then be coupled to the interface and this combination will essentially be LOGIS. LOGIS will then be alpha tested by the researcher and peers an d that process will be followed by revisions. Following the alpha test, LOGIS will be beta tested by students enrolled in one of the behavioral courses. These students will have e xperienced PI but not LOGIS and they will be different from the first beta testers. Sim ilar to the first beta testers, these students will be under performance based course contingencie s. After completing the beta test, select participants will be randomly chosen and verbally interviewed about LOGIS and subsequently i ts interface. The interviews will not be taped and the researcher will be the sole in terviewer. The interview will be based on three questions:


What did you like most about (LOGIS/the interface)?
What did you like least about (LOGIS/the interface)?
How would you make (LOGIS/the interface) better?

Important points from these interviews will be recorded on paper and used in the revision process. The development of LOGIS is fully described by Figure 9:


[Figure 9 appears here: a flowchart of the LOGIS development iterations. Items (tutorials, survey, test) are created, alpha tested, and revised; they are delivered in the first beta test (Group A), reviewed against the ADDIE model, and revised; the interface, practice task, and additional survey items are then created and assembled into LOGIS; LOGIS is alpha tested and revised, beta tested with Group B, reviewed against ADDIE and revised again, and finally carried into the experimental evaluation of LOGIS.]

Figure 9. A flowchart of the LOGIS components and instruments development iterations.


110 The Knowledge Assessment development. The LOGIS assessment component will focus on two ca tegories, knowledge assessment and skill assessment. The knowledge ass essment instrument will be developed based on input from both the subject matt er expert and measurement expert. The knowledge assessment instrument will be develop ed based on the 10-step process outlined by Crocker and Algina (1986, p. 66 ): 1. Identify the primary purpose(s) for which the test measurements will be used 2. Identify behaviors that represent the construct or define the domain 3. Prepare a set of test specifications, delineating t he proportion of items that should focus on each type of behavior identified in Step 2 4. Construct an initial pool of items 5. Have items reviewed (and revised as necessary) 6. Hold preliminary item tryouts (and revise as necess ary) 7. Field test the items on a large sample representati ve of the examinee population for whom the test is intended 8. Determine statistical properties of the items, and when appropriate, eliminate items that do not meet preestablished criteria 9. Design and conduct reliability and validity studies for the final form of the test 10. Develop guidelines for administration, scoring, and interpretation of the test scores (e.g., prepare norm tables, suggest recommen ded cutting scores or standards for performance, etc.). Each step will be completed before the assessment i s used in the evaluation. Below is a detailed description of each of the 10 s teps:


111 1. Identify the primary purpose(s) for which the test measurements will be used This assessment will serve two purposes. Firstly, it will be used to discriminate among learners, using performance as a means of dis cerning the levels of content mastery that each learner has acquired. Secondly, it will be used as a diagnostic test to determine if a learner has qualified to skip the first tutori al ( Basic Graphing ). The different objectives, discrimination and diagno stic, are somewhat disjoint and imply different levels of assessment difficulty. T his means that ideally the discrimination items will be of medium difficulty where variances in performance will be maximized, and diagnostic items will be somewhat easy such tha t problem areas are revealed. For the purposes of the current study, diagnostic items wil l be at least medium difficulty because the objective is to determine if a learner’s basic graphing skills are strong enough to warrant skipping the first tutorial. 2. Identify behaviors that represent the construct or define the domain This assessment will reveal the absolute performanc e levels of the learners. The behaviors that define this domain are based on the major goals for each tutorial that are defined in the Design phase of this chapter and lis ted in Appendix A. Each major objective will have corresponding frames (individual presentations of content), and these frames will be the foundation f or the items. This eliminates the need for item specification because the frames themselve s contain stimuli that must be mastered during instruction and demonstrated during the assessment. It is important to note that frames will not simply be copied to the a ssessment. Normally tutorial frames have formal or thematic prompts present, or they ma y contain references to previous or subsequent frames. This means that the content of each frame will be examined for the


potential to be adapted to the assessment. The major advantage of using the frames' content as the foundation for the assessment items is that this method will help keep the assessment items consistent with the instructional content.

It is important to point out that the value of the assessment is not diminished by the initially loose item specifications. Crocker and Algina (1986) proposed that as specificity increases, practicality may decrease, and that flaws in tightly specified structures may propagate to every item created. They suggested that a certain level of subjectivity and leeway is needed when creating items if issues of practicality and flaw avoidance are to be maximized.

3. Prepare a set of test specifications, delineating the proportion of items that should focus on each type of behavior identified in Step 2

Crocker and Algina (1986) used Bloom's (1956) taxonomy when determining assessment specifications. Bloom is not treated within the current study, but Gagne et al. (2005, p. 61) provided a comparison (see Table 5) of Bloom's taxonomy and Gagne's classification scheme:

Table 5
A Comparison of Bloom's Taxonomy and Gagne's Classification Scheme

Bloom           Gagne
Evaluation      Cognitive strategy, problem solving, rule using
Synthesis       Problem solving
Analysis        Rule using
Application     Rule using
Comprehension   Defined concepts, concrete concepts, and discriminations
Knowledge       Verbal Information


It is evident that there is overlap between the two classifications; thus, this study will use Gagne's classification as a base for the assessment specification. Upon inspection, Gagne et al.'s (2005) classification reveals two strands, rule-using and non-rule-using (Gagne considered Verbal Information a separate learning domain). Rule-using is subsumed by problem-solving, thus problem-solving will be the first class. The assessment specifications will be based on these two classes: 1) problem-solving and 2) defined concepts, concrete concepts, and discriminations. This implies that each item on the assessment will be classified as either problem-solving or defined concepts, concrete concepts, and discriminations.

Problem-solving items will compose 70% of the assessment, and 30% will be defined concepts, concrete concepts, and discriminations items. This distribution reflects the importance of higher order activities but does not ignore lower level requirements. The weights of the classes may be adjusted based on expert advice.

The assessment will contain items from each major goal area in each tutorial and will be distributed according to Table 6.


Table 6
Tutorial Weight Distribution

Tutorial                                     Weight
The Control And Measurement Of Behavior      20%
The Importance Of Graphing                   20%
Basic Graphing                                0%
Behavioral Graphing Concepts                 20%
The Cumulative Graph                         20%
The Multiple Baseline Graph                  20%

The decision to exclude Basic Graphing from the assessment is based on the fact that Basic Graphing is a prerequisite for Behavioral Graphing Concepts, The Cumulative Graph, and The Multiple Baseline Graph. Assessing the last three goals therefore sufficiently reveals Basic Graphing performance.

It is expected that a pool of items will be created for each tutorial, covering each major goal. These items will be trial tested during the two beta tests and reduced such that the predetermined distribution for the items and item class is achieved.

4. Construct an initial pool of items

The initial pool of items will be constructed by first considering each frame (stimulus and correct response) of every tutorial and drawing from those frames an appropriate sample representing each major objective. Each item in the sample will then be modified, removing cues, prompts, and inappropriate external references. Next, each item will be assigned a class based on the item's content. It is expected that some


115 statements will be generally more suited for a part icular class and they will be assigned accordingly. Each statement will then be examined and an appropriate item format will be assigned. The format will include: alternativechoice, for example true/false and yes/no; multiple-choice; and short-answers. The fo rmat assignment will be based on the item’s content and class because some formats are m ore suitable for certain classes, likewise each class has optimal formats. After the format assignment is complete, each item will be finalized based on the guidelines and checklist developed by Reynolds, Liv ingston, and Willson (2006, p. 205). The alternate-choice items, for example true/false items, will be developed using the guidelines in Appendix B. After the alternative-ch oice items have been created, a checklist (Reynolds et al., 2006, p. 207) will be u sed to finalize those items. If any statement is unchecked, the item will be revised. The checklist is listed in Appendix C. The multiple-choice items will be developed using t he Reynolds et al. (2006, p. 190) guidelines listed in Appendix D. The printed format of the multiple-choice items is very important, this aspect will be guided by Reyno lds et al. (2006, p. 190) and will determine the completeness of the first guideline “ Use a printed format that makes the item as clear as possible”. The print format guide lines are listed in Appendix E. After the multiple-choice items have been created, a chec klist (Reynolds et al., 2006, p. 199) will be used to finalize the items. If any stateme nt is unchecked, the item will be revised. The checklist is listed in Appendix F. The short-answer items will be developed using the Reynolds et al. (2006, p. 232) guidelines listed in Appendix G. After the short-a nswer items have been created, a


116 checklist (Reynolds et al., 2006, p. 233) will be u sed to finalize the items. If any statement is unchecked, the item will be revised. The checklist is listed in Appendix H. These guidelines will form a basis for the developm ent of the various items. Where appropriate, the guidelines will be modified or adapted to suit the requirements of the current study. 5. Have items reviewed (and revised as necessary) The pool of items will be reviewed by experts. The review will be based in part on the following considerations: Accuracy Appropriateness or relevance to test specifications Technical item-construction flaws Grammar Offensiveness or appearance of “bias” Level of readability (Crocker & Algina, 1986, p. 81) 6. Hold preliminary item tryouts (and revise as necess ary) Preliminary tryouts will be completed in first beta test, and the item pool is expected to be significantly larger than the target amount. Statistical procedures and expert examination will reduce the number of items in the pool before the second beta test. 7. Field test the items on a large sample representati ve of the examinee population for whom the test is intended


This field test will be completed in the second beta test. The number of items in the pool will be significantly smaller than in the first beta test. After the second beta test, statistical procedures and expert examination will reduce the number of items in the pool to the target number of items for this assessment.

8. Determine statistical properties of the items, and when appropriate, eliminate items that do not meet preestablished criteria

The Item Difficulty Index will be calculated for each item in the pool, and each item will be judged based on the optimal p values provided by Reynolds et al. (2006, p. 144), where the p value is the proportion of learners that answered the item correctly. The desired p value for alternate-choice items is .85, for short-answer items it is .50, and for multiple-choice items it is .74 for items with four distracters and .69 for items with five distracters. Items whose p values are either too high or too low will be examined and possibly modified or removed from the pool.

Cronbach's Alpha will also be used to estimate the internal consistency of the instrument. Items with low item-total correlations will be modified or removed.

Multiple-choice items will be subject to Distracter Analysis to ensure that the distracters are performing well. Distracters are supposed to attract learners in the bottom 50% group while correct answers are supposed to attract the top 50% group. Distracter Analysis will entail a manual examination of the data, and decisions regarding revision or elimination will be made with expert consultation.

The beta tests are expected to produce data which will be examined using a table similar to Figure 10; this table allows for the immediate examination of all data points, facilitating the decision-making process. Multiple-choice item totals will be housed in fields 1 through 5, depending on the number of distracters in the item. Alternate-choice item totals will be housed in field 1 and field 2. Short-answer item responses will not be entered into the table.
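For illustration, the two item statistics named above, the item difficulty index p and Cronbach's Alpha, could be computed from a scored response matrix roughly as follows. The data layout (rows are examinees, columns are dichotomously scored items) and all names are assumptions of this sketch, not part of LOGIS.

```java
/**
 * Minimal sketch of two item statistics: the item difficulty index p
 * (proportion answering an item correctly) and Cronbach's Alpha computed
 * from a 0/1 scored item-response matrix. Layout and names are assumptions.
 */
public class ItemStatistics {

    /** p value for one item: proportion of examinees who answered it correctly. */
    public static double difficulty(int[][] scores, int item) {
        int sum = 0;
        for (int[] examinee : scores) sum += examinee[item];
        return (double) sum / scores.length;
    }

    /** Alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). */
    public static double cronbachAlpha(int[][] scores) {
        int n = scores.length, k = scores[0].length;
        double[] totals = new double[n];
        double sumItemVariance = 0.0;
        for (int j = 0; j < k; j++) {
            double[] item = new double[n];
            for (int i = 0; i < n; i++) {
                item[i] = scores[i][j];
                totals[i] += scores[i][j];
            }
            sumItemVariance += variance(item);
        }
        return (k / (k - 1.0)) * (1.0 - sumItemVariance / variance(totals));
    }

    private static double variance(double[] x) {
        double mean = 0.0;
        for (double v : x) mean += v;
        mean /= x.length;
        double ss = 0.0;
        for (double v : x) ss += (v - mean) * (v - mean);
        return ss / (x.length - 1);  // sample variance
    }

    public static void main(String[] args) {
        int[][] scores = {        // four examinees, three items, 1 = correct
            {1, 1, 0},
            {1, 0, 0},
            {1, 1, 1},
            {0, 1, 1}
        };
        System.out.printf("p(item 1) = %.2f%n", difficulty(scores, 0));   // 0.75
        System.out.printf("alpha     = %.2f%n", cronbachAlpha(scores));
    }
}
```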


[Figure 10 appears here: a prototype worksheet with one row per item and columns for the item, the response totals for fields 1 through 5, the item-total correlation, and p.]

Figure 10. A prototype knowledge assessment data worksheet.

9. Design and conduct reliability and validity studies for the final form of the test

Validity and reliability studies will be conducted at both beta tests. Studies conducted after the first beta test are not the norm, but in this case it is necessary to assess the progress of the instrument at every step because there are only two opportunities for beta testing.

Validity refers to the correctness or accuracy of interpretations made based on collected data (Reynolds et al., 2006). The validation process, which is the collection and analysis of evidence to support the interpretation, is divided into three groups: content-related, construct-related, and criterion-related. Construct-related evidence of validity can be viewed as a superset of both content-related evidence and, where applicable, criterion-related evidence (Fraenkel & Wallen, 2006; Reynolds et al., 2006; Trochim, 2002). The investigation of the construct usually involves an in-depth analysis of the content, its purpose, and all pertinent criteria. To avoid confusion among the different terms, this study will consider content-related evidence as sufficient for the validation process.


119 Content-related evidence of validity for the assess ment will be determined prior to each beta test. It is important to note that the b eta tests involve a pool of items and although the pool is not the target assessment, it is important that the validation process be applied at each step to ensure an optimal final product. Content and measurement experts will be asked to ex amine items and determine if the items address the content domain. The experts will be presented with the item, purpose, major goal, and class. They will be asked to determine if the item mirrors its purpose, addresses its major goal, and if it is bot h appropriate and a good representation of the class to which it is assigned. The items wi ll then be revised based on feedback from the experts. Reliability refers to the consistency of the obtain ed data (Fraenkel & Wallen, 2006). A reliability coefficient estimate will be calculated after each beta test. An Internal Consistency method will be used because on ly one administration of the assessment measure will occur. The knowledge asses sment will use the alpha coefficient (Cronbach’s Alpha). 10. Develop guidelines for administration, scoring, and interpretation of the test scores (e.g., prepare norm tables, suggest recommended cut ting scores or standards for performance, etc.) The knowledge assessment will be delivered online o ne item at a time without the opportunity to backtrack. This will reduce the use of clues from previous items and provide a more controlled testing environment. The assessment will be electronically scored and learners will not have running tally of their score when they are taking the assessment.


120 It is expected that participants in the current stu dy will make mistakes on the short-answer items. Errors including incorrect spe lling, wrong tense, and singular versus plural, are just a few of the mistakes that are exp ected. Each response for every participant will be examined prior to any data anal ysis. If obvious errors are present, credit will be given to the participant and their s core will be updated The Skills Assessment development. The skills assessment instrument will be developed based on a modified version of the ten step process outlined by Crocker and Alg ina (1986, p. 66): 1. Identify the primary purpose(s) for which the test measurements will be used This assessment will be used to discriminate among learners, using performance as a means of discerning the levels of skill master y that each learner has acquired. 2. Identify behaviors that represent the construct or define the domain LOGIS delivers instruction to learners such that th e terminal skill behavior is the construction of cumulative graphs and multiple base line graphs. These two behaviors form the domain for the skills assessment. 3. Prepare a set of test specifications, delineating t he proportion of items that should focus on each type of behavior identified in Step 2 The skills assessment will have four required items two cumulative graphs of similar difficulty and two multiple baseline graphs of similar difficulty, where the difficult of the items will be assessed by the cont ent expert. These graphs will represent an even distribution of the domain behaviors. The practice tasks within LOGIS are the primary ski lls instruction agents. They will contain discrete steps that lead to the creati on of a correct graph. Each step is in


121 essence an item and can be treated as such. These items (steps) will be collected for both the cumulative graph tasks and the multiple-baselin e tasks and they will be used as the basis for the item specifications for the skills as sessment. This means that all items in the LOGIS practice tasks will be examined and possibly adapted to become assessment points in the skills assessment. 4. Have items reviewed (and revised as necessary) The four skills items will be reviewed by experts. The review will be based on Crocker and Algina (1986, p. 81): Accuracy Appropriateness or relevance to test specifications Technical item-construction flaws Grammar Offensiveness or appearance of “bias” Level of readability After the skills items have been created, the check list listed in Appendix I will be used to finalize the items. If any statement is unchecked, the items will be revised. 5. Hold preliminary item tryouts (and revise as necess ary) Preliminary tryouts will be completed in the first beta test and the four skills items will be adjusted based on performance. The revisio n will take the target performance criterion for the instructional unit into considera tion. It is important to note that the terminal skill performance requirement is that the learners construct correct graphs. To this end, revisions will only occur if they do not jeopardize the overall goals and target mastery levels of the instruction.


122 6. Field test the items on a large sample representati ve of the examinee population for whom the test is intended Field testing will occur in the second beta test, f ollowed by further revision if necessary. 7. Design and conduct reliability and validity studies for the final form of the test Validity and reliability studies will follow the sa me theme as those used in the knowledge assessment. Content-related evidence of validity for the assessment will be determined prior to each beta test. Content and me asurement experts will be asked to examine the four skills items and determine if they address the content domain, that is, if they require participants to demonstrate what they have learned, and if they are of the appropriate skill level. The skills items will be revised based on feedback from the experts. Three raters will independently score each skill as sessment item. The purpose of multiple raters is to ensure that the grading is co nsistent and without bias. Two scores will be examined to determine the reliability of th e skills assessment, and they will represent consensus and consistency estimates. It is important to return two scores because inter-rater agreement (consensus) does not necessarily imply inter-rater consistency (consistency) and vice-versa (Stemler, 2004). Both estimates will be used in tandem because the goal is to produce scores with h igh agreement and consistency, justifying the averaging of raters score to produce an aggregate score for each participant. First, the inter-rater agreement will be calculated This percent score will reflect the degree of agreement among the three raters. Se condly, an Intraclass Correlation Coefficient (ICC) will be calculated using the twoway random effects model. This


123 model was chosen based on the flowchart for selecti ng the appropriate ICC (McGraw & Wong, 1996, p. 40). The raters in this study will be viewed as both a second factor and as a random sample. Their variability will be cons idered relevant and they will be selected from a pool of available raters. The resu lt is a two-way random effects model, and although it is computationally similar to the m ixed effects method where raters are considered fixed and their variability irrelevant, the two-way random effects model is generalizable to a larger rater population. If the ICC falls below .80 or the agreement falls below .90 (Fraenkel & Wallen, 2006), the rubr ic will be examined and possibly modified, the assessment will re-graded, and if ne cessary the raters will be retrained. The exact cutoff limits values for the agreement sc ore and the ICC estimate will be carefully considered and possibly reevaluated after data has been collected. It must be noted that it is entirely possible for the assessme nt to be reliable even if .90 agreement and .80 correlations does not occur. The three raters will be selected and trained prior to the first beta test. The first two raters will be knowledgeable about the content and familiar with the PI instructional method. The third rater will be from outside the c ontent field and will not be familiar with either the content or PI as an instructional m ethod. This representation of raters does not violate the randomness criterion for the I CC model because these raters are simply the individuals chosen. 8. Develop guidelines for administration, scoring, and interpretation of the test scores (e.g., prepare norm tables, suggest recommended cut ting scores or standards for performance, etc.)


The skills assessment will require that participants manually create four graphs. These graphs, two cumulative and two multiple-baseline graphs, will be scored by three raters independently. The graphs will be scored based on a rubric that will be developed from the item specifications for the assessment. The rater will assign points (0, 1, 2, or 3) to each item on the rubric, for every participant. The point values are described in Table 7.

Table 7
Points Allocation for Items on the Skills Assessment Rubric

Points   Meaning
0        The item's criterion is completely absent.
1        The item's criterion is minimally fulfilled, or it is significantly over-represented or under-represented.
2        The item's criterion is satisfactorily represented, or it is minimally over-represented or minimally under-represented.
3        The item's criterion is perfectly represented.

The points earned by a participant will be summed, and a percent score will be calculated based on the participant's earned points and the total possible points for the graph. The maximum number of points will occur if the participant is awarded three points for every item on the rubric. The percent scores for each of the four graphs will be averaged, and the resulting score will be the participant's overall score on the skills assessment.
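The scoring rule just described can be expressed compactly; the sketch below assumes one array of 0-3 rubric points per graph and is illustrative only, not the actual scoring program.

```java
/**
 * Minimal sketch of the rubric scoring: each rubric item is awarded 0-3
 * points, a percent score is computed per graph, and the overall skills
 * score is the mean of the four graph percentages. Names are illustrative.
 */
public class SkillsScoring {

    /** Percent score for one graph: points earned over the maximum (3 per rubric item). */
    public static double graphPercent(int[] rubricPoints) {
        int earned = 0;
        for (int p : rubricPoints) earned += p;        // each entry is 0, 1, 2, or 3
        return 100.0 * earned / (3.0 * rubricPoints.length);
    }

    /** Overall skills score: mean of the percent scores for the four graphs. */
    public static double overallScore(int[][] graphs) {
        double sum = 0.0;
        for (int[] graph : graphs) sum += graphPercent(graph);
        return sum / graphs.length;
    }

    public static void main(String[] args) {
        int[][] graphs = {
            {3, 2, 3, 3},   // cumulative graph 1
            {2, 2, 3, 1},   // cumulative graph 2
            {3, 3, 3, 3},   // multiple baseline graph 1
            {1, 2, 2, 3}    // multiple baseline graph 2
        };
        System.out.printf("Overall skills score: %.1f%%%n", overallScore(graphs));
    }
}
```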


125 The Survey development. The Graphing Attitude Survey instrument will be dev eloped based on a modified version of the ten step process outlined by Crocker and Algina (1986, p. 66): 1. Identify the primary purpose(s) for which the test measurements will be used The Graphing Attitude Survey will be used to descri be the attitudinal characteristics of the participants. 2. Identify behaviors that represent the construct or define the domain The survey items will be categorized into three gro ups, attitude towards graphing, attitude towards the LOGIS interface, and attitude towards the LOGIS application. Attitude towards graphing will reflect the learners ’ willingness to use graphing techniques to solve problems, where choosing to use graphing techniques is equated to a positive attitude towards graphing. Consistent wit h the review of the literature, the attitude construct will be operationalized as choice Items involving attitude towards the LOGIS interfac e will not be created until after the first beta test because the LOGIS will no t be assembled until that time. Attitude towards the interface will be measured by learners’ report on various aspects of the interface; choosing to use the interface is equated to a positive attitude towards the interface. The learners’ attitude towards the LOGIS applicatio n is a measure of the learners’ perception of the value or worth of LOGIS. Those who perceive LOGIS to be a useful and effective application will more than likely res pond positively to items in this category, choosing LOGIS if given the choice of ins tructional applications.


126 3. Prepare a set of test specifications, delineating t he proportion of items that should focus on each type of behavior identified in Step 2 Each item in the survey will belong to only one cat egory and the categories will have an equal number of items. 4. Construct an initial pool of items The initial pool of items will focus on the partici pants’ attitude towards graphing. This pool will be constructed based on the guidelin es provided by Crocker and Algina (1986, p. 80). The guidelines are listed in Append ix J. After the survey items have been created, the checklist listed in Appendix K will be used to finalize the items. If any statement is unchecked, the items will be revised. 5. Have items reviewed (and revised as necessary) The survey items will be reviewed by experts. The review will be based on Crocker and Algina (1986, p. 81): Accuracy Appropriateness or relevance to test specifications Technical item-construction flaws Grammar Offensiveness or appearance of “bias” Level of readability 6. Hold preliminary item tryouts (and revise as necess ary) Preliminary tryouts will be completed in the first beta test, and the item pool is expected to be significantly larger than the target amount. Expert examination of the


127 resulting descriptive statistics will reduce the nu mber of items in the pool before the second beta test. 7. Field test the items on a large sample representati ve of the examinee population for whom the test is intended This field test will be completed in the second bet a test. The number of items in the pool will be significantly less than in the fir st beta test. At this point, survey items focused on attitude towards LOGIS and attitude towa rds the LOGIS interface will be added to the pool. After the second beta test, exp ert examination of the resulting descriptive statistics will reduce the number of it ems in the pool to the target number of items for this survey. 8. Determine statistical properties of the items, and when appropriate, eliminate items that do not meet pre-established criteria The survey data will be statistically examined usin g exploratory factor analysis where items are associated a priori with factors. To facilitate interpretation, the output will be rotated using the Varimax rotation method. 9. Design and conduct reliability and validity studies for the final form of the test Validity and reliability studies will be conducted after both beta tests and will follow a theme similar to the knowledge assessment. The theoretical background for the procedures is discussed in the The Knowledge Assess ment development section. Prior to each beta test, content and measurement ex perts will be asked to examine items and determine if they address the content dom ain. The experts will be presented with the item and the category. They will be asked to determine if the item is both


128 appropriate and a good representation of the catego ry to which it is assigned. The items will be revised based on feedback from the experts. A reliability coefficient estimate will be calculat ed after each beta test. An Internal Consistency method will be used because on ly one administration of the assessment measure will occur. The survey will use the alpha coefficient (Cronbach’s Alpha) to determine reliability and aid in the revi sion process. 10. Develop guidelines for administration, scoring, and interpretation of the test scores (e.g., prepare norm tables, suggest recommended cut ting scores or standards for performance, etc.) The Graphing Attitude Survey will be delivered elec tronically. A digital slide based on a 1.000 to 5.000 scale with three decimal places accuracy will be used in lieu of the Likert scale. The scale will range from strong ly positive to strongly negative with a 3.000 being neutral. Similar to the Likert scale, the slide is a bipolar, unidimensional, interval measurement scale that can be used for str uctured response formats (Trochim, 2002). The only item that will not use this method is the survey item that requests general feedback on the interface. That item will be an unstructured response format; therefore it will only be used as a guide in revisi ng the interface. Using a digital slide instead of the usual five point Likert Scale will i ncrease the accuracy of the responses without compromising the goals of the survey. The results from the Graphing Attitude Survey will be a part of the estimation of the usefulness and value of LOGIS. To facilitate t his estimation, an aggregate score will be calculated for the attitude towards graphing cat egory. The aggregate score will be an average of the scores of all the items in the attit ude towards graphing category. This


129 means that along with the knowledge and skills asse ssments scores for the pre and posttests, each participant will have one score rep resenting graphing attitude before the treatment (pre-survey) and one score representing g raphing attitude after the treatment (post-survey). The pre and post surveys will conta in the same items for the attitude towards graphing category, but only the post survey will have items from the attitude towards the LOGIS interface and attitude towards LO GIS categories. The graphing attitude aggregate score is necessary for the estimation of the value of LOGIS and it cannot include either the attitude towards the LOGIS interface score or the attitude towards LOGIS score because neither ca n be assessed in the pre-survey. Threats to assessment validity Trochim (2002) listed several threats to validity a nd they must be account for within the current study. The current study contai ns two constructs that must be explicitly defined and operationalized. A part of this study is the estimation of the value of the instructional treatment and its relationship to learning. The two constructs, instruction and learning have been theoretically ex amined and translated into LOGIS. LOGIS is an application (instructional construct) t hat produces an observable outcome (learning construct). The fundamental constructs o f instruction and learning form the basis for threat analysis of the assessments: Inadequate Preoperational Explication of Constructs This threat is a result of poor initial definitions and operationalization of the constructs. It is expected that expert critiqu e will reveal inadequate definitions of constructs if they exist. Mono-Operation Bias


130 This threat does not pertain to the assessment meas ures. Although the assessment measures are an integral part of LOGIS, it is accepted that different versions of LOGIS might yield different p erformance outcomes. The step-by-step creation of LOGIS reduces this thr eat because multiple opportunities exist to determine if LOGIS is perfor ming well. Mono-Method Bias Multiple opportunities to analyze the observations exist thereby reducing this threat. Interaction of Different Treatments This is not expected to be a significant threat, be cause the treatment is unique at least within the participants’ course. Interaction of Testing and Treatment This is a valid and accepted threat because the pre test assessment instrument might bias the treatment and subsequent performance outcome. The delay between the pretest and the treatment is not sufficient to reduce this threat. Restricted Generalizability Across Constructs This threat is accepted because the current study d oes not readily identify all possible constructs that may be impacted by the treatment. Confounding Constructs and Levels of Constructs This threat is accepted because the current study d oes not readily identify all possible constructs that may be impacted by the treatment, nor all possible forms of the treatment. LOGIS is not a so lution to any construct


131 other than those described by the current study. The “Social” threats to construct validity will als o be addressed. Hypothesis Guessing, Evaluation Apprehension, and Experimenter Expectancies are not expected to be significant threats. Every attempt will be made to make the assessments as unobtrusive as possible and make their delivery as uniform and consistent as possible. Threats to the survey validity The survey is based primarily on the attitude const ruct, which has been defined and operationalized. In addition to the threats id entified for the assessments, the survey is susceptible to several validity threats. Mortality Location, Instrumentation and Instrument decay (Fraenkel & Wallen, 2006) are all pertinent to surveys but they are not expected to be significant for this survey. Mortal ity is a concern for longitudinal studies where missing participants translate into missing d ata. Location is not expected to be a factor because all participants will complete the s urvey in the familiar and nonthreatening computer laboratory. Instrumentation c oncerns are expected to be minimized because of the multiple refinements that will occur Instrument decay is not expected to be a factor because participants will have as much time as they need to complete the survey. Development Component 3 (Try out materials and activities with target audie nce members.) This sub-component is treated in the previous sub-c omponent “Prepare draft materials and/or activities”. Development Component 4 (Revise, refine, and produce materials and activiti es.)


132 This sub-component is treated in the previous sub-c omponent “Prepare draft materials and/or activities”. Development Component 5 (Produce instructor training or adjunct materials.) This sub-component is beyond the scope of the curre nt study. Implement Phase Implement Component 1 (Market materials for adoption by teachers or stude nts.) This sub-component is not applicable to the current study. Implement Component 2 (Provide help or support as needed.) This sub-component is not applicable to the current study. Evaluate Phase Evaluate Component 1 ( Implement plans for student evaluation.) Learner evaluation is an integral part of LOGIS. T he knowledge, skills and attitude measures will be used in part to determine the effectiveness and value of LOGIS. The second research question “Is LOGIS an effective form of instruction?” will be determined after the field trial is completed and w ill be based on non-experimental analysis. The field trial procedures are fully dis cussed in the section “Implement plans for program evaluation.” Effectiveness will be determined based on the (KSA) goals defined earlier in this Chapter. Those goals will serve as the basis for q uantitative judgment on the


effectiveness of the instruction, in essence whether or not learning occurred. The goals will be measured by a posttest for the knowledge component (K), a posttest for the skills component (S), and a survey (the Graphing Attitude Survey) for the attitude component (A).

Effectiveness will be based on academic parameters, but statistical data will be used to support estimates of usefulness or value. This means that LOGIS will be considered useful and effective if it produces educationally significant differences from pretest to posttest. Educationally significant differences will be characterized by, at minimum, a mean 10% increase in performance from the pretest to the posttest. The 10% mark is not arbitrary: an increase of 10% ensures that a student's grade will increase by one letter grade, and is therefore an educationally significant increase. The attitude measure's contribution to effectiveness will also be based on the 10% figure; in this case, the 10% is used simply for consistency. It must be noted that the attitudinal measure of effectiveness will be based solely on the aggregate score of the attitude towards graphing category, as described in "The Survey development" section in this chapter.

Descriptive statistics will be reported in addition to the differences in pretest and posttest scores. The mean and distribution of the scores will add insight into the effectiveness of LOGIS. It must be restated that the intent is not to claim that LOGIS causes increased performance. The objective is simply to determine an estimate of the application's value or usefulness.
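As an illustration of this criterion, the following is a minimal sketch, assuming scores are expressed as percentages from 0 to 100 and interpreting the 10% mark as a ten-percentage-point gain; the class and method names are hypothetical and are not part of LOGIS.

// A minimal sketch (hypothetical names): checks whether the mean gain from
// pretest to posttest meets the 10-percentage-point criterion described above.
public final class EducationalSignificance {

    static double mean(double[] scores) {
        double sum = 0.0;
        for (double s : scores) {
            sum += s;
        }
        return sum / scores.length;
    }

    static boolean isEducationallySignificant(double[] pretest, double[] posttest) {
        return mean(posttest) - mean(pretest) >= 10.0;
    }

    public static void main(String[] args) {
        double[] pretest = {43.0, 45.0, 40.0};
        double[] posttest = {62.0, 60.0, 58.0};
        System.out.println(isEducationallySignificant(pretest, posttest)); // prints true
    }
}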


134 Evaluate Component 2 (Implement plans for program evaluation.) The formative evaluation of LOGIS will entail a on e-to-one evaluation, a small-group evaluation, and a field trial (Dick et al., 2005; Morrison et al., 2004; P. L. Smith & Tillman, 2005). These three formative eval uation steps will be conducted sequentially where the results from one step will b e used as a guide to initiate the next step. The researcher will be responsible for recording ke y observations and reactions, allowing the participant to focus entirely on the i nstruction. This will reduce the participant’s workload and encourage meaningful int eraction between the researcher and the participant. The revision process will occur after each evaluati on. In the case of the one-toone evaluation, revisions will occur after each par ticipant has completed the instruction. Performance data, survey results, and descriptive e valuation information will be summarized then analyzed and recommended changes wi ll be made. It is important to note that errors, for example typographical errors or broken links, will be corrected immediately. Other errors, for example complex wor ds or phrases, might require adding or removing qualifiers or explanatory sections. If errors appear to result from participant specific attributes, those errors will be noted and the decision to address those errors will be made after all evaluations are complete. The de cision to revise participant specific errors will be made based on the researcher’s and e xpert insight, error complexity, available time, and available resources.
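The planned triage of errors can be pictured with the following minimal sketch. The enum constants and method names are hypothetical and serve only to illustrate the policy described above: simple mechanical errors are fixed at once, wording problems are revised, and errors tied to participant-specific attributes are deferred until all evaluations are complete.

// A minimal sketch (hypothetical names) of the revision triage described above.
final class RevisionTriage {

    enum ErrorKind { TYPOGRAPHICAL, BROKEN_LINK, COMPLEX_WORDING, PARTICIPANT_SPECIFIC }

    enum Handling { FIX_IMMEDIATELY, REVISE_TEXT, DEFER_UNTIL_ALL_EVALUATIONS_COMPLETE }

    static Handling handlingFor(ErrorKind kind) {
        switch (kind) {
            case TYPOGRAPHICAL:
            case BROKEN_LINK:
                return Handling.FIX_IMMEDIATELY;
            case COMPLEX_WORDING:
                return Handling.REVISE_TEXT;
            default:
                return Handling.DEFER_UNTIL_ALL_EVALUATIONS_COMPLETE;
        }
    }

    public static void main(String[] args) {
        System.out.println(handlingFor(ErrorKind.BROKEN_LINK));          // FIX_IMMEDIATELY
        System.out.println(handlingFor(ErrorKind.PARTICIPANT_SPECIFIC)); // DEFER_UNTIL_ALL_EVALUATIONS_COMPLETE
    }
}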


135 The participants for the evaluation phase are expec ted to come from an undergraduate course at a major southeastern univer sity. The course is expected to have an enrollment of between 80 and 100 students. These students generally have diverse majors, are at different academic levels on the con tinuum from freshman to senior, and represent different age groups. These students do not have a graphing component in their course, but they will have been exposed to behavior al principles and will be comfortable with PI as an instructional format. It is importan t to note that no student will take more than one type of evaluation. The evaluation will be completed in a computer labo ratory at a major southeastern university. The computer laboratory contains twent y personal computers arranged in five rows with four machines in each row. There are two types of computers in the laboratory, 13 computers carry the Intel Pentium 35 0 Megahertz processor and 7 carry the AMD 1.4 Gigahertz processor. The AMD computers are much faster than the Intel computers, but they all run the Windows XP operatin g system and have similar software packages installed. All computers in the laborator y are connected to the internet via 100 Megabits per second Ethernet connections. One-to-One Evaluation The one-to-one evaluation is used to detect obvious errors in instruction and also to obtain initial performance data and user reactio ns. During this evaluation, the designer interacts with individual users as the users comple te the instruction. This interaction provides the avenue through which data and reaction s can be observed and recorded (Dick et al., 2005).


Based on recommendations by Dick et al. (2005), three students will be selected to perform this evaluation. These students will represent the upper, middle, and lower performers in the course. To select the three students for the evaluation, all students will be ranked based on their current course grade. The first selection will be the student at the middle position of the upper 25% of the rank order. The second selection will be the student at the overall middle position of the rank order. The final selection will be the student at the middle position of the lower 25% of the rank order.

The data collection procedure for the one-to-one evaluation will be based on Dick et al. (2005, p. 283). Descriptive information will be recorded by the researcher and will be gathered using the following questions as the basis for verbal interaction:
1. Did you have any problems with specific words or phrases?
2. Did you have any problems with specific sentences?
3. Did you understand the themes presented in the instruction?
4. How effective was the sequencing of the instruction?
5. How would you describe the delivery pace of the instruction?
6. Was sufficient content presented in each section?
7. Was there sufficient time to complete the instruction?
8. What is your overall reaction to the instruction?
9. What are specific strengths of the instruction?
10. What are specific weaknesses of the instruction?
These questions will encourage the participant to verbalize both strengths and weaknesses of the instruction, as well as provide an opportunity for general comments related to the instructional application.
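To make the planned rank-order selection concrete, the following is a minimal sketch; it assumes a hypothetical list of students already sorted from highest to lowest course grade, and all names are illustrative rather than part of the study.

import java.util.List;

// A minimal sketch (hypothetical names) of selecting the three one-to-one
// participants from a list sorted from highest to lowest course grade.
public final class OneToOneSelection {

    // Index of the middle element between positions 'from' (inclusive) and 'to' (exclusive).
    static int middle(int from, int to) {
        return from + (to - from) / 2;
    }

    static List<String> selectThree(List<String> rankedStudents) {
        int n = rankedStudents.size();
        int upperCut = n / 4;       // end of the upper 25% of the rank order
        int lowerCut = n - n / 4;   // start of the lower 25% of the rank order
        String upper = rankedStudents.get(middle(0, upperCut));   // middle of the upper 25%
        String overall = rankedStudents.get(middle(0, n));        // overall middle position
        String lower = rankedStudents.get(middle(lowerCut, n));   // middle of the lower 25%
        return List.of(upper, overall, lower);
    }

    public static void main(String[] args) {
        List<String> ranked = List.of("s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8");
        System.out.println(selectThree(ranked)); // [s2, s5, s8]
    }
}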


137 The researcher will establish rapport with the part icipant by encouraging the participant to react freely and honestly to the ins truction. Interaction between the researcher and the participant is critical to this evaluation. The participant will be asked to verbalize each step, thus facilitating rich and meaningful dialog between the researcher and the participant. The performance data, survey results, and one-to-on e evaluation information for the participant will be combined to form a general picture of the instruction. These results will be used to further refine the applicat ion. Based on expected changes in the instruction and differences between participants, o bservations and results are expected to differ among the one-to-one evaluations. Small-Group Evaluation The small-group evaluation is used to test the effe ctiveness of the one-to-one evaluation, and also to determine if participants c an complete the instruction without intervention (Dick et al., 2005). Based on recommendations by Dick et al. (2005), 9 s tudents will be selected to perform this evaluation. These students will repre sent the upper, middle and lower performers in the course. To select the nine stude nts for the evaluation, all students will be ranked based on their current course grade. The first selection will be the 3 students at the middle position of the upper 25% of the rank or der. The second selection will be the three students at the overall middle position of th e rank order. The final selection will be the three students at the middle position of the lo wer 25% of the rank order.


138 Unlike the one-to-one evaluation, the researcher wi ll not interact with the participant during the instruction. Except in extr eme cases where, for example equipment failure occurs, the researcher will limit interacti on until the instruction is complete. The data collection procedure for the small-group e valuation will be primarily based on the posttest, the survey, and descriptive information (Dick et al., 2005, p. 288). Descriptive information will be recorded by the res earcher and will be gathered using the following questions as the basis for verbal interac tion: 1. Did you have any problems with specific words or ph rases? 2. Did you have any problems with specific sentences? 3. Did you understand the themes presented in the inst ruction? 4. How effective was the sequencing of the instruction ? 5. How would you describe the delivery pace of the ins truction? 6. Was sufficient content presented in each section? 7. Was there sufficient time to complete the instructi on? 8. What is your overall reaction to the instruction? 9. What are specific strengths of the instruction? 10. What are specific weaknesses of the instruction? These questions will encourage the participant to v erbalize both strengths and weaknesses of the instruction, as well as provide a n opportunity for general comments related to the instructional application. The rese archer will establish rapport with the participant by encouraging the participant to react freely and honestly to the instruction. This will facilitate meaningful dialog between the researcher and the participant.


139 The performance data, survey results, and descripti ve information provided by the participants will be combined to form a general pic ture of the instruction. These results will be used to further refine the application. The participants for the small-group evaluation are not expected to complete the instruction at the same time. This means that mino r revisions might occur between evaluations thus observations and results are expec ted to slightly differ among the evaluations. Field Trial The field trial evaluation is used to test the effe ctiveness of the small-group evaluation, and also to determine if the instructio n is effective under normal learning circumstances (Dick et al., 2005). Based on recommendations by Dick et al. (2005), 20 students will be randomly selected to perform this evaluation. Unlike the on e-to-one and small-group evaluations, the researcher will not interact with the participa nts during the instruction or after the instruction. Except in extreme cases where, for ex ample equipment failure occurs, there will be no interaction with the participant. The data collection for the field trial evaluation will include the posttest and the survey results (Dick et al., 2005, p. 291). The ap plication will be revised based on the resulting data. Evaluate Component 3 (Implement plans for unit [course] maintenance and revision.) This sub-component is not applicable to the current study.


140 Summary This chapter outlined the proposed model-based deve lopment of LOGIS. Consistent with the goals of Developmental Research the combination of this chapter (Proposed Development) and chapter four (Actual Dev elopment) should present a complete picture of the process that was used to de velop LOGIS. The prescriptive nature of the documentation of phases should provide an av enue to answer the research question while helping to organize the development process. It must be noted, that under normal circumstances, the Evaluation phase might not be a summative evaluation. Considering the view that the creation of instruction is a dyna mic and ongoing process, the relevance of summative evaluation is questionable. Creation as a process that requires constant evaluation and revision implies that the summative evaluation is simply an in-depth formative evaluation.


141 CHAPTER FOUR ACTUAL DEVELOPMENT Chapter Map This chapter describes the changes from the propose d development to the actual development of LOGIS. These changes are discussed within the framework of the research questions. Collectively, Chapter Three (P roposed Development) and Chapter Four (Actual Development) present a complete pictur e of the development process from conception to implementation. The following map de scribes the organization of this chapter: Actual Development Chapter map Research question Analysis Phase Report and Reflection Analysis Phase Critique Design Phase Report and Reflection Design Phase Critique Development Phase Report and Reflection Development Phase Critique Implement Phase Report and Reflection


142 Implement Phase Critique Evaluate Phase Report and Reflection Evaluate Phase Critique Summary The researcher kept a written journal of observatio ns pertaining to the use of the model, instrument creation, and software developmen t. The purpose of this log was to document the development process and provide a poin t of reference for future commentary on the development of the application. Using a log was not an initial consideration in the proposed development, but the decision to create an observation log was made at the beginning of the Analysis phase. T he log was used to record procedures, observations, thoughts, questions, and comments. A significant portion of the critique of each phase is the result of reflecting on the mater ial written in the log book. LOGIS took approximately 33 weeks to complete. The re were periods of high activity and productivity and there were times when less work was accomplished. This situation made it difficult to determine the exact amount of time that was spent on each phase. General estimates of the time spent on each phase were determined based on the computer-logged dates of files used in the study, l ogbook entries, and researchergenerated estimates. These estimates are not exact but they provide a basis for commentary on the temporal cost of each phase.


143 Research Questions 1. How does the use of the ADDIE model influence the c reation of the LOGIS instructional application? To evaluate the ADDIE process, each phase will be s ubject to the following five questions based on (Gagne et al., 2005, p. 355): 1. Was the rationale for including or omitting this ph ase clear? 2. Are all required elements of this phase accounted f or? 3. Was this phase temporally cost effective to complet e? 4. Are the products from this phase observable and mea surable? 5. Are the products from this phase consistent with th e overall goal of the instructional application? These questions are concisely answered at the end o f each phase and the results are presented in this chapter. 2. Is LOGIS an effective form of instruction? This research question is addressed in this chapter and also reported in chapter five “Conclusions”. Data from the one-to-one evalu ation, the small-group evaluation, and the field trial were compiled to address this r esearch question. Analysis Phase Report and Reflection The Analysis Phase was very demanding in terms of t ime and planning. The established need for the instruction and the existe nce of goals provided a good starting position and reduced the workload for this phase, b ut the classification of the goals proved to be very difficult. The refining of goals forced the researcher to visualize each


144 step towards the final product in fine details. Wh ile visualization is commonly used, generating a solution path of such detail was not a nticipated. The detailed visualization process was necessary because it was very important to quickly establish what was possible given the available resources, time, and e xpertise. It was very important to project, in detail, what the requirements would be and then make adjustments based on the anticipation of problems – all before classifyi ng the goals. The researcher decided to exclude bar graphs and semi-logarithmic graphs beca use in addition to the lack of course emphasis on these topics, it was hard to visualize the programming code that could implement these graphs and also accommodate cumulat ive and multiple base-line graphs. Based on the level of difficulty experienced during the programming stage, the decision to exclude bar graphs and semi-logarithmic graphs w as in retrospect justified. The Analysis phase not only forced the detailed vis ualization of the actual application, it forced a re-envisioning of the cycl e of instruction, specifically the role of assessment within that cycle. The commitment to th e ADDIE phases led to the use of the KSA classification scheme, and using this scheme me ant that this instruction had to include assessments for knowledge, skills, and atti tudes. In this context, the assessment focuses on the participant, the content, and the ap plication. This is a departure from the norm in that most assessments are seen as important but separate parts of instruction, and in most case, the instruction refers to the content alone. This phase forced the researcher to consider assessment as an integral and indisting uishable part of the instruction cycle, and also an integral and indistinguishable part of the application development cycle. The key point is that the assessments in this study ass ume expanded roles because they


145 highlight, and to an extent mediate, the relationsh ips between the learner and the content, the learner and the application, and the applicatio n and the content. The process of re-envisioning of the instruction cy cle led the researcher to question what type of application would be suitable for this study, and again focusing on what was possible given the available resources. I n this study, instruction included content, practice, and assessment, thus the conclus ion was that the instructional application had to be flexible and easily updatable Flexibility and updateability were important qualities because both the content and th e application itself would be frequently modified and revised based on the result s of the assessments. In this case, flexibility and updateability included the ability to add or remove rules, examples, and practice items or entire modules. This requirement led the researcher to conclude that in addition to being an instructional application, LOG IS should be viewed as an Instructional Engine. The term Instructional Engine is appropriate beca use it accurately conveys the fact that the program should deliver a “class” of instruction. The engine in this case would deliver the graphing instruction, b ut it would also be able to deliver other similar instruction as deemed necessary. This beco mes very important when considering the decomposition of instructional concepts. If th e instructional engine is to deliver instruction on a certain concept, then it must also be able to deliver instruction on the component concepts in the event that the participan t needs additional help. The decision to build an instructional engine had a significant effect on the initial conception of the program. Attempting to build an engine meant that the software program had to be as abstracted as possible, but still focused on the overall inst ructional objective. Abstracting the engine meant that the d esign and implementation of the engine


146 would be as loosely coupled to the instructional co ntent as possible. In essence, the engine would be designed almost independent of the content, allowing for increased modularity, usability and updatability. The idea o f abstraction is not prevalent in the instructional design area but it is a staple of sof tware programming, thus, the theoretical and practical underpinning are established. Analysis Phase critique 1. Was the rationale for including or omitting this ph ase clear? The decision to include this phase was reasonable. Planning is accepted as an appropriate first step in many tasks, and it proved to be extremely beneficial in the case of this study. 2. Are all required elements of this phase accounted f or? The required elements of the Analysis phase were al l necessary and consequently they were all completed. The only caveat is that t here was no need to create goals because these existed along with the original graph ing instructional unit. 3. Was this phase temporally cost effective to complet e? This phase was not temporally cost effective and it consumed an unexpected amount of time. The Analysis phase was completed i n slightly less than six weeks and that accounted for 18% of the total time spent on t he development of the application. Although 18% is a relatively small proportion of th e total time, the researcher did not anticipate that the Analysis phase would take six w eeks. Most of the six weeks were spent on the instructional analysis (Analysis Compo nent 2). The process of conducting the instructional analysis required that the resear cher pay close attention to the long-term design of the application because the results of th e instructional analysis would inform


147 the future design and development of the applicatio n. In essence, extra care was taken during the Analysis phase to anticipate and as much as possible avoid future problems. Several factors could have contributed to the exces sive time taken during this phase. The significant factors appear to be based on particular attributes of the researcher and the nature of the study as opposed to being inh erent to the analysis process. The two most identifiable factors are the experience level of the researcher and role of the researcher in the study, both of which, along with their implications, are discussed in Chapter 5. 4. Are the products from this phase observable and mea surable? The Analysis phase resulted in goals that were obse rvable and measurable. In addition to concrete goals and classifications, thi s phase also defined the time constraints and the participant prerequisites. 5. Are the products from this phase consistent with th e overall goal of the instructional application? The products of the stage are consistent with the o verall goal of the application. Design Phase Report and Reflection The goals generated in the Analysis phase were used as the starting point for the Design phase. They were the basis for the tutorial topics and the basis for the assessments. The activities for each lesson and th e assessments specifications were also developed in this phase. Generating the items for each tutorial proved to be very challenging. The most difficult part was the modification of the existing tutorial content to meet the required


148 length and format. The original tutorial contained 11 tutorial sets and these were reduced to 5 tutorial sets namely: The Control And Measurem ent Of Behavior, The Importance Of Graphing, Behavioral Graphing Concepts, The Cumu lative Graph, and The Multiple Baseline Graph. The Primer and Basic Graphing sets were not derived from the 11 original tutorials, they were created by the resear cher. Table 8 is a comparison of the 11set original graphing tutorial and the 5 correspond ing LOGIS tutorial sets. Table 8 A Comparison of the Original Graphing Tutorials and the Derived LOGIS tutorials Attribute Original Graphing Tutorial Set Derived LOGIS Tutorial Set Total number of sets 11 5 Total number of frames 357 230 Total number of words 11077 7689 Average number of frames per set 32.45 46 Average number of words per set 1007 1537.80 Average number of words per frame 31.03 33.43 Table 8 shows the increases in frame density and wo rd density, confirming the fact that although the number of tutorials sets dec reased from 11 to 5, the density of the content increased as exemplified by the increase of average number of frames per set from 32.45 to 46 frames. Table 9 shows the final distribution and correct se quencing of the LOGIS tutorial sets. The Basic Graphing tutorial was positioned a fter the Primer because the researcher


149 felt that basic graphing should be completed first. This was logically sound, and made the programming of the application slightly easier. Table 9 The Number of Frames in the LOGIS Tutorials and Pra ctice Tasks Tutorial Set Name Number Of Frames Number Of Items In The Practice Task Primer 26 5 Basic Graphing 27 13 The Control And Measurement Of Behavior 26 The Importance Of Graphing 46 Behavioral Graphing Concepts 84 The Cumulative Graph 42 15 The Multiple Baseline Graph 32 44 During the Design phase, the total number of tutori al sets was deemed more important than the length of individual tutorial se ts. In retrospect, the rationale behind placing high importance on the number of tutorial s ets was flawed. During previous semesters, students who completed the original 11-s et graphing tutorial reported frustration at the number of tutorial sets (11 sets ) that they were required to complete. The researcher considered the data and concluded th at it was more important to minimize the number of sets, at the expense of the length of each set. This did not appear to work because participants ultimately reported frustratio n at the number of frames in each


tutorial set, especially the Behavioral Graphing Concepts set, which had 84 frames. These results are reported in this chapter.

The 11 original graphing tutorial sets had a combined total of 357 frames, and the LOGIS tutorial sets and practice tasks had a combined total of 360 frames. Although the number of tutorial sets in LOGIS is less than the number of tutorial sets in the original tutorial, the LOGIS tutorial sets contained more total frames (including practice frames) than the original tutorial. The significance of this situation was not evident until the Development phase, and by that time it was too late to make large-scale changes because of the beta testing deadlines. Table 10 shows a comparison between the original graphing tutorial and the LOGIS tutorial.

Table 10
A Comparison of the Original Graphing Tutorials and the LOGIS Tutorials

Attribute                           Original Graphing Tutorial   LOGIS Tutorial including Practice Tasks   LOGIS Tutorial excluding Practice Tasks
Total number of sets                11                           7                                         7
Total number of frames              357                          360                                       283
Total number of words               11077                        11699                                     9251
Average number of frames per set    32.45                        51.43                                     40.43
Average number of words per set     1007                         1671.29                                   1321.57


Table 10 (continued)

Attribute                           Original Graphing Tutorial   LOGIS Tutorial including Practice Tasks   LOGIS Tutorial excluding Practice Tasks
Average number of words per frame   31.03                        32.50                                     32.69

The original graphing tutorial had more sets than the LOGIS tutorial, but the latter had more frames per set and more words per frame. The relatively high frame density is consistent when the practice tasks are included (51.43 frames per set) and when the practice tasks are excluded (40.43 frames per set). This situation directly resulted in participant frustration, which is reported in this chapter.

The Primer tutorial and the corresponding practice task were created during the Development phase, not the Design phase. This was not an oversight because the necessity of a Primer was evident from the beginning of the study. The Primer was not initially perceived as instruction, thus it was developed separately from the rest of the tutorial sets. In retrospect, this was an error because all tutorial sets should have been viewed as instruction, and consequently the Primer should have been created with a focus on systematic development, similar to the other tutorials.

Design Phase Critique
1. Was the rationale for including or omitting this phase clear?
The Design phase was critical because it resulted in the formation of most of the instructional content. This phase could not have been omitted.
2. Are all required elements of this phase accounted for?


The required elements of the Design phase were all completed.
3. Was this phase temporally cost effective to complete?
This phase took a significant amount of time, but this was expected. The Design phase was completed in about nine weeks, which was approximately 27% of the total time. Most of the time in this phase was spent deconstructing the original tutorial sets and assembling the sets that would be used for LOGIS. The researcher expected a few problems to occur when converting instruction from one form to another, but no problems occurred. The creation of new content and the modification of existing content were expected to be labor intensive, and this proved to be the case. This phase was temporally cost effective, and it would be difficult to argue otherwise given that the phase was expected to take a long time.
4. Are the products from this phase observable and measurable?
The Design phase resulted in the detailed documentation of the goals, the creation of the tutorial sets, the finalization of the instruction sequence, and the creation of assessment specifications. The products are all observable and measurable.
5. Are the products from this phase consistent with the overall goal of the instructional application?
The products of this phase are consistent with the overall goal of the application.

Development Phase Report and Reflection
The interface and the application
The final version of the LOGIS interface does not differ significantly from the proposed design, primarily due to the extensive planning during the Analysis phase.


153 Although the interface design changed from concept to implementation, the changes were superficial and were largely the result of incorpor ating new features into the application. The basic interface components and their interactio ns were maintained from the proposed design (see Figure 4) to the completed application. Figure 11 shows the final version of the LOGIS interface. Figure 11 A view of the final version of the LOGIS interfac e. During the Development phase it became evident that the terms that were being used were insufficient. The references to tutorial s and practice tasks became cumbersome during the programming of the applicatio n. At that point, the researcher decided to change some of the terminologies to bett er reflect the components of especially the interface. The terms Modules and Tasks were introduced as “containers”,


where Module was used as a high-level container for all possible Tasks. This meant that the LOGIS application could now be described as having modules containing tasks, and the tasks were a combination of tutorials and practices. There were 12 modules: Pretest, Pre-survey, Graphing Proficiency Test, Primer, Graphing Basics, The Measurement of Data, The Importance of Graphing, Behavioral Graphing, Cumulative Graphs, Multiple Baseline Graphs, Post-survey, and Posttest. The tutorials created during the Design phase were embedded within their respective modules and were paired with at least one form of practice. The advantage of using the module concept as a high-level container was that it reflected the theme that everything within LOGIS is a part of the instruction. Using the simple concept of modules allowed non-traditional forms of instruction, for example tests and surveys, to be implemented on the same conceptual and physical level as traditional forms of instruction, for example tutorials.

Practice is one of the fundamental principles of LOGIS. During the Development phase, the researcher decided that the participants should have the ability to practice graphing even if the module they were completing did not have a Guided Practice task. This led to the development of the Freelance Practice. The Freelance Practice, labeled Free Practice on the interface, gave the participant the opportunity for unguided practice during any non-survey or non-test module. The Freelance Practice task was available after the participant completed a tutorial, but if a Guided Practice task was present, then the Freelance Practice became available after the Guided Practice task was completed. The purpose of the Freelance Practice task was to provide unguided practice after the completion of each module's formal instruction.
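The container relationship can be pictured with the following sketch. It is only an illustration of the module/task decomposition described above, using hypothetical class, record, and enum names rather than the actual LOGIS source.

import java.util.ArrayList;
import java.util.List;

// A minimal sketch (hypothetical names): a module holds an ordered list of tasks,
// and a task may be a tutorial, a guided practice, a free practice, a test, or a survey.
public final class ModuleSketch {

    enum TaskType { TUTORIAL, GUIDED_PRACTICE, FREE_PRACTICE, TEST, SURVEY }

    record Task(String name, TaskType type) { }

    static final class Module {
        final String name;
        final List<Task> tasks = new ArrayList<>();

        Module(String name) {
            this.name = name;
        }

        Module add(Task task) {
            tasks.add(task);
            return this;
        }
    }

    public static void main(String[] args) {
        Module cumulativeGraphs = new Module("Cumulative Graphs")
                .add(new Task("The Cumulative Graph", TaskType.TUTORIAL))
                .add(new Task("Cumulative Graph Practice", TaskType.GUIDED_PRACTICE))
                .add(new Task("Free Practice", TaskType.FREE_PRACTICE));
        System.out.println(cumulativeGraphs.name + " has " + cumulativeGraphs.tasks.size() + " tasks");
    }
}

Keeping the sequencing logic in terms of generic modules and tasks, rather than specific graphing content, is consistent with the instructional engine view described in the Analysis phase.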


155 The final LOGIS interface was comprised of seven ar eas: Information, Navigation, Work, Options, Feedback, Instructions, and Tools (see Figure 12). Figure 12. The significant areas of the LOGIS interface. The Information area displayed the current module, the score on the current task, and the frame count for current task. The frame count, wh ich is an indication of progress, was displayed both textually and graphically. The Navi gation area displayed the modules and tasks. The Work area displayed the contents for th e tutorials, the tests, the surveys, and the initial instructions for each task. The Work a rea also doubled as the graphing workspace during the Guided Practice and Free Pract ice tasks. The Options area allowed the participant to change the color theme, the font style, and the text size. The Feedback area provided textual feedback to the participant i f a response was incorrect, prompting the participant to “Try Again”. The Instructions a rea provided explicit directions on what


156 should be done at that point in the instruction. T he Tools area provided the participant with the tools to respond. Tutorials and tests req uired a text box for alphanumeric input (see Figure 12) and the practice tasks required gra phing tools for graphical input, as visible in Figure 13. Surveys required a slide bar that corresponds to numeric input (see Figure 14). Figure 13. A view of the LOGIS interface showing the grid and graphing tools.


157 Figure 14 A view of the LOGIS interface showing a survey it em and the slide bar. The functionality of LOGIS can be best described by looking at the major features that were implemented: The application environment LOGIS was developed as a stand-alone Java applicati on. The original specification called for a Java Applet, but during the Development phase it became apparent that browser restrictions and the a vailable browser space would cause significant problems. Programming LOGIS as a stand-alone application allowed the researcher to control every aspect of t he display environment. As defined in the proposed specifications, a Micros oft SQLServer database was used as the container for both input and output dat a.


158 The LOGIS interface was designed for a minimum scre en resolution of 1024 pixels wide and 768 pixels high. A rudimentary form of versioning was used to ensure that participants were always using the most current form of LOGIS. After a successful login, the current application version was queried from the da tabase and then compared to the participant’s version. If the versions did not match, the application halted and then a message and URL address were displayed. The message prompted the participant to download the current LOGIS version a t the URL address. Login procedures Login was based on a username and password. In thi s study, the username was the participant’s university login identification a nd the password was the participant’s student number. The participant’s completed tasks were queried from the database after a successful login, after which the appropriate links and icons in the navigation area were enabled. This session management ensured that participants could return to their current module in the event that they needed to restart the application. Participants had access to only the current module and previously completed modules. This ensured that the modules were comple ted sequentially. Options The color theme option controlled the background co lor, the text color, and the highlight color for buttons and text. Changing the color theme applied the selected color theme to the entire interface. The font option allowed the participant to select a mong Monospaced, Serif, and


159 SanSerif fonts. These fonts had differing sizes an d letter spacing but they did not decrease readability. This option gave the partici pant an opportunity to use their preferred font. The text size option allowed the participant to cha nge the size of the text in the Work Area and in the Instructions Area. These area s are labeled in Figure 12. Tutorials When a module was selected by a participant, the tu torial task for that module automatically started. Exhibits were originally envisioned to be in a tabb ed (embedded) window separate from the tutorial frame, but this was chan ged in an effort to make the LOGIS interface less demanding on the participant. The exhibits were displayed in the Work Area beneath the frame text, as seen in Figure 11. Participants used a text box to respond to frames. The text input was controlled using regular expressions. This prevents participa nts from entering illegal and possibly harmful text. Each frame required a response. Feedback in the fo rm of “Try Again” was presented in the Feedback Area (see Figure 12) if a response was incorrect. If the participant had exhausted the maximum number of res ponse attempts allowed for that frame, the correct answer was displayed and th en the learner was required to type the correct answer to proceed to the next fram e. Practice tasks The participant read the current instruction from t he Instructions Area, and then used the mouse to draw or place shapes on the grid (see Figure 13).
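The regular-expression control on frame responses mentioned above can be illustrated with a minimal sketch. The specific pattern shown here (letters, digits, spaces, and a few punctuation characters) is an assumption for illustration only and is not the actual pattern used in LOGIS.

import java.util.regex.Pattern;

// A minimal sketch (hypothetical pattern and names): accept only short answers made of
// letters, digits, spaces, and basic punctuation, rejecting anything else.
final class ResponseFilter {

    private static final Pattern ALLOWED = Pattern.compile("[A-Za-z0-9 ,.'()-]{1,80}");

    static boolean isAcceptable(String response) {
        return response != null && ALLOWED.matcher(response).matches();
    }

    public static void main(String[] args) {
        System.out.println(isAcceptable("two axes"));                  // true
        System.out.println(isAcceptable("<script>alert()</script>"));  // false
    }
}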


160 Participants could draw solid or dashed lines; plac e points, text, or scale breaks; move any drawn object; erase specific objects; or t hey could erase all the objects on the grid. All objects drawn by the participant were initially colored black. If the object was incorrect, feedback in the form of “Try Again” was presented in the Feedback Area (Figure 12) and the object’s color was changed to red. If the participant had exhausted the maximum number of response attempts a llowed for that frame, the correct object was displayed in colored blue as a h int. The participant then had to draw the correct object to proceed to the next fram e. LOGIS used the location of pixels on the drawing ar ea to determine the position of the mouse pointer. The participant, however, di d not need to be precise when drawing objects. During this study the fidelity of the answers was set to 10 pixels, thus an object only needed to be within 10 pixels of the correct answer to be considered correct. For example, if the tutoria l asked the participant to place a point at pixel position (100,250), correct answers would be defined by the square: (90,240), (110,240), (110,260), and (90,260). This reduced the need for precise mouse-manipulation on the part of the learner. To avoid repetition, some graph elements were autom atically populated. For example, the participant was asked to place the fir st two hatch marks on an axis, after which the rest of the hatch were automaticall y populated. Freelance Practice did not have any tutorial or pr actice task associated with it. This area provided the participant with the opportu nity to interact freely with the application. The Freelance Practice did not evalua te the participant’s responses,


161 consistent with the idea of free interaction. Tests and surveys A perfect score on the Graphing Proficiency test ex empted the participant from the Graphing Basics module, although the Graphing B asics module could still be completed if the participant wished. The current score was not displayed when the partic ipant was taking a test. The survey slide (see Figure 14) presented an oppor tunity to collect the participant’s attitude with a high level of precisi on. The extreme numerical values of the slide were 1 and 1000 and they were l abeled strongly disagree and strongly agree respectively. The slide was centere d by default and this corresponded to a numerical value of 500. Participants had to confirm survey responses that w ere truely neutral. If the participant submitted a neutral response (the slide is in the middle position), a confirmation dialog box would popup and require tha t the participant confirm the neutral selection. This prevented participants fro m repeatedly clicking the submit button without actually giving their opinions. The original development plan for LOGIS (see Figur e 9) was significantly changed during the Development phase. Figure 15 sh ows the actual development plan that was used.


Figure 15. A flowchart of the actual development process.

The rationale behind changing the development plan was focused on the definition of evaluation. The originally planned program evaluation of LOGIS included


163 one-to-one evaluation, small-group evaluation, and a field trial. These evaluations were slated to occur during the Evaluation phase, but af ter careful consideration no clear reason emerged as to why these three evaluations co uld not be implemented in the Development phase and repurposed as a form of forma tive evaluation. The benefits of this change in perspective seemed to heavily outwei gh any possible negative effects. Firstly, the original beta tests were fully contain ed within the three evaluations. The new plan would provide three opportunities for evaluati on instead of the proposed two beta tests opportunities. Secondly, the new plan more a ccurately reflected the idea that although LOGIS is a singular application, its inter face and its content could be seen as independent individual components. In essence, the new plan reflected the idea of LOGIS as an instructional engine, where the applica tion and the interface are somewhat decoupled from the content. Thirdly, the new plan better modeled the correct sequence of development events. Unlike the original plan, t he new plan clarified that the creation and revision of the content and the interface occur red simultaneously where the content affected and was affected by the interface. Finall y, the new plan highlighted the high level of interaction between the researcher and the participants that should occur early in beta testing. In this case, the one-to-one evaluat ion presented an opportunity to address issues immediately, allowing instant revision of th e content and the interface while the application was being tested. The population described in “Evaluate Component 2” in Chapter 3 formed the pool of students who were eligible to complete the evaluation tasks, and 47 students volunteered to evaluate LOGIS. The participants we re divided into 3 groups. The oneto-one evaluation group contained 3 participants, t he small-group evaluation group


164 contained 13 participants, and the field trial grou p contained 31 participants. Participants were assigned to groups based on their current cour se grade and the specifications outlined in the Evaluation phase of the proposed de velopment (chapter 3). In certain cases, however, participants were placed in a parti cular group because it was more convenient for the participant. This did not prese nt a problem because the group did not need to be rigorously defined because the purpose o f this evaluation was to refine the instruction, not to experimentally investigate the effect of the instruction. Tutorial Development The tutorial tasks were developed as proposed in th e Design phase, with a small but significant change to the manner in which frame s were presented. During the programming of the application and after consulting with the content expert, a full professor who specializes in the content area and i s responsible for the behavioral courses, the researcher decided to introduce a feat ure that would guarantee that each participant responded correctly to each frame. Un der normal conditions, each frame is assigned a number of attempts that determines how m any mistakes the participant can make before the frame is considered wrong. For exa mple, a frame that is assigned two attempts allows one mistake, and then the learner m ust make the correct response on the second attempt or no points are earned for that fra me. If the second attempt is wrong, the correct answer is displayed and then the next frame is presented. The new feature, however, required that the participant enter the co rrect response even if the second attempt was wrong thus the instruction only proceed ed when the participant entered the correct response. Under the new scheme, when the m aximum number of attempts was exhausted the correct answer was displayed but the participant had to type the correct


response to continue. The benefit of this feature was that the learner came in contact with the correct response on each frame.

Under normal circumstances, each frame of each task would be examined for revision based on the percentage of participants who responded correctly to that frame. Frames that did not have a high percentage of correct responses, for example frames that were answered correctly by less than 90% of the participants, would be examined for errors and revised. During this study, however, the participant had to respond correctly to each frame before instruction could continue; thus, in essence, every participant answered every frame correctly, although points were not earned if the participant exceeded the maximum number of attempts. This design decision was pedagogically sound, but it made the analysis of the resulting data difficult. It was now difficult to determine if many participants entered wrong responses or one participant entered many wrong responses for a given frame. Given this situation, each frame had to be examined individually to determine if the frame contained errors that affected many participants or if the wrong responses came from a few participants.

The tutorial task data, which excluded the practice data, were examined in a systematic manner starting with frames that contained the highest number of wrong responses, that is, frames with high error rates. Table 2 in Appendix L (Table 1 is the legend for Table 2) shows the number of wrong responses for each frame of each task and is the basis for the analysis of the tutorial task data. Frames that had high error rates were examined for patterns that might indicate why participants entered wrong answers. If no discernible pattern could be seen, the frame was marked for revision or deletion.
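The screening described above can be pictured as a simple pass over per-frame wrong-response counts. The following is a minimal sketch with hypothetical names; as noted above, repeated attempts by a single participant can inflate a frame's count, so flagged frames still require individual review.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// A minimal sketch (hypothetical names): flag frames whose wrong-response count
// exceeds 10% of the participant count, a first-pass approximation of the
// 90%-correct guideline described above.
final class FrameScreening {

    static List<String> framesToReview(Map<String, Integer> wrongResponsesByFrame, int participants) {
        int allowedWrong = (int) Math.floor(participants * 0.10);
        List<String> flagged = new ArrayList<>();
        for (Map.Entry<String, Integer> entry : wrongResponsesByFrame.entrySet()) {
            if (entry.getValue() > allowedWrong) {
                flagged.add(entry.getKey());
            }
        }
        return flagged;
    }

    public static void main(String[] args) {
        Map<String, Integer> wrong = Map.of("frame A", 12, "frame B", 2);
        System.out.println(framesToReview(wrong, 35)); // flags only "frame A"
    }
}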


166 The reasons why certain frames had high error rates can be divided into two categories, participant generated and frame generat ed. Participant generated errors were errors caused by the participant. Examples of thes e errors included grammar mistakes, spelling mistakes, and using the wrong tense. Fram e 3 of the Basic Graphing tutorial task (task 6 in Appendix L) is an example of a grammar m istake where some participants entered the response “two axis” as opposed to the c orrect response “two axes”. Spelling mistakes were the single most common reason why par ticipants got frames wrong. Words like perpendicular, labeled, and cumulative, are just a few of the words that participants regularly misspelled. The second cate gory, frame generated, refers to errors resulting from the frame’s content. In most cases, participants responded incorrectly to the frames that were poorly primed or excessively c omplex. The same result occurred when frames contained grammar, spelling, or content errors. Frame 6 of the Behavioral Graphing Concepts tutorial task (Task 10 in Appendi x L) is an example of a frame that had no discernable pattern of errors because it was poorly primed, thus it needs to be revised or removed from the tutorial task. Knowledge Assessment Development The knowledge assessment was created using the met hodology outlined in chapter three. The final version of the assessment contained 40 items, 20 alternate-choice items, and 20 short-answer items (see Appendix M). As planned, 70% of the items were higher order items. Table 11 shows the distribution of the items.


167 Table 11 The Distribution of the Knowledge Assessment Items Tutorial Weight Short Answer Alternate Choice The Control And Measurement Of Behavior 20% 4 4 The Importance Of Graphing 20% 4 4 Basic Graphing 0% 0 0 Behavioral Graphing Concepts 20% 4 4 The Cumulative Graph 20% 4 4 The Multiple Baseline Graph 20% 4 4 The knowledge assessment items were created by the researcher then examined by a member of a research consultancy group at the researcher’s university. The consultant made suggestions regarding the wording o f certain items, and recommended that multiple-choice items be excluded because of t ime constraints. Based on the consultant’s recommendation, no multiple-choice ite ms were included in the knowledge assessment. The items were tested during the one-to-one evaluat ion sessions and mistakes including typographical errors and grammatical erro rs were corrected before the statistical analysis of the data. This meant that participants were not penalized for spelling mistakes or simple grammatical errors. Th e small-group evaluation and field trial evaluation were conducted after the one-to-on e evaluation, and they included the


168 refined 40 items. The items, however, were not cha nged during the small-group or field trial evaluations. Of the 47 participants who participated in the stud y, 35 completed the final version of the Knowledge Assessment. The remaining 12 participants received assessments that were either revised or the applica tion returned an error during their assessment. These 12 participants included the thr ee participants chosen for the one-toone evaluation, and the next 9 participants who sta rted the application from the smallgroup evaluation group. Table 12 shows data that d escribes the overall performance on the posttest. Table 12 A Summary of the Statistical Data from the Posttest Source n Min Max Mdn M SD skewness kurtosis Posttest 35 37.50 85.00 62.50 62.00 11.08 -0.44 SE=0.40 0.27 SE=0.78 The posttest scores were approximately normally dis tributed based on the values for skewness and kurtosis. As a guide, if the skew ness and kurtosis values fall within twice their Standard Error ranges from positive to negative, then the distribution is considered symmetrical with respect to normal distr ibution. Twice the skewness standard error (0.40) resulted in a range of -0.80 to 0.80. The posttest skewness (-0.44) fell within the -0.80 to 0.80 range, thus the distribution was symmetrical in terms of the skewness. The kurtosis (0.27) was within twice its standard e rror’s (0.78) range of the -1.56 to 1.56,


169 thus the distribution was neither overly peaked nor flat. Figure 16 is a histogram of the posttest scores showing the distribution of the gra des. Figure 16. A Histogram of the posttest scores showing the dis tribution of the scores. The posttest mean ( M = 62.00, SD = 11.08) indicated that performance on the posttest was generally unsatisfactory. Three extre mely low scores, 37.50%, 37.50%, and 40.00%, decreased the overall mean, but these outli ers did not explain the overall poor scores. The low posttest scores were also a source of concern because half the posttest was Alternate-Choice items consequently it was poss ible that participants received higher than normal scores by guessing the correct answers on the Alternate-Choice items. The posttest was a Criterion Referenced test and normal ly a highly negatively skewed distribution is the preferred outcome. Considering the nature of the test and the test


items, a low mean score may indicate a failure of the instruction, a failure of the testing instrument, or a combination of both. In this case, however, it must be noted that this was only the initial stage in developing a valid and reliable Knowledge Assessment. Under real-world conditions, the 40-item pool would be reduced using, for example, the Item Difficulty Index and the Item Discrimination Index, thus the resulting data would be a more accurate reflection of the participants' mastery of the instruction.

The internal consistency reliability, the Cronbach's Alpha, of the posttest was .73. Appendix N describes the data that was generated from the posttest. In addition to the Item Difficulty Index, the Point-Biserial Correlation (rpb) was used to analyze each test item. The Point-Biserial Correlation (Pearson Product Moment Correlation) indicates the relationship between a participant's score on an individual item (dichotomous) and the participant's overall test score (continuous). It is normally used to explore how an item discriminates among participants, and it is a useful statistic for tests designed to spread participants, for example, Norm-Referenced tests. In this case, however, it was used with the Item Difficulty Index to provide additional insight into items that may be problematic. Items with negative Point-Biserial Correlations, for example -.08 for item number 5 (see Appendix N), may not be measuring the correct construct and should be examined and revised or eliminated. Based on the Item Difficulty Indexes, the Point-Biserial Correlations, and the nature and wording of the items themselves, items 5, 6, 13, 18, 24, and 36 in Appendix N were the first items selected for review. In addition, the selected items had Corrected Item-Total Correlations of -.16, -.13, .30, .28, .33, and .11 respectively, making them prime candidates for revision because they appeared to be negatively or weakly correlated with the overall test. It must be noted that, in addition to

It must be noted that, in addition to having the largest negative Point-Biserial Correlation and Corrected Item-Total Correlation, item number 5 also had the greatest effect on the statistical reliability of the posttest. If item number 5 was removed, the Cronbach's Alpha would increase from .73 to .74. Although some items have moderate difficulty indexes, for example items 1, 3, 8, 15, and 31, they could be useful because they discriminate between individual participants.

The Knowledge Assessment was administered as the pretest and as the posttest, where the posttest immediately followed the instructional content. Of the 35 participants who completed the final version of the Knowledge Assessment, 29 participants completed matching pretests and posttests, that is, the pretest and posttest had identical items. The remaining 6 participants completed posttests that were edited after the pretest was completed. Table 13 shows data that describes the overall performance on the pretest and posttest (matching items) by the 29 participants.

Table 13
A Summary of the Data from Participants Who Completed Matching Pretests and Posttests

Source     n    Min     Max     Mdn     M       SD      Skewness          Kurtosis
Pretest    29   27.50   57.50   42.50   43.36   8.22    -0.22 (SE=0.43)   -0.88 (SE=0.85)
Posttest   29   37.50   85.00   65.00   62.41   11.96   -0.54 (SE=0.43)   0.01 (SE=0.85)

The Cronbach's Alpha of the pretest was .50 and it was .73 for the posttest. The pretest had a low reliability alpha, but that value has to be considered with caution because it could have been the result of random guessing. Using twice the skewness and kurtosis standard error ranges as guides, the pretest and posttest scores were determined to be normally distributed. The skewness of the pretest (-0.22) was between -0.86 and 0.86, and its kurtosis (-0.88) was between -1.69 and 1.69. The skewness of the posttest (-0.54) was between -0.86 and 0.86, and its kurtosis (0.01) was between -1.69 and 1.69. Based on the standard error ranges, neither distribution departed significantly from symmetry with respect to skewness and kurtosis. Figure 17 shows the histograms of the pretest and the posttest scores of the 29 participants and provides an opportunity to compare the participants' performance on the tests.

Figure 17. Histograms of the pretest and posttest showing the distribution of the scores.

Participants were expected to perform poorly on the pretest, and although the mean (M = 43.36, SD = 8.22) was low, it was higher than expected. The possibility of inflated scores should have been anticipated because, similar to the posttest, the pretest was 50% Alternate-Choice items; therefore, it is possible that participants received higher than normal scores on the pretest simply by guessing the correct answers on the Alternate-Choice items.

Similar to the posttest, the pretest was only an initial screening of test items and it was expected to return more consistent results as the items were revised.

The 29 participants who completed the matched pretest and posttest were a subset of the 35 participants who did not experience problems while completing the posttest. The posttest data of the 29 participants were very consistent with the data from the overall 35-participant group. The outliers, visible in Figure 18, probably had a greater influence on the overall mean score of the 29-participant subset because of the smaller sample size. These outliers did not entirely explain the low mean (M = 62.41, SD = 11.96), but they provided further evidence that the Knowledge Assessment might need refining.

Figure 18. Boxplots of the scores for the 29 participants with matched pretests and posttests.

The means for the pretest and the posttest were compared using the paired-samples t test procedure. This procedure assumes that the mean differences are normally distributed and that each variable pair is observed under the same conditions, although the variances of each variable can be equal or unequal. A visual inspection of the histograms in Figure 17 and the boxplots in Figure 18 revealed that the data were normally distributed. The normality of the scores was confirmed by the skewness and kurtosis, where each was within the bound of twice its standard error.

Each pretest-posttest pair did not occur under exactly the same conditions. Some participants completed the pretest and posttest locally in the computer laboratory while others completed the tests remotely. This difference is significant but acceptable because this is an initial analysis that will be used to refine the Knowledge Assessment.

The paired-samples t test revealed a significant difference between the pretest scores and the posttest scores, t(28) = -10.30, p < .01 (alpha = .01), with a large effect size, d = 1.91. The effect size (Cohen's d) was the mean difference between the item pairs divided by the standard deviation of the item pairs. The one-tailed p-value (p < .005) indicated that the mean posttest score (M = 62.41, SD = 11.96) was significantly higher than the mean pretest score (M = 43.36, SD = 8.22). The data provided an opportunity for an initial review of the usefulness of the LOGIS application. There was an increase in the scores from pretest to posttest on the Knowledge Assessment, but LOGIS cannot be assumed to be the cause of the increase. What the data do suggest, however, is that the instructional application was on the right track. A 10% increase from pretest to posttest was established as the criterion for educational significance, thus an increase of 19.05% confirms that the results were educationally significant within the defined criterion. Although the results were significant, they did not represent acceptable performance because the posttest mean of 62.41% did not represent a passing grade. These results should be considered within the context that the evaluation was not experimental and that the assessment instrument was in the initial stages of being refined. It is expected that as the instruction and the assessment instrument improve, the results will be more significant and useful.
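The paired-samples comparison reported above can be reproduced with a few lines of code. This is a hedged sketch rather than the actual analysis script; it assumes the matched pretest and posttest percentages are available as two equal-length arrays.

    import numpy as np
    from scipy import stats

    def paired_comparison(pretest, posttest):
        pretest = np.asarray(pretest, dtype=float)
        posttest = np.asarray(posttest, dtype=float)
        diff = posttest - pretest
        t_stat, p_two_tailed = stats.ttest_rel(posttest, pretest)  # paired-samples t test
        cohens_d = diff.mean() / diff.std(ddof=1)  # mean difference / SD of the differences
        return {
            "t": t_stat,
            "df": len(diff) - 1,
            "p_one_tailed": p_two_tailed / 2.0,  # valid when the observed direction matches the hypothesis
            "cohens_d": cohens_d,
            "mean_gain": diff.mean(),  # percentage-point increase from pretest to posttest
        }

For reference, the reported results above (t(28) = -10.30, d = 1.91) correspond to the mean gain of 19.05 percentage points between the pretest and posttest means.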

Skills Assessment Development

There were significant differences between the proposed development of the skills items (Development Component 2 in chapter three) and the actual development of the items. The initial steps of the modified Crocker and Algina (1986, p. 66) 10-step process were implemented. The primary purpose and the representative behaviors were identified, the specifications were generated, and the items were created and then reviewed by the content expert. The Skills Assessment originally contained four items: two items required that the participant draw Cumulative graphs, and the other two items required that the participant draw Multiple Baseline graphs. After considering the time it would take for participants to complete the Skills Assessment, the decision was made to remove one of each type of graph. The resulting Skills Assessment had two items, one Cumulative graph item and one Multiple Baseline graph item (see Appendix O).

The Skills Assessment was only tested during the one-to-one and the small-group evaluations, and it was not delivered before the instruction as a pretest. The decision to forgo giving the Skills Assessment as a pretest stemmed from the concern that the post-instruction Skills Assessment results would be significantly influenced by a Skills pretest. The original flowchart of this study would have accommodated a pre-Skills test because sufficient time would have elapsed between the pretest and the posttest to minimize the testing threat. Changes in the overall design of the study and time constraints prevented the optimal pre-post design.

Validity and reliability studies were limited to examination by the content expert, thus the items did not change during the different evaluations. The original study design called for multiple raters to be used and the Intraclass Correlation Coefficient (ICC) to be calculated.

In actuality, only one rater was used, and the ICC was not calculated. The change was made based on the advice of the content expert. The rationale was that the Skills Assessment contained items that the participant was required to know after completing the instruction. Thus, the aim was not to generate skills items that would distribute the participants; rather, the items reflected the standard that participants should achieve.

The Skills Assessment was graded by the researcher using the point allocation scheme proposed in Table 7 and a rubric. The rubric was developed by the researcher and it consisted of 11 points that were common to both the Cumulative and the Multiple Baseline graphs, and 2 points that were specific to the Multiple Baseline graph. The rubric covered the following points:

1. The ratio of the X and Y axes lengths is appropriate.
2. The X-axis is scaled appropriately.
3. The Y-axis is scaled appropriately.
4. The units on the X-axis are labeled correctly.
5. The units on the Y-axis are labeled correctly.
6. The X-axis is labeled correctly.
7. The Y-axis is labeled correctly.
8. The data points are plotted correctly.
9. The data points are demarcated correctly.
10. The data points are connected using straight lines.
11. The graph has an appropriate title.
12. The conditions are labeled correctly (Multiple Baseline graph only).
13. The conditions are separated correctly (Multiple Baseline graph only).

Consistent with the proposed design, participants were graded based on the rubric, after which the score was converted to a percentage. A total of 16 participants completed the Skills Assessment. These participants were from the one-to-one evaluation group and the small-group evaluation group. Table 14 is a summary of the 16 participants' grades on the Skills Assessment.

Table 14
A Summary of the Statistical Data from the Skills Assessment

Source              n    Min     Max     Mdn     M       SD      Skewness          Kurtosis
Cumulative          16   39.39   93.94   77.27   75.76   13.78   -1.12 (SE=0.56)   2.02 (SE=1.09)
Multiple Baseline   16   0.00    97.44   67.95   61.21   29.35   -0.59 (SE=0.56)   0.46 (SE=1.09)

The overall performance on the Cumulative Graph task was average (M = 75.76%, SD = 13.78), while performance on the Multiple-Baseline Graph task was below average (M = 61.21%, SD = 29.35). The boxplots in Figure 19 show the distribution of the Skills Assessment grades.

Figure 19. Boxplots of the Skills Assessment scores.

In an effort to determine the specific areas where participants experienced difficulty, the average score for each item on each rubric evaluation point was calculated. Table 15 shows the average score for each item on the Cumulative Graph task and the Multiple-Baseline Graph task, where 3 is the maximum number of points for each item.

Table 15
The Average Score on each Rubric Evaluation Point for each Skills Assessment Graph

Source              1     2     3     4     5     6     7     8     9     10    11    12    13
Cumulative          1.7   1.9   1.6   2.9   2.6   2.9   2.8   1.6   2.6   2.7   1.7
Multiple Baseline   1.7   1.4   1.8   2.1   2.4   2.1   2.1   1.7   2.4   2.2   1.5   1.3   1.1

The averages of the items revealed several areas where participants had difficulty. Participants performed poorly on items 1, 2, 3, 8, and 11 on both the Cumulative Graph task and the Multiple-Baseline Graph task. They also performed poorly on items 12 and 13 on the Multiple-Baseline Graph task. The data revealed that participants had difficulty with the axes lengths and scales (items 1, 2, and 3), and with the graph titles (item 11). Most importantly, participants had difficulty plotting a correct graph (item 8), and in the case of the Multiple-Baseline Graph task, they had difficulty determining and displaying the required conditions. These results suggest that the instruction needs to be revised to address these issues. The ideal response would be to optimize the current instruction as opposed to increasing its length. The rubric item averages revealed that participants were only moderately successful at the non-trivial aspects of graphing. This result was consistent with the feedback that suggested that the instruction was too long and covered too many different aspects of graphing. It must be noted that participants could get high scores even if they plotted incorrect points on their graphs.

Considering that the instructional content and the assessment rubric were both focused on the entire process of creating a graph, a participant could receive a high final score if they correctly constructed the axes and labels but plotted the wrong points. The decision to make the weight of item 8 (correctly plotting points) the same as the other aspects of the graphing seemed reasonable at the time. In a more realistic setting, the concept of what constitutes a correct graph would be different, and in that scenario the rubric would reflect the increased importance of plotting the correct points.

The overall average for the Skills Assessment was 67.71% (SD = 15.46%). Chapter three established the criterion for effectiveness as a 10% increase from pretest to posttest, but because the Skills Assessment was not administered before the instruction, the criterion cannot be evaluated. The usefulness of a pretest for the Skills Assessment is questionable because even if the results were statistically and educationally significant, an aggregate posttest score of 67.71% still did not represent a passing grade.

There are many possible reasons for the poor performance on the Skills Assessment. The instruction, the assessment instrument, and the rubric are all areas that should be examined for potential problems. The first area of concern, however, was the length of the instruction. Participant feedback suggested that the single biggest issue was that the instruction was too long, and this might consequently have resulted in sub-par performance.

Survey Development

There were significant differences in the proposed design of the survey and the actual design. Most of the changes were the result of the overall change in the design of the study.

The original proposal involved a more systematic refinement process where items would be revised or eliminated in tandem with the development of the LOGIS application. In reality, the survey was created, refined, then implemented, thus the items did not change during the one-to-one evaluation, the small-group evaluation, or the field trial.

As proposed, the survey comprised three parts: attitude towards graphing, attitude towards the interface, and attitude towards the LOGIS application. The original survey created by the researcher contained 14 items: 5 items focused on attitude towards graphing, 4 items focused on attitude towards the interface, and 5 items focused on attitude towards the LOGIS application. The 14 items were evaluated by nine students enrolled in a graduate level measurement class at the researcher's university. The students were given a paper copy of the survey items by their professor and they reacted to the items, making corrections, noting observations, and making suggestions on the paper containing the items. The students detected several typographical errors and noted that certain sentences had to be revised because they contained technical language that might be unfamiliar to the participants. They also highlighted other issues including tense shifting between items, items that were wordy, items that seemed to focus on ability and not attitude, and items that asked the participant to make a choice but did not provide alternatives from which to choose. Some students questioned the use of negative statements, but did not cite the negative statements as errors.

The revised items were then examined by an advanced doctoral student in the measurement and evaluation department at the university. The doctoral student commented on the revised items and confirmed some of the concerns raised by the previous nine reviewers. The doctoral student recommended that the survey be extended to include items related to specific aspects of the interface and specific aspects of the application. The rationale was that participants would find it difficult to respond to some items if they, for example, liked the icons but disliked the buttons. The doctoral student suggested that the survey would not be able to capture details regarding participant preferences because the survey did not contain enough items. Given the nature of the study and the time and resources needed to create and validate a larger survey, the researcher decided to forgo expanding the survey and instead concentrate on improving the current items.

The revised items were then examined by a member of a research consultancy group at the researcher's university. The consultant confirmed the doctoral student's concern regarding the number of items needed to fully address the interface and the application, but was comfortable with the survey given its purpose. The consultant identified some errors and made suggestions regarding the structure of certain items.

Based on the recommendations from the doctoral student and the research consultant, the survey was reduced to 11 items. Of the 11 survey items, 4 focused on attitude towards graphing, 3 items focused on attitude towards the interface, and 4 items focused on attitude towards the LOGIS application. Negatively worded items (3, 7, and 11) were kept because their advantages were considered to outweigh their disadvantages, but they were recoded during analysis to reflect a positive response. Table 16 shows the final survey items and their categories.

Table 16
The Items in the Final Version of the Survey

Attitude towards graphing
  1. It is important to know how to graph data.
  2. Graphing is a useful skill to know.
  3. I avoid graphing data whenever possible.
  4. In the future I would create a graph to solve a problem when it is appropriate and useful.

Attitude towards the interface
  5. The interface is easy to use.
  6. In the future I would use this interface if it were available.
  7. The interface is difficult to use.

Attitude towards the application
  8. I think the application is useful.
  9. This application helped me learn about graphing.
  10. In the future I would use this application if it were available.
  11. The application is unhelpful.

After considering the suggestions of the nine graduate students, the researcher concluded that participants might not be familiar with the terms interface and application. The researcher decided to precede each item in those categories with a small descriptor.

"The word interface refers to the text buttons, icons, etc. that help you interact with a software program." preceded items in the attitude towards the interface category, and "The word application refers to the interface and the instructional content of a software program." preceded items in the attitude towards the application category.

Consistent with the proposal, the survey was delivered using a digital slide. Instead of the proposed 1.000 to 5.000 scale, the slide was scaled using integers from 1 to 1000 with incremental steps of 1 unit. There is a loss of precision when moving from 4000 discrete steps (1.000 to 5.000) to 999 discrete steps (1 to 1000), but the trade-off was acceptable because it was easier to program, manage, and interpret integers than decimals. It must be noted that the decision to use the 1 to 1000 scale was arbitrary.

The survey was originally designed to have one item with an unstructured response format, but that feature was omitted because of programming difficulty. The ability to capture comments and free-form reactions would have been a good feature, but the same data were gathered during the one-to-one evaluations and the small-group evaluations. The major validity threat to the instrument was the Location threat. Not all participants completed their survey in the computer lab, thus there is a possibility that their location influenced the results. This threat was not considered to be significant and no effort was made to minimize the potential effects.

The survey was delivered before and after the instruction, that is, participants experienced a pre-survey and a post-survey. The pre-survey contained the 4 items related to attitude towards graphing and the post-survey contained 11 items covering all three survey categories. Of the 47 participants who participated in the study, 45 successfully completed both the pre-survey and the post-survey.

Each of the remaining 2 participants returned data that were missing one response, thus all of their survey responses were excluded from the survey analysis.

The survey responses were based on an interval scale from 1 to 1000 with increments of 1 unit. This design is significantly different from the more common Likert scale design. Surveys that use the Likert scale return data that are ordinal in nature, but LOGIS was designed to return interval data. The researcher envisioned that using an interval-level scale would provide participants with greater control over their responses. The idea was that participants could fine-tune their responses and ensure that the survey slider was at the position that accurately reflected their attitude. The distinction is important because the level of measurement significantly influences how data are analyzed and subsequently interpreted. The decision to use an interval scale from Strongly Disagree (1) to Strongly Agree (1000) was based on the rationale that, at minimum, the data could be converted to the ordinal 5-point Likert scale by collapsing the data into 5 equal ranges. Where possible, both ordinal and interval analyses are included. To obtain ordinal data, the interval range was divided into five equal sections and then assigned a value based on the Likert scale. Table 17 shows the interval ranges that correspond to the Likert scale.

Table 17
The Likert Scale and Corresponding Interval Ranges

Likert Label        Likert Scale   Interval Range
Strongly Disagree   1              1 to 200
Disagree            2              201 to 400
Neutral             3              401 to 600
Agree               4              601 to 800
Strongly Agree      5              801 to 1000
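Collapsing the 1 to 1000 slider responses into the five Likert categories of Table 17 is mechanical. The following sketch illustrates one way to do it; the helper names and the reverse-scoring detail for the negatively worded items are assumptions for illustration, not a description of the actual analysis code.

    import pandas as pd

    def to_likert(responses):
        """responses: a pandas Series of integers between 1 and 1000."""
        bins = [0, 200, 400, 600, 800, 1000]  # upper bounds of the five ranges in Table 17
        return pd.cut(responses, bins=bins, labels=[1, 2, 3, 4, 5]).astype(int)

    def reverse_score(responses, interval=True):
        """Recode a negatively worded item (e.g., items 3, 7, and 11) to a positive direction."""
        return 1001 - responses if interval else 6 - responses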

Descriptive statistics about data gathered from the post-survey responses are shown in Table 18.

Table 18
A Summary of the Post-Survey Data

Survey Item                                                              n    M        SD       Mdn
1. It is important to know how to graph data.                            45   798.87   236.44   856
2. Graphing is a useful skill to know.                                   45   815.64   211.39   848
3. I avoid graphing data whenever possible.                              45   608.27   283.91   619
4. In the future I would create a graph to solve a problem
   when it is appropriate and useful.                                    45   707.27   262.36   746
5. The interface is easy to use.                                         45   680.11   252.91   738
6. In the future I would use this interface if it were available.        45   699.51   271.89   750
7. The interface is difficult to use.                                    45   712.69   258.49   759
8. I think the application is useful.                                    45   721.04   266.49   793
9. This application helped me learn about graphing.                      45   741.93   257.02   793
10. In the future I would use this application if it were available.     45   730.78   259.15   776
11. The application is unhelpful.                                        45   804.49   220.84   869

The post-survey data showed that, in general, participants had high positive attitudes towards graphing, the interface, and the application. The internal consistency reliability of the 11-item post-survey was .89. A Factor Analysis using Principal Components Analysis was conducted to determine if the 11 items were measuring the attitude categories and also to highlight items that needed revision or items that should be eliminated. The sample size (n = 45) is somewhat small, but this procedure should provide an initial estimate of the reliability of the instrument. The attitude categories were treated independently, but it is clear that there is a relationship between the application and the interface; the interface is included in the application. This initial analysis assumes that the categories are independent, hence the Varimax rotation, but it is expected that further revision of the items would lead to a deeper critique of the relationship between the application and the interface. The Kaiser-Meyer-Olkin (KMO) statistic and Bartlett's test revealed that the factor analysis was appropriate for these data. The KMO statistic was .763, implying that reliable factors could be found from the data. Bartlett's test of sphericity was highly significant (p < .001), thus there were valid relationships among the variables.
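The checks and extraction described above (the KMO statistic, Bartlett's test of sphericity, and a principal components extraction with Varimax rotation) can be carried out with the factor_analyzer package, as in the sketch below. This is an illustrative reconstruction under the assumption that the 11 post-survey items are columns of a data frame; it is not necessarily the software or code used in the study.

    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

    def survey_structure(survey, n_factors=3):
        """survey: DataFrame with one column per survey item (interval responses)."""
        chi_square, p_value = calculate_bartlett_sphericity(survey)  # sphericity test
        _, kmo_total = calculate_kmo(survey)                          # sampling adequacy
        fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
        fa.fit(survey)
        loadings = pd.DataFrame(fa.loadings_, index=survey.columns,
                                columns=[f"Component {i + 1}" for i in range(n_factors)])
        # Suppress loadings below .5, mirroring the cutoff used for Table 19.
        return chi_square, p_value, kmo_total, loadings.where(loadings.abs() >= 0.5)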

Table 19 shows the rotated component matrices for the data, excluding loadings that were less than .5.

Table 19
Rotated Component Matrices of the Post-Survey Data for the 45 Participants

                                                                         Component
Survey Item                                                              1      2      3
1. It is important to know how to graph data.                            .780
2. Graphing is a useful skill to know.                                   .856
3. I avoid graphing data whenever possible.                              .834
4. In the future I would create a graph to solve a problem
   when it is appropriate and useful.                                    .856
5. The interface is easy to use.                                                .818
6. In the future I would use this interface if it were available.              .630
7. The interface is difficult to use.                                          .896
8. I think the application is useful.                                          .656   .590
9. This application helped me learn about graphing.                                   .934
10. In the future I would use this application if it were available.                  .652
11. The application is unhelpful.                                                      .823

The three components that were revealed corresponded to the three attitude categories. The Cronbach's alphas for the three scales were: attitude towards graphing (component 1) was .88, attitude towards the interface (component 2) was also .88, and attitude towards the application (component 3) was .86.

Item 8, "I think the application is useful," loaded on both attitude towards the interface and attitude towards the application, highlighting the need for closer examination of that item. Based on the data, the initial analysis of the post-survey suggested that the survey was consistent with its intended purpose.

As a point of interest, the researcher completed a factor analysis of the ordinal data (the interval data converted to the Likert scale). The factor analysis revealed 2 components for the ordinal data. Component 1 (items 7 to 11) was a combination of attitude towards the interface and attitude towards the application, and component 2 (items 1 to 4) was attitude towards graphing. Component 1 had a Cronbach's alpha of .89 and component 2 had an alpha of .85. The results of the ordinal data highlighted the possible relationship between the attitude towards the interface and the attitude towards the application. It is expected, however, that future evaluations will more clearly define and separate the interface construct from the application construct.

The pre-survey contained the first 4 items of the survey and focused on attitude towards graphing. These four items formed the basis from which the pre-survey and the post-survey were compared. Descriptive statistics about data gathered from the pre-survey responses are shown in Table 20.

Table 20
A Summary of the Pre-Survey Data

Survey Item                                                       n    M        SD       Mdn
1. It is important to know how to graph data.                     45   778.87   227.74   788
2. Graphing is a useful skill to know.                            45   798.31   233.76   826
3. I avoid graphing data whenever possible.                       45   591.20   265.25   653
4. In the future I would create a graph to solve a problem
   when it is appropriate and useful.                             45   653.07   288.92   691

The internal consistency reliability (Cronbach's alpha) of the 4-item pre-survey was .84, suggesting the survey was consistent with its intended purpose. The pre-survey data showed that in general, participants had high positive attitudes towards graphing. This result suggested that there might not be a significant difference between the attitude towards graphing before the instruction and the attitude towards graphing after the instruction.

A comparison of the pre-survey and the first four items of the post-survey revealed that the responses were very consistent, that is, the scores were similarly distributed. Figure 20 shows a comparison of the pre and post survey responses across the first four survey items. The intervals on the abscissae correspond to the Likert scale intervals.

Figure 20. Histograms of the pre-survey and post-survey showing the distribution of the responses across the survey items (n = 45).

Figure 21 is a scatterplot of the average pre-survey responses and average post-survey responses for each participant. The scatterplot shows that the responses cluster in the area representing a high attitude towards graphing before and after the instruction.

Figure 21. A scatterplot of the average pre and post survey responses.

The pre-survey and post-survey were also examined in an effort to determine if there were significant differences in the responses. The data were analyzed using the paired-samples t test procedure. The t test revealed no significant differences between the pre-survey and post-survey responses on any of the survey items. The p-value for each survey item was greater than the .05 alpha level, thus the null hypothesis of no difference between the pre-survey and post-survey means could not be rejected. Table 21 shows the results of the t test analysis.

Table 21
Results from the T Test Analysis on the First 4 Items on the Pre-Survey and Post-Survey

Survey Item                                               Mean Diff.   SD of Diff.   t       df   Sig. 2-tailed   Cohen's d
1. It is important to know how to graph data.             -20.00       194.57        -0.69   44   .49             -.102
2. Graphing is a useful skill to know.                    -17.33       178.95        -0.65   44   .52             -.097
3. I avoid graphing data whenever possible.               -17.07       224.49        -0.51   44   .61             -.076
4. In the future I would create a graph to solve a
   problem when it is appropriate and useful.             -54.20       234.38        -1.55   44   .13             -.231

The data confirmed that, with respect to the attitude towards graphing construct, the pre-survey and post-survey responses were not different. The mean post-survey response (M = 732.51, SD = 213.86) was 3.85% greater than the mean pre-survey response (M = 705.36, SD = 210.31). The difference between the means is less than the 10% benchmark established in chapter three, thus the initial effectiveness of LOGIS in terms of attitude change could be viewed as questionable. The researcher does not suggest that LOGIS is without value. This initial analysis reveals that the fundamental constructs of the survey and the survey items themselves need to be examined and refined.

Reflecting on the instruction, it is now clear that of the seven modules participants experienced, only one, "The Importance Of Graphing," contained content that focused on the rationale for graphing. Of the 360 instructional frames that participants completed, only the 46 frames in the "The Importance Of Graphing" module addressed attitude towards graphing. It is ambitious to think that an attitude could be changed when less than 13% of the instructional content was dedicated to that purpose.

Graphing Proficiency Test Development

The purpose of the Graphing Proficiency Test was to determine the starting position of the participants. If a participant scored 100%, an arbitrarily chosen standard, on this test, they would then be exempt from completing the Graphing Basics module. No participant scored 100%, therefore all the participants were required to complete the Graphing Basics module. There was no proposed development guide for the Graphing Proficiency Test, thus it was not systematically developed. This mistake on the part of the researcher significantly affected the value of this instrument. The final version of the test consisted of 10 items: 8 were Short-Answer items and 2 were Alternate-Choice items. The final Graphing Proficiency Test items are listed in Appendix P.

Errors were found in the first version of the test and those items were revised upon discovery. The first 16 participants who completed the study experienced the first version of the proficiency test while the remaining 31 participants experienced the final version. Due to a programming syntax error, the last frame on the test returned erroneous data for 33 participants. It must be noted that the first and last frames of the Graphing Proficiency Test were not true test items.

The first item was the introduction frame explaining the test, and the last item was the conclusion frame explaining the next step in the instruction for participants. The internal consistency reliability of the proficiency test, the Cronbach's alpha, was .46.

Table 22 describes the overall performance on the second version of the Graphing Proficiency Test.

Table 22
A Summary of the Statistical Data from the Graphing Proficiency Test

Source                       n    Min     Max     Mdn     M       SD      Skewness         Kurtosis
Graphing Proficiency Test    47   20.00   90.00   50.00   52.77   18.38   0.36 (SE=0.35)   -1.02 (SE=0.68)

Figure 22 is a histogram of the grade distribution for the Graphing Proficiency Test showing the performance of the participants.

Figure 22. A histogram of the proficiency test scores showing the distribution of the scores.

The lack of a development plan for the proficiency test negatively affected the resulting data. The internal consistency was low and participants performed poorly on the test. Based on the data, it is expected that this test would be significantly revised.

Descriptive Information From The One-To-One Evaluation

Descriptive information was collected using questions introduced in Dick et al. (2005, p. 283):

1. Did you have any problems with specific words or phrases?
2. Did you have any problems with specific sentences?
3. Did you understand the themes presented in the instruction?
4. How effective was the sequencing of the instruction?
5. How would you describe the delivery pace of the instruction?
6. Was sufficient content presented in each section?
7. Was there sufficient time to complete the instruction?
8. What is your overall reaction to the instruction?
9. What are specific strengths of the instruction?
10. What are specific weaknesses of the instruction?

In addition to reacting to the preceding 10 questions on paper, the three participants in the one-to-one evaluation were asked to verbally comment on the interface, the content, and the functionality of the application. They also reported their likes, dislikes, and changes they would make to the application. Two of the participants in the one-to-one evaluation had course averages of A at the time of the evaluation. The third participant had a course average of D at the time of the evaluation. The actual distribution of participants in the one-to-one evaluation was slightly different from the proposed distribution. The proposed distribution included one student who was a medium performer, but this was not possible due to scheduling conflicts. Appendix Q shows the three participants' actual written reactions to the 10 questions.

The one-to-one evaluation group provided significant help in getting LOGIS to work properly. They identified, for example, content errors, grammatical errors, and typographical errors. Observing the participants in this group use the application revealed errors that affected program flow and syntax errors that affected program function. The contributions of the one-to-one group were significant, but most of the important revelations occurred as a direct result of observing the participants as they worked.

For example, watching these early participants manipulate the mouse to complete the practice tasks led to the realization that the original 1-pixel fidelity setting was too exact. The first participant completed some of the graphing practice tasks while the fidelity setting was one pixel. This fidelity setting required that the participant locate individual pixels on the monitor. The researcher noticed that the participant was becoming increasingly frustrated because of the eye strain that resulted from having to concentrate intensely on the computer monitor. The researcher changed the fidelity setting to 10 pixels, allowing for a margin of error when graphing. The participant completed the instruction with a significantly reduced level of frustration. The identification and correction of this issue would not have been possible if the researcher did not directly observe the participant.

The importance of textual or verbal reactions cannot be minimized, but based on the textual responses in Appendix Q it is clear that observing the participants as they worked was a superior way of gathering data compared to written reactions by the participants. The original intent of the questions was to generate user feedback that could be used when revising the application, but it would be very hard to revise the application based on the written feedback in Appendix Q. The questions did not readily lend themselves to lengthy written responses, thus participants' responses were very short and occasionally cryptic. It is apparent that the questions were too general for this application, and that more useful data could have been recovered if the questions were more specific to this application.
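The 1-pixel versus 10-pixel fidelity setting described above amounts to a tolerance, in pixels, around the target location of a graphing response. The sketch below illustrates the idea; the function and parameter names are hypothetical and do not reflect LOGIS's actual implementation.

    import math

    def within_fidelity(clicked, target, fidelity_px=10):
        """clicked, target: (x, y) screen coordinates in pixels."""
        dx = clicked[0] - target[0]
        dy = clicked[1] - target[1]
        return math.hypot(dx, dy) <= fidelity_px

With fidelity_px set to 1, the learner must hit the exact pixel, the setting that produced the eye strain noted above; raising it to 10 allows the margin of error that reduced the participant's frustration.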

In addition to the questions, the participants were asked to verbally respond to 6 aspects of the task: the interface, the content, the functionality of the application, their likes, their dislikes, and changes they would make. Appendix R is a summary of the verbal reactions given by the one-to-one evaluation group.

The data obtained from the informal interview (see Appendix R) were significantly superior to the data obtained from the written questions. The participants addressed specific issues and made suggestions that they felt would make the application better. It is possible that the unstructured nature of the interview resulted in less inhibition on the part of the participants, or it is possible that the 6 aspects were sufficiently task-specific to generate useful feedback. Regardless of the reasons, the data obtained from the informal interview were extremely helpful in fixing immediate critical errors, and the data also helped shape an overarching view of the ways in which the application could be optimized.

The one-to-one evaluation is used to detect obvious errors in instruction and also to obtain initial performance data and user reactions. It is now clear that to achieve those objectives, questions requiring written responses need to be worded such that they elicit rich responses from participants. The data from this evaluation suggested that in addition to the LOGIS application itself, the evaluation items also needed revision. Additionally, the participants must possess a basic knowledge about the application, the process, or both so that their critique is well informed.

Descriptive Information From The Small-Group Evaluation

Similar to the one-to-one evaluation group, the 13 participants who were in the small-group evaluation group wrote reactions to the 10 questions introduced in Dick et al. (2005, p. 283). Appendix S is a summary of the participants' responses. Similar to the one-to-one evaluation, most reactions were very short and sometimes cryptic. Some participants provided detailed feedback, but this seemed to be a function of the participant and not a result of the evaluation items.

The one-to-one evaluation and small-group evaluation provided useful data despite the shortcomings of the 10 questions. Apart from the typographical, content, and application errors that were revealed, participants overwhelmingly reported that the instruction was too long. In defense of the instruction, it must be noted that under normal circumstances participants would not be required to complete all the modules at one sitting, thus the length of the instruction would not be a significant issue. The most important issue revealed was the relationship between the number of frames and the number of modules. Participants preferred more modules and fewer frames per module, even if the total number of frames remained the same. This revelation should significantly influence future versions of LOGIS, as the instruction should now move towards more modules and fewer frames per module.

Development Phase Critique

1. Was the rationale for including or omitting this phase clear?
The rationale for including this phase was clear. This phase could not have been omitted.

2. Are all required elements of this phase accounted for?
The required elements of the Development phase were all necessary and consequently they were all completed.

3. Was this phase temporally cost effective to complete?
It was difficult to estimate the effectiveness of this phase because the researcher expected that this phase would consume most of the overall development time. The Development phase took approximately 18 weeks and that translated to about 55% of the total time. More than 50% of the time in this phase was spent programming and testing the application. The remaining time was divided among developing the content, the assessments, and the surveys.

4. Are the products from this phase observable and measurable?
The Development phase resulted in goals that were observable and measurable. The application, content, and assessments were all artifacts of this phase.

5. Are the products from this phase consistent with the overall goal of the instructional application?
The products of the stage were consistent with the overall goal of the application.

Implement Phase Report

Gagne et al. (2005, p. 22) listed the two components of the Implement Phase: market materials for adoption by teachers or students, and provide help or support as needed. These two components were not addressed during the current study.

Implement Phase Critique

1. Was the rationale for including or omitting this phase clear?
This study did not go beyond the design and development of LOGIS, thus the Implement phase was not required.

2. Are all required elements of this phase accounted for?
The question is not applicable.

3. Was this phase temporally cost effective to complete?
The question is not applicable.

4. Are the products from this phase observable and measurable?
The question is not applicable.

5. Are the products from this phase consistent with the overall goal of the instructional application?
The question is not applicable.

Evaluate Phase Report

The actual implementation of the Evaluation Phase components occurred in the Development phase. This did not occur by design; no other solution seemed practical and reasonable. Considering that the products of the Evaluation phase were all used in the revision process, the role of a separate and independent Evaluation phase becomes unclear. It is important to note that the overarching question is not the necessity of evaluation; rather, the question is whether a separate evaluation phase is necessary. The case could be made that most of the evaluation should occur during the development phase. An evaluation phase would only be needed if, for example, the application were being experimentally evaluated.

Evaluate Phase Critique

1. Was the rationale for including or omitting this phase clear?
The components of this phase were implemented in the Development Phase because no other solution could be found.

2. Are all required elements of this phase accounted for?
The "Implement plans for unit (course) maintenance and revision" component was not implemented because it was outside the scope of this study.

3. Was this phase temporally cost effective to complete?
The components of the phase were cost effective to complete.

4. Are the products from this phase observable and measurable?
The products from this phase were observations and measurements.

5. Are the products from this phase consistent with the overall goal of the instructional application?
The products of the stage were consistent with the overall goal of the application.

Summary

This chapter outlined the actual model-based development of LOGIS. The prescriptive nature of the documentation of phases provided an avenue to answer the research question and helped to organize the development process. The researcher viewed the creation of instruction as a dynamic and ongoing process that required constant evaluation and revision, thus the Evaluation phase was omitted and its components implemented in the Development phase.

The data from this initial analysis of LOGIS revealed that the application has the potential to be effective if it is revised accordingly. The data revealed a statistically and educationally significant increase in the Knowledge component, although the mean posttest score was low. The effectiveness criterion for the Skills component could not be evaluated because no pre-measure was taken, but the data revealed a low mean score on this component. The data from the Attitude component revealed no significant difference from the pre-survey to the post-survey, but the overall ratings of attitudes towards graphing, the interface, and the application were positive. LOGIS was not evaluated using an experimental design, thus the increases cannot be attributed entirely to the application. These data were produced from the first iteration of testing on LOGIS and it is expected that future revisions would increase the value of the application.

CHAPTER FIVE

CONCLUSIONS

Chapter Map

This chapter answers the two main research questions, presents a general reflection, and discusses future research directions. The research questions are discussed in practical terms and within a theoretical context. The following map describes the organization of the chapter:

Conclusions
    Chapter map
    Research question 1
        Organization
        Time
        Perspective
    Research question 2
    General Reflection
    Future direction
    Summary

Research Question 1

How does the use of the ADDIE model influence the creation of the LOGIS instructional application?

The ADDIE model had a significant impact on the creation of LOGIS. The influence of the model can be categorized into three groups: Organization, Time, and Perspective. The effects of the model can be concisely stated as:

Organization
- The model forced a high level of planning to occur.
- The model dictated the sequence of tasks to be accomplished. This reduced frustration because there was never a question of what to do next.
- The model facilitated the definition of terminal states and made it easier to determine when a task had been completed and when it was time to move to the next item.
- The model caused the researcher to constantly monitor the high-level view (the "big picture") and the low-level details, and the way each affects the other.

Time
- The model influenced the overall development time.

Perspective
- The model forced the researcher to revisit previously held assumptions about the design of instruction.
- The model facilitated reflection and diagnosis after the study was completed.

Organization

Planning and organization are features inherent in the ADDIE model, and following the components in each phase is one way to increase the probability that the project will result in good products. In theory, the model should provide a framework that guides the development of an application while facilitating a certain level of creativity on the part of the developer. In practice, the researcher can confirm that the ADDIE model provided a structure that kept the development process on target.

The importance of planning was clearly visible during the development process. The data derived from planned elements were consistent and meaningful, but data derived from under-planned elements, for example the Graphing Proficiency test, were not reliable. In the case of the Graphing Proficiency test, no participant received 100%, thus they all had to complete the Basic Graphing module. The consequences of this under-planned test were not severe, but in general a developer should ensure that every event is as planned and as controlled as possible. This speaks to the interconnectedness of the development process. If any element of the process is not sufficiently planned, the effects will ripple throughout the entire development process, reducing the usefulness and effectiveness of the final product.

One clear advantage of ADDIE is the facilitation of transition between components and phases. The researcher was never lost during the development process, and this was primarily because of ADDIE's built-in definitions of terminal products. Using the model required that the researcher define terminal states, thus the researcher always knew when a task was completed and when the next task could begin. This capability was important because it provided a mechanism to ensure that individual components received their fair share of time and resources.

The ability to estimate time and resources early in the development process made the planning process easier. The identification of terminal states was also important because it was a source of motivation. The completion of each component marked the beginning of another, and this was a significant motivator because each completion meant that the researcher was one step closer to completing the project.

LOGIS contains many elements, and each element was designed to function within the broader context of the application. The ADDIE model helped to create a situation where the researcher could concentrate on individual low-level tasks while maintaining a high-level perspective (the "big picture") on the project. The benefit was that during the planning and execution of a component, the researcher could easily change perspective to see how the individual component would affect the overall project. Maintaining a high-level perspective allowed the researcher to identify potential problems and take corrective measures early in the process. This ability was critical because the researcher was the sole developer and was responsible for every aspect of the project. It could be argued that the use of a model is the most important consideration when a single developer is creating an application. This is exemplified by the fact that the structure of the ADDIE model provided the researcher with a mechanism for forecasting, thus helping to avoid significant problems in the later stages of development.

The researcher categorized developers as less-skilled, moderately-skilled, and skilled, where experience significantly contributes to but does not determine skill level. This researcher can be described as a moderately-skilled developer. The planning and organization features that are inherent in the ADDIE model are very helpful to less-skilled and moderately-skilled developers.

Developers who do not have sufficient organizational or project-management skills can use the model as a structured environment to complete their task. In this scenario, the model masks or compensates for some of the deficiencies of the developer by providing a structured framework that can be used as a pathway for successful development. The model's disciplined and methodical approach to development is beneficial to less-skilled developers, but it might also be a source of frustration for those developers. The developer might feel restrained in a structured environment, and this might lead to a situation where components or phases are eliminated because they initially appear unimportant. It is critical that less-skilled developers understand that the structured and systematic nature of the model is the source of the model's usefulness. If a quality product is expected, the less-skilled developer must resist the urge to take shortcuts by eliminating components.

It may be the case that moderately-skilled and skilled developers do not need to use a model or that they can be successful despite skipping components. It is clear that skilled developers possess a level of experience that translates into faster and more accurate development, but this does not necessarily mean that a model is not needed, or that they do not use models. It can be argued that skilled developers use models and do not skip components. Their expertise and experience make it appear as though they skip components when in fact the components are simply being completed efficiently, internally, or both. A moderately-skilled developer may have some of the skills necessary for fast and accurate development. These developers must constantly and critically evaluate themselves to ensure that they operate within the bounds of their skill level.

If they assume that they are at a higher level, these moderately-skilled developers will be susceptible to the same dangers that less-skilled developers face.

The skill level of the researcher was an important variable in the development process. This researcher can confirm that although there were issues regarding workload, the project would have been significantly more difficult without the model. The ADDIE model acted as a mask and compensated for the moderate experience of the researcher. The organizational and project-management skills of the researcher were enhanced by the model and it was easy to stay on target. The researcher maintained the belief that if the ADDIE steps were earnestly followed, the resulting product would be valuable. This "blind faith" was necessary because the researcher did not have prior experience developing this type of instructional application.

The importance of the role of the researcher in the development process cannot be overstated. In this project, the researcher was the analyzer, the subject matter expert, the designer, and the programmer. In this case, the management and execution of these roles placed an extraordinary burden on the researcher, and the temptation to eliminate components was always present. The constant re-visualizing of the development process and the constant forecasting of possible problems were very frustrating at times. This was probably not due to the presence of the model, but more an artifact of the researcher being the sole developer. It could be argued that many issues related to workload and frustration would not exist if the researcher was not the sole developer.

The relationship between the need for a model and the role of the developer is complex. The researcher was the sole developer in this project and consequently the researcher was extremely reliant on the structure of the ADDIE model.

It is difficult to comment on what would have happened if the researcher was not the sole developer, or if another model, or no model, was used in the development process. The researcher, however, can make suggestions that are based on the experiences gained during this study. It is reasonable to suggest that as the developer assumes a greater role, and therefore a greater workload in a project, the developer should be more inclined to use the framework that a model provides. At minimum, the framework would provide a mechanism to organize the workload and ultimately help to create a successful product.

Time

It became clear, upon reflection, that using ADDIE and the steps outlined in Gagne et al. (2005) was not the most temporally efficient way to develop LOGIS. A significant amount of time was spent adapting the requirements of the project to components in the ADDIE phases. This situation was evident during the Analysis and Design phases where goals and objectives were clarified. During these phases, the researcher used the ADDIE components to redefine pre-existing goals in an effort to make the goals fit the model's structure. This task was not wasted effort because those redefined goals were critical during later phases. The key issue is to what degree a model accommodates a given project. It is clear that some models are better suited for certain tasks, but it can be argued that the generalness of the ADDIE model makes it usable for all projects, yet not optimally suited for any project. In this case, the ADDIE model forces the developer to spend time generating and refining the products of the early phases, at the cost of development time. This extra time is necessary because the developer must compensate for ADDIE's generalness by ensuring that each element of each component is well defined and developed.

The task of detailing every aspect of a project is inherently a worthwhile task, but it might not be necessary if a model is optimally suited for a given project. The ADDIE model is not optimally suited for any task, thus its success is dependent upon the systematic, structured, and detailed completion of every aspect of a project.

This study addresses key arguments made by Gordon and Zemke (2000) against ISD. The researcher can confirm the claim by Gordon and Zemke (2000) that the model is rigid, but this was expected given that the model is systematic and linear in its approach to development. The researcher does not agree, however, that using the model as prescribed produces bad results. Bad results are the natural consequences of bad ideas, poor planning, haphazard development, or faulty implementation. If followed earnestly, without skipping steps, the model will compensate for some of the less-skilled developer's deficiencies, but it cannot transform a bad idea into a good idea. Instead of focusing on the product, a more valid criticism of the model might be its effect on the development process. The model might have an adverse effect on the development process under certain conditions, but criticisms of the model cannot be made without commenting on the role of the developer in the project, the skill level of the developer, and the nature of the task. Those variables seem to influence the development process significantly and thus they should be included in any critique of the ADDIE model.

Perspective

Using the ADDIE model as outlined in Gagne et al. (2005) resulted in an analysis of the learning process and the nature of instruction. Completing the components in each phase and determining what to include and where was literally an exercise in shaping a value system. In essence, working with ADDIE caused the researcher to make value judgments regarding learning and instruction.


In essence, working with ADDIE caused the researcher to make value judgments regarding learning and instruction. For example, the researcher had to reconcile the idea of evaluation: its place in the development cycle and its role in instruction. In this study, the researcher considered evaluation to be an important component of the instruction process, but it was not limited to the evaluation of learners. The researcher concluded that the application, which includes the interface elements and their interaction, was just as important as the content in terms of the instruction and learning processes. Using this viewpoint, equal emphasis was placed on both the development and refinement of the application and the development and refinement of the content and assessment instruments. Consequently, learners were evaluated in terms of content (knowledge and skills tests) and in terms of their interaction with the application (survey). It is important that developers recognize the importance of the application in the instruction and learning processes. It is not unreasonable to suggest that well-designed content and assessment instruments are less effective and reliable if their delivery systems are not equally well designed. The key perspective is that the content, the assessment, and the application (the interface elements and their interactions) are all equally important in the evaluation process.

Using the ADDIE model influenced the researcher's view of the general nature of instructional applications. The ADDIE model emphasizes the use of formative evaluation in the development process. If an evaluation reveals a deficiency, the application must allow the developer to implement the required changes with relative ease, hence the concept of an Instructional Engine. LOGIS was designed to be flexible enough to accommodate a range of instruction.


For example, if an evaluation revealed that learners were weak in the area of plotting points, a Module covering that topic could easily be added to LOGIS. This was the result of developing LOGIS such that it could accommodate instruction on any topic related to linear graphing. The underlying principle was the perspective that the application and the content could be decoupled such that the application could accommodate various content. The benefit of this perspective was that additional similar content could easily be added to LOGIS, and the current content could be modified with full confidence that the changes would not adversely affect the application.

One artifact of using the model was the ability to look critically at the product and determine the location and cause of problems. The systematic nature of the model makes it easier to determine where errors occurred, the elements that contributed to the error, and the elements that were affected by the error. The survey data can be used to illustrate the point that the model made it easy to find the source of errors. If LOGIS were evaluated using an experimental design and the data revealed that there was no significant attitude change from pre-survey to post-survey, the researcher might be prompted to investigate how much instruction time was spent on attitude. Looking at the model, it would be easy to determine the exact point where adjustments should be made to cause a different outcome. In this case, an extended definition of the attitude goal would help emphasize the importance of attitude change, but more significantly, the Design phase component "Determine the instructional topics (or units) to be covered, and how much time will be spent on each" would be expanded to include more attitude-based content.
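To make the decoupling concrete, the sketch below shows one way an application and its content can be separated so that a new module (for example, one on plotting points) can be registered without touching the delivery code. The class and method names (ContentModule, InstructionalEngine, register, deliver) are hypothetical illustrations; they are not taken from the actual LOGIS Java source (see Appendix T) and only approximate the design principle described here.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: content is expressed as data the engine can consume,
// so new modules can be registered without changing the engine itself.
interface ContentModule {
    String title();
    List<String> frames();   // instructional frames presented in sequence
}

class PlottingPointsModule implements ContentModule {
    public String title() { return "Plotting Points"; }
    public List<String> frames() {
        return List.of(
            "A point is located by its X and Y coordinates.",
            "Plot the point (2, 4) on the grid below.");
    }
}

class InstructionalEngine {
    private final List<ContentModule> modules = new ArrayList<>();

    // Adding remedial content is a registration call, not an engine change.
    void register(ContentModule module) { modules.add(module); }

    void deliver() {
        for (ContentModule m : modules) {
            System.out.println("Module: " + m.title());
            m.frames().forEach(System.out::println);
        }
    }
}

public class EngineDemo {
    public static void main(String[] args) {
        InstructionalEngine engine = new InstructionalEngine();
        engine.register(new PlottingPointsModule()); // e.g., added after a formative evaluation
        engine.deliver();
    }
}

Because the engine depends only on the ContentModule interface, a formative evaluation that exposes a weakness can be answered by registering another module, which mirrors the Instructional Engine idea described above.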


Research Question 2

Is LOGIS an effective form of instruction?

a. Is LOGIS an effective form of instruction as measured by educationally significant differences in learners' performance from the pretest to the posttest on the Knowledge measure?

b. Is LOGIS an effective form of instruction as measured by educationally significant differences in learners' performance from the pretest to the posttest on the Skills measure?

c. Is LOGIS an effective form of instruction as measured by educationally significant differences in learners' attitude towards graphing from the pre to the post Graphing Attitude Survey?

The data from the analysis of LOGIS must be viewed within the correct context. The effectiveness of the LOGIS instructional application was determined subject to several considerations. Firstly, LOGIS was not experimentally evaluated; thus these initial data results do not imply causality on the part of the application. The initial data results simply gauge the initial value of LOGIS and provide a basis for comment on the potential of the application. Secondly, under normal circumstances, participants would not need to complete the entire instruction in one sitting. It is possible that the time taken to complete the task affected the performance of the participants. Finally, this study represents the first iteration in the development of LOGIS. Further revision will increase the value of the application.


The data in this initial analysis of LOGIS revealed that the application was more effective and useful in terms of the Knowledge measure than in terms of the Skills and Attitude measures. The Knowledge Assessment posttest scores (M = 62.41, SD = 11.96) were significantly higher than the pretest scores (M = 43.36, SD = 8.22). Although the participants performed statistically and educationally (a 10% increase from pretest to posttest) better on the posttest than they did on the pretest, their overall performance on the posttest was below average. These initial data revealed that several items should be revised or eliminated in an effort to continue the development of a valid and reliable knowledge assessment instrument.

There was no pretest for the Skills task, thus the criterion for educational significance (a 10% increase from pretest to posttest) could not be assessed. Overall performance on the Skills Assessment (posttest) was average. Performance on the Cumulative Graph task was average (M = 75.76%, SD = 13.78), while performance on the Multiple-Baseline Graph task was below average (M = 61.21, SD = 29.35). These initial data suggest that the tutorial content and the grading rubric should be examined and refined to increase the value of LOGIS.

Participants had positive attitudes towards graphing, towards the interface, and towards the application. The post-survey responses were 3.85% greater than the pre-survey responses; thus the increase was not educationally significant.
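Restating the reported figures against the 10% criterion may help the reader; the arithmetic below only restates the values reported above, and reading the criterion as either a gain of 10 points on the percentage-scored measure or a 10% relative gain is an interpretive assumption rather than a definition taken from the study.

\[
\bar{X}_{\mathrm{post}} - \bar{X}_{\mathrm{pre}} = 62.41 - 43.36 = 19.05
\qquad\text{(a relative increase of } 19.05 / 43.36 \approx 43.9\%\text{)}
\]
\[
\text{Attitude survey change} = 3.85\% < 10\%
\]

Under either reading, the knowledge gain exceeds the criterion while the attitude change falls short, which is consistent with the conclusions stated above.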


General Reflection

The current study used the ADDIE components outlined by Gagne et al. (2005) verbatim. Under normal circumstances, a developer would probably adjust the model's components to meet the needs of the task, resulting in at least a reduction in frustration level. The resulting development process would reflect the developer's interpretation of the model, possibly resulting in an optimal solution to the task. The researcher's use of the ADDIE components verbatim is strong testimony to the fact that the model is flexible enough to produce good results even when a very constrained interpretation of its components is applied. This suggests that the model increases in value when it is optimized for a particular task.

The effect of the model on the development process was categorized by Organization, Time, and Perspective. In addition to the categories, the researcher identified three variables that also contribute to the relationship between the developer and the model: the researcher as the sole developer, the skill level of the developer, and the nature of the task. The relationships among the three variables, the model, and the developer are unclear, but the researcher contends that they are important considerations.

In terms of the researcher as the sole developer, it is reasonable to suggest that a team implementation of ADDIE would have significant benefits in terms of the development process. Workload and subsequent frustration levels are immediately reduced when the team approach is implemented. In addition, each team member can focus on specific tasks, resulting in optimal solutions for each aspect of the project. The most significant drawback of the team approach is the issue of organization. If organization and communication issues exist, the development process is more prone to, for example, delays caused by duplicated effort. In the team approach, organization and communication should be at the forefront of development.

The effect of the researcher's skill level on the development process is an important consideration. The researcher had extensive programming experience, and it is unclear how this study could have been completed without significant programming ability.


In this study, ADDIE's facilitation of forecasting was paired with the researcher's ability to critically and accurately assess future issues, resulting in a complete view of future factors that might affect the project. This was important because the researcher could predict problem areas and take action early to avoid them. The individual contributions of the researcher's abilities and the model's characteristics are unknown, but it is clear that they both influence the development process.

One of the most important reasons for using a model is the consistency that it affords. Using a model should result in products that are optimal and reproducible. One important consideration is the degree to which the model minimizes the effects of the developer such that similar results can be obtained by different developers. Based on the results of this study, it can be stated that the use of the ADDIE model does not completely mitigate the effects of the developer, but it does provide a mechanism through which different developers can produce similar results. This consideration can be extended to Developmental Inquiry in general. It is worthwhile to ask whether, if a different developer followed the same process as this researcher, the resulting product would be similar to LOGIS. The idea of minimizing variance between researchers is pertinent to both the ADDIE model and Developmental Research. Unlike experimental methods, the ADDIE model and Developmental Research methods cannot eliminate the variance between researchers. They do, however, provide a level of specificity greater than that of extremely qualitative methods; hence different developers can produce results that are comparable.

The researcher reviewed literature that called for an increased emphasis on Developmental Research.


This researcher agrees with the assessment that the Instructional Technology field should place more focus on Developmental Research. From a practical perspective, the researcher did not have any significant frame of reference for visualizing or executing this study. Theoretical assertions are important and they were used in this study, but there was a lack of practical, usable models that the researcher could use to steer the development process and the documentation of the process. This study could have benefited from an established body of scholarship because, for example, the importance of monitoring "time" as a variable was not an initial consideration. A more developed body of research would have prompted the researcher to include or exclude certain elements, thereby increasing the value of the study. This study can serve as one of the initial data-based examinations of the development process. The hope is that it provides an additional perspective from which the development process can be examined.

Future Direction

Future research on this topic should include LOGIS, variables affecting ADDIE, and Developmental Research.

The LOGIS application will be revised based on the data and conclusions in this study. The interface will be retooled with more attention focused on font sizes, scrolling, and task instructions. The content will be revised to include more examples and to reflect better priming and prompting. The content will also be reorganized into more modules with fewer frames per module.

The assessments will be revised to increase their reliability.


Knowledge assessment items that performed poorly will be revised or removed, and the total number of items in the assessment will be reduced. The Skills Assessment rubric will be refined to reflect better standards, and the grading scheme will be revised to reflect the increased importance of accurately plotting data points. The survey will be revised such that there is a clear distinction between the attitude-towards-the-interface and attitude-towards-the-application constructs. The Graphing Proficiency test will be revised to include more systematically developed items. Addressing these issues should provide a good foundation for the second iteration of LOGIS refinement.

Three variables appear to be significant in the development process: the researcher as the sole developer, the skill level of the developer, and the nature of the task. The relationship between these three variables and the use of a model should be examined carefully. Future research should try to determine the contributions of each variable to the development process.

Future research should include a more in-depth look at Developmental Research and its role within the field. More data-based research should be attempted in an effort to determine the true value and usefulness of Developmental Research.


223 REFERENCES Ajzen, I. (2001). Nature and operation of attitudes Annual Review of Psychology, 52 2758. Alberto, P. A., & Troutman, A. C. (2006). Applied behavior analysis for teachers (7th ed.). Upper Saddle River, New Jersey: Pearson Merri ll Prentice Hall. Alessi, S. M., & Trollip, S. R. (2001). Multimedia for learning: Methods and development (3rd ed.). Needham Heights, Massachusetts: Allyn a nd Bacon. Barton, R. (1997). Computer-Aided Graphing: a compa rative study. Journal of Information Technology for Teacher Education, 6 (1), 59-72. Bloom, B. S. (1956). Taxonomy of educational objectives; the classificat ion of educational goals (1st ed.). New York,: Longmans Green. Brasseur, L. (1999). The role of experience and culture in computer grap hing and graph interpretive processes. Paper presented at the Proceedings of the 17th ann ual international conference on Computer documentation. Brasseur, L. (2001). Critiquing the Culture of Comp uter Graphing Practices. Journal of Technical Writing and Communication, 31 (1), 27-39. Burton, J. K., Moore, D. M., & Magliaro, S. G. (200 4). Behaviorism and instructional technology. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (2nd ed., pp. 3-36). Mahwah, N.J.: Lawrence Erlbaum. Campbell, J. O., Bourne, J., Mosterman, P. J., & Br odersen, A. J. (2002). The effectiveness of learning simulations for electroni c laboratories. Journal of Engineering Education, 91 (1), 81-87. Clark, R. C. (2002). The new ISD: Applying cognitiv e strategies to instructional design. Performance Improvement, 41 (7), 8-15. Clark, R. E. (1983). Reconsidering Research on Lear ning from Media. Review of Educational Research, 53 (4), 445-459.

PAGE 239

224 Clark, R. E. (2005). Guided experiential learning: Training design and evaluation: University of Southern California, Institute for Cr eative Technologies. Couture, M. (2004). Realism in the Design Process a nd Credibility of a Simulation-Based Virtual Laboratory. Journal of Computer Assisted Learning, 20 (1), 40-49. Crano, W. D., & Prislin, R. (2006). Attitudes and p ersuasion. Annual Review of Psychology, 57 345-374. Crocker, L. M., & Algina, J. (1986). Introduction to classical and modern test theory New York: Holt Rinehart and Winston. Cuban, L. (1986). Teachers and machines: The classroom use of technol ogy since 1920 New York, New York: Teachers College Press. Davis, D. R., Bostow, D. E., & Heimisson, G. T. (20 05). Experimental Evaluation of Incremental Prompting as a Feature of Web-Delivered Programmed Instruction (Applied Behavior Analysis). Paper presented at the Association for Behavior Analysis, Chicago, Illinois. Davis, D. R., Bostow, D. E., & Heimisson, G. T. (in press). Strengthening scientific verbal behavior: An experimental comparison of pro gressively prompted, unprompted programmed instruction, and prose. Journal of Applied Behavior Analysis de Jong, T., & van Joolingen, W. R. (1998). Scienti fic Discovery Learning with Computer Simulations of Conceptual Domains. Review of Educational Research, 68 (2), 179-201. De Vaney, A., & Butler, R. P. (1996). Voices of the Founders: Early Discourses in Educational Technology. In D. H. Jonassen (Ed.), Ha ndbook of research for educational communications and technology : a project of the as sociation for educational communications and technology (pp. 3-43). New York: Macmillan Library Reference USA. Dick, W., Carey, L., & Carey, J. O. (2005). The systematic design of instruction (6th ed.). Boston, Massachusetts: Pearson/Allyn and Bacon. Emurian, H. H., & Durham, A. G. (2003). Computer-ba sed tutoring systems: a behavioral approach. In J. A. Jacko & A. Sears (Eds.), The human-computer interaction handbook: fundamentals, evolving technologies and e merging applications (pp. 677-697): Lawrence Erlbaum Associates, Inc.

PAGE 240

225 Ericsson, K. A., Krampe, R. T., & Tesch-Rmer, C. ( 1993). The Role of Deliberate Practice in the Acquisition of Expert Performance. Psychological Review, 100 (3), 363-406. Fraenkel, J. R., & Wallen, N. E. (2006). How to design and evaluate research in education (6th ed.). New York: McGraw-Hill. Gagne, R. M., Wager, W. W., Golas, K. C., & Keller, J. M. (2005). Principles of instructional design (5th ed.). Belmont, California: Wadsworth/Thomson Learning. Gal, I. (2002). Adult statistical literacy: Meaning s, components, responsibilities. International Statistical Review, 70 (1), 1-25. Gilbert, J. E., & Han, C. Y. (1999). Adapting instr uction in search of ‘a significant difference’. Journal of Network and Computer Applications, 22 (3), 149-160. Gordon, J., & Zemke, R. (2000). The Attack on ISD. Training, 37 (4), 42-45. Gredler, M. E. (2004). Games and simulations and th eir relationships to learning. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (2nd ed., pp. 571-581). Mahwah, N.J.: Lawrence Erl baum. Hansen, R. E. (2000). The Role of Experience in Lea rning: Giving Meaning and Authenticity to the Learning Process in Schools. Journal of Technology Education, 11 (2), 23-32. Hartley, J. (1998). Learning and studying : a research perspective London ; New York: Routledge. Jarvis, P. (1987). Adult learning in the social context London ; New York: Croom Helm. Jarvis, P., Holford, J., & Griffin, C. (2003). The theory & practice of learning (2nd ed.). London ; Sterling, VA: Kogan Page. Jones, T. S., & Richey, R. C. (2000). Rapid prototy ping methodology in action: A developmental study. Educational Technology Research and Development, 48 (2), 63-80. Kirschner, P. A., Sweller, J., & Clark, R. E. (2006 ). Why minimally guided instruction does not work. Educational Psychologist, 41 (2), 75-86. Kozma, R. B. (1991). Learning with Media. Review of Educational Research, 61 (2), 179211.

PAGE 241

226 Kritch, K. M., & Bostow, D. E. (1998). Degree of co nstructed-response interaction in computer-based programmed instruction. Journal of Applied Behavior Analysis, 31 (3), 387-398. Lane, D. M., & Tang, Z. (2000). Effectiveness of Si mulation Training on Transfer of Statistical Concepts. Journal of Educational Computing Research, 22 (4), 383396. Lee, J. (1999). Effectiveness of Computer-Based Ins tructional Simulation: A Meta Analysis. International Journal of Instructional Media, 26 (1), 71-85. Lockee, B., Moore, D., & Burton, J. (2004). Foundat ions of programmed instruction. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (2nd ed., pp. 1099-1130). Mahwah, N.J.: Lawrence E rlbaum. McDonald, J. K., Yanchar, S. C., & Osguthorpe, R. T (2005). Learning from programmed instruction: Examining implications for modern instructional technology. Educational Technology Research and Development, 53 (2), 84-98. McGraw, K. O., & Wong, S. P. (1996). Forming infere nces about some intraclass correlation coefficients. Psychological Methods, 1 (1), 30-46. McKenney, S., & van der Akker, J. (2005). Computerbased support for curriculum designers: A case of developmental research. Educational Technology Research and Development, 53 (2), 41-66. Merriam, S. B., & Caffarella, R. S. (1999). Learning in adulthood : a comprehensive guide (2nd ed.). San Francisco: Jossey-Bass Publishers. Miller, D. W. (1999). The Black Hole of Education R esearch. Chronicle of Higher Education, 45 (48), A17-A18. Miller, M. L., & Malott, R. W. (1997). The importan ce of overt responding in programmed instruction even with added incentives f or learning. Journal of Behavioral Education, 7 (4), 497-503. Mills, J. D. (2004). Learning Abstract Statistics C oncepts Using Simulation. Educational Research Quarterly, 28 (4), 18-33. Molenda, M. (2003). In Search of the Elusive ADDIE Model. Performance Improvement, v42 n5 May-Jun 2003 Monteiro, C., & Ainley, J. (2002). Exploring Critic al Sense in Graphing. Proceedings of the British Society for Research into Learning Math ematics, 22 (3), 61-66.

PAGE 242

227 Morrison, G. R., Ross, S. M., & Kemp, J. E. (2004). Designing effective instruction (4th ed.). Hoboken, New Jersey: John Wiley & Sons, Inc. Papanikolaou, K. A., Grigoriadou, M., Kornilakis, H ., & Magoulas, G. D. (2003). Personalizing the Interaction in a Web-based Educat ional Hypermedia System: the case of INSPIRE. User Modeling and User-Adapted Interaction, 13 (3), 213267. Park, O.-c., & Lee, J. (2004). Adaptive instruction al systems. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (2nd ed., pp. 651-684). Mahwah, N.J.: Lawrence Erlbaum. Pimentel, J. R. (1999). Design of net-learning syst ems based on experiential learning. Journal of asynchronous learning networks, 3 (2), 64-90. Ray, R. D. (1995a). A behavioral systems approach t o adaptive computerized instructional design. Behavior Research Methods, Instruments, & Computers 27 (2), 293-296. Ray, R. D. (1995b). MediaMatrix: An authoring syste m for adaptive hypermedia teaching-learning resource libraries. Journal of Computing in Higher Education, 7 (1), 44-68. Reeves, T. C. (1995). Questioning the Questions of Instructional Technolo gy Research. Paper presented at the Proceedings of the 1995 Annu al National Convention of the Association for Educational Communications and Technology AECT), Anaheim, Ca. Reeves, T. C. (2000a). Enhancing the worth of instructional technology res earch through "design experiments" and other development research strategies. Paper presented at the Symposium on International Perspectives on I nstructional Technology Research for the 21st Century., New Orleans, LA. Reeves, T. C. (2000b). Socially Responsible Educati onal Technology Research. Educational Technology, 40 (6), 19-28. Reeves, T. C. (2003). Storm Clouds on the Digital E ducation Horizon. Journal of Computing in Higher Education, 15 (1), 3-26. Reiber, L. P. (2004). Microworlds. In D. H. Jonasse n (Ed.), Handbook of research for educational communications and technology (2nd ed., pp. 583-603). Mahwah, N.J.: Lawrence Erlbaum.

PAGE 243

228 Reigeluth, C. M., & Frick, T. W. (1999). Formative research: A Methodology for Creating and Improving Design Theories. In C. M. Re igeluth (Ed.), InstructionalDesign Theories and Models: A New Paradigm of Inst ructional Theory (Vol. 2, pp. 633-651). Mahwah, N.J.: Lawrence Erlbaum Associ ates. Reynolds, C. R., Livingston, R. B., & Willson, V. ( 2006). Measurement and assessment in education Boston, MA: Allyn & Bacon. Richey, R. C., Klein, J. D., & Nelson, W. A. (2004) Developmental research: Studies of instructional design and development. In D. H. Jona ssen (Ed.), Handbook of research for educational communications and technol ogy (2nd ed., pp. 10991130). Mahwah, N.J.: Lawrence Erlbaum. Roth, W.-M., & McGinn, M. K. (1997). Graphing: Cogn itive Ability or Practice? Science Education, 81 (1), 91-106. Schunk, D. H. (2000). Learning theories : an educational perspective (3rd ed.). Upper Saddle River, N.J.: Merrill/Prentice-Hall. Seels, B., & Richey, R. (1994). Instructional technology : The definition and domai ns of the field Washington, D.C.: Association for Educational Com munications and Technology. Shute, V. J., & Psotka, J. (1996). Intelligent tuto ring systems: past, present and future. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology : a project of the association for educa tional communications and technology (pp. 570-600). New York: Macmillan Library Referen ce USA. Skinner, B. F. (1954). The science of learning and the art of teaching. Harvard educational review, 24 86-97. Skinner, B. F. (1957). Verbal behavior New York,: Appleton-Century-Crofts. Skinner, B. F. (1958). Teaching Machines. Science, 128 969-977. Skinner, B. F. (1972). Beyond freedom and dignity Toronto ; New York: Bantam Books. Skinner, B. F. (1986). Programmed instruction revis ited. Phi Delta Kappan, 68 (2), 103110. Smith, L. D., Best, L. A., Stubbs, D. A., Johnston, J., & Archibald, A. B. (2000). Scientific Graphs and the Hierarchy of the Sciences Social Studies of Science, 30 (1), 73-94.

PAGE 244

229 Smith, M. K. (1999). Learning theory. Retrieved O ctober 10, 2005, from http://www.infed.org/biblio/b-learn.htm Smith, P. L., & Tillman, R. J. (2005). Instructional design (5th ed.). Hoboken, New Jersey: John Wiley & Sons, Inc. Song, S. H., & Keller, J. M. (2001). Effectiveness of motivationally adaptive computerassisted instruction on the dynamic aspects of moti vation. Educational Technology Research and Development, 49 (2), 5-22. Spelman, M. D. (2002). GLOBECORP: Simulation versus Tradition. Simulation & Gaming, 33 (3), 376-394. Steinberg, R. N. (2000). Computers in teaching scie nce: To simulate or not to simulate? American Journal of Physics, 68 (S1), S37-S41. Stemler, S. E. (2004). A comparison of consensus, c onsistency, and measurement approaches to estimating interrater reliability, Practical Assessment, Research & Evaluation (Vol. 9). Stoyanov, S., & Kirschner, P. (2004). Expert Concep t Mapping Method for Defining the Characteristics of Adaptive E-Learning: ALFANET Pro ject Case. Educational Technology Research and Development, 52 (2), 41-56. Tomporowski, P. D. (2003). The psychology of skill : a life-span approach Westport, Conn.: Praeger. Trochim, W. M. (2002, January 16, 2005). Research M ethods Knowledge Base. Retrieved March 6, 2006, from http://www.socialrese archmethods.net/kb/ Tudor, R. M. (1995). Isolating the Effects of Activ e Responding in Computer-Based Instruction. Journal of Applied Behavior Analysis, 28 (3), 343-344. Tudor, R. M., & Bostow, D. E. (1991). Computer-Prog rammed Instruction: The Relation of Required Interaction to Practical Application. Journal of Applied Behavior Analysis, 24 (2), 361-368. van den Akker, J. (1999). Principles and methods of development research. In J. van den Akker, R. M. Branch, K. Gustafson, N. Nieveen & T. Plomp (Eds.), Design approaches and tools in education and training (pp. 1-14). Dordrecht ; Boston: Kluwer Academic Publishers. Wiesner, T. F., & Lan, W. (2004). Comparison of stu dent learning in physical and simulated unit operations experiments. Journal of Engineering Education, 93 (3), 195-204.

PAGE 245

230 Wilson, B. G. (2005). Foundations for instructional design: Reclaiming the conversation. In J. M. Spector, C. Ohrazda, A. Van Schaack & D. A Wiley (Eds.), Innovations in instructional technology: Essays in honor of M. David Merrill (pp. 237-252). Mahwah, New Jersey: Lawrence Erlbaum Associates. Zemke, R., & Rossett, A. (2002). A Hard Look at ISD Training, 39 (2), 26-28,30-32,34.

PAGE 246

231 Appendices

PAGE 247

232 Appendix A: The Tutorials and Major Goals The Control And Measurement Of Behavior o Experimentation o Measurement is an important aspect of the science o f behavior o Controlling behavior involves the manipulation of e nvironmental variables o It is important to remain in contact with behavior The Importance Of Graphing o Data and graphing o The advantages of visual displays of data o statistical procedures versus visual presentations o Feedback and its importance o Variations in data o Interpretation of data Basic Graphing o Axes o Coordinates o Point o The origin o Scale of axes o Hatch marks o Slope of a line o The title and legend o Graphing data

PAGE 248

233 Appendix A: (Continued) Behavioral Graphing Concepts o Data scale and path o Scale breaks o Dependent and independent variables o Functional relationship o Trends o The importance of time as a unit of measure The Cumulative Graph o The cumulative record o Upward slope of the depth o Difficulty in reading o Rate of responding and its effect on the graph The Multiple Baseline Graph o Graphing multiple datasets o Starting at the origin o Phases and their indications o The indication of special events

PAGE 249

234 Appendix B: Guidelines for the Development of the Alternate-Choice Items 1. Avoid including more than one idea in the statement 2. Avoid specific determiners and qualifiers that migh t serve as cues to the answer. 3. Ensure that true and false statements are of approx imately the same length. 4. Avoid negative statements 5. Avoid long and/or complex statements. 6. Include an approximately equal number of true and f alse statements. 7. Avoid including the exact wording from the textbook

PAGE 250

235 Appendix C: Checklist for the Development of the Alternate-Choice Items 1. Does each statement include only one idea? ________ 2. Have you avoided using specific determiners and qua lifiers that could serve as cues to the answer? ________ 3. Are true and false statements of approximately the same length? ________ 4. Have you avoided negative statements? ________ 5. Have you avoided long and complex statements? _____ ___ 6. Is there an approximately equal number of true and false statements? ________ 7. Have you avoided using the exact wording from the t extbook? ________

PAGE 251

236 Appendix D: Guidelines for the Development of the Multiple-Choice Items 1. Use a printed format that makes the item as clear a s possible. 2. Have the item stem contain all the information nece ssary to understand the problem or question. 3. Provide between three and five alternatives. 4. Keep the alternatives brief and arrange them in an order that promotes efficient scanning. 5. Avoid negatively stated stems in most situations. 6. Make sure only one alternative is correct or repres ents the best answer. 7. Avoid cues that inadvertently identify the correct answer. 8. Make sure all alternatives are grammatically correc t relative to the stem. 9. Make sure no item reveals the answer to another ite m. 10. Have all distracters appear plausible. 11. Use alternative positions in a random manner for th e correct answer. 12. Minimize the use of "none of the above" and avoid u sing "all of the above." 13. Avoid artificially inflating the reading level. 14. Limit the use of always and never in the alternativ es. 15. Avoid using the exact phrasing from the text. 16. Organize the test in a logical manner. 17. Give careful consideration to the number of items o n your test. 18. Be flexible when applying these guidelines

PAGE 252

237 Appendix E: The Print Format Guidelines 1. Provide brief but clear directions. Directions shou ld include how the selected alternative should be marked. 2. The item stem should be numbered for easy identific ation, while the alternatives are indented and identified with letters. 3. Either capital or lowercase letters followed by a p eriod or parenthesis can be used for the alternatives. If a scoring sheet is used, make the alternative letters on the scoring sheet and the test as similar as possible. 4. There is no need to capitalize the beginning of alt ernatives unless they begin with a proper name. 5. When the item stem is a complete sentence, there sh ould not be a period at the end of the alternatives. 6. When the stem is in the form of an incomplete state ment with the missing phrase at the end on the sentence, alternatives should end wi th a period. 7. Keep the alternatives in a vertical list instead of placing them side by side because it is easier for students to scan a vertical list quic kly. 8. Use correct grammar and formal language structure i n writing items. 9. All items should be written so that the entire ques tion appears on one page.

PAGE 253

238 Appendix F: Checklist for the Development of the Multiple-Choice Items 1. Are the items clear and easy to read? ________ 2. Does the item stem clearly state the problem or que stion? ________ 3. Are there between three and five alternatives? ____ ____ 4. Are the alternatives brief and arranged in an order that promotes efficient scanning? ________ 5. Have you avoided negatively stated stems? ________ 6. Is there only one alternative that is correct or re presents the best answer? ________ 7. Have you checked for cues that accidentally identif y the correct answer? ________ 8. Are all alternatives grammatically correct relative to the stem? ________ 9. Have you checked to make sure no item reveals the a nswer to another item? ________ 10. Do all distracters appear plausible? ________ 11. Did you use alternative positions in a random manne r for the correct answer? ________ 12. Did you minimize the use of "none of the above" and avoid using "all of the above"? ________ 13. Is the reading level appropriate? ________ 14. Did you limit the use of always and never in the al ternatives? ________

PAGE 254

239 Appendix F: (Continued) 15. Did you avoid using the exact phrasing from the tex t? ________ 16. Is the test organized in a logical manner? ________ 17. Can the test be completed in the allotted time peri od? ________

PAGE 255

240 Appendix G: Guidelines for the Development of the Short-Answer Items 1. Structure the item so that the response is as short as possible. 2. Make sure there is only one correct response. 3. Use the direct-question format in preference to the incomplete-sentence format. 4. Have only one blank space when using the incomplete -sentence format, preferably near the end of the sentence. 5. Avoid unintentional cues to the answer. 6. Make sure the blanks provide adequate space for the student's response. 7. Indicate the degree of precision expected in questi ons requiring quantitative answers. 8. Avoid lifting sentences directly out of the textboo k and converting them into shortanswer items. 9. Create a scoring rubric and consistently apply it.

PAGE 256

241 Appendix H: Checklist for the Development of the Short-Answer Items 1. Does the item require a short response? ________ 2. Is there only one correct response? ________ 3. Did you use an incomplete sentence only when there was no loss of clarity relative to a direct question? ________ 4. Do incomplete sentences contain only one blank? ___ _____ 5. Are blanks in incomplete sentence near the end of t he sentence? ________ 6. Have you carefully checked for unintentional cues t o the answer? ________ 7. Do the blanks provide adequate space for the answer s? ________ 8. Did you indicate the degree of precision required f or quantitative answers? ________ 9. Did you avoid lifting sentences directly from the t extbook? ________ 10. Have you created a scoring rubric for each item? __ ______

PAGE 257

Appendix I: Checklist for the Development of the Skills Assessment Items
1. Are the statements or questions an accurate representation? ________
2. Is the item appropriate and relevant to test specifications? ________
3. Are there technical item-construction flaws? ________
4. Did you use correct grammar? ________
5. Did you avoid offensive or biased language? ________
6. Is the level of readability appropriate? ________

PAGE 258

243 Appendix J: Guidelines for the Development of the Survey Items 1. Put statements or questions in the present tense. 2. Do not use statements that are factual or capable o f being interpreted as factual. 3. Avoid statements that can have more than one interp retation. 4. Avoid statements that are likely to be endorsed by almost everyone or almost no one. 5. Try to have an almost equal number of statements ex pressing positive and negative feelings. 6. Statements should be short, rarely exceeding 20 wor ds. 7. Each statement should be a proper grammatical sente nce. 8. Statements containing universals such as all, alway s, none, and never often intro-duce ambiguity and should be avoided. 9. Avoid use of indefinite qualifiers such as only, ju st, merely, many, few, or seldom. 10. Whenever possible, statements should be in simple s entences rather than complex or compound sentences. Avoid statements that contain if" or "because" clauses. 11. Use vocabulary that can be understood easily by the respondents. 12. Avoid use of negatives (e.g., not, none, never).

PAGE 259

244 Appendix K: Checklist for the Development of the Survey Items 1. Are the statements or questions in the present tens e? ________ 2. Did you avoid using statements that are factual or capable of being interpreted as factual? ________ 3. Did you avoid statements that can have more than on e interpretation? ________ 4. Did you avoid statements that are likely to be endo rsed by almost everyone or almost no one? ________ 5. Are there almost an equal number of statements expr essing positive and negative feelings? ________ 6. Are the statements short? ________ 7. Is each statement a proper grammatical sentence? __ ______ 8. Did you avoid statements containing universals such as all always none and never ? ________ 9. Did you avoid using indefinite qualifiers such as only just merely many few or seldom ? ________ 10. Did you use simple sentences? ________ 11. Did you avoid statements that contain "if" or "beca use" clauses? ________ 12. Did you use vocabulary that can be easily understoo d? ________ 13. Did you avoid use of negatives (e.g., not none never )? ________

PAGE 260

245 Appendix L: Wrong Responses for each Frame of eac h Tutorial Task Table 1 The Task Names and Task Numbers Used in Table 2 Task Task Number Primer 1 Primer Practice Task 2 Pretest 3 Pre-Survey 4 Graphing Proficiency Test 5 Basic Graphing 6 Basic Graphing Practice Task 7 The Control And Measurement Of Behavior 8 The Importance Of Graphing 9 Behavioral Graphing Concepts 10 The Cumulative Graph 11 The Cumulative Graph Practice Task 12 The Multiple Baseline Graph 13 The Multiple Baseline Graph Practice Task 14 Post-Survey 15 Posttest 16

PAGE 261

246 Appendix L: (Continued) Table 2 The Number of Wrong Responses for each Frame of eac h Tutorial Task Task (see Table 1 for task names) Frame 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 0 2 245 1 0 1 3 50 2 0 3 3 20 3 23 0 3 1 0 35 42 0 10 3 20 5 43 34 3 17 48 6 0 26 2 29 19 50 0 33 19 20 10 67 65 19 16 30 7 0 40 3 2 13 40 0 11 72 6 18 43 35 63 13 3 9 0 35 4 0 9 16 38 35 17 52 15 21 13 14 0 30 0 11 5 24 41 30 12 13 23 12 12 33 8 22 5 0 37 6 16 41 19 7 14 65 58 89 4 27 8 2 0 41 7 2 37 35 7 24 52 15 47 18 3 18 7 0 32 8 62 29 15 7 13 26 14 16 11 4 26 10 0 29 9 32 27 27 6 10 11 2 21 19 11 20 4 0 36 10 0 7 27 10 5 48 24 13 5 142 48 7 0 9 11 39 5 0 16 50 64 24 70 87 80 11 53 5 12 54 15 0 1 58 11 15 33 31 25 112 12 13 12 29 47 35 54 22 84 18 9 20 34 14 4 30 48 20 55 57 20 40 23 12 35 15 45 11 18 22 14 15 40 32 10 10

PAGE 262

247 Appendix L: (Continued) Task (see Table 1 for task names) Frame 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 16 88 18 53 23 31 31 63 86 87 10 17 1 29 21 18 30 61 28 32 30 27 18 28 26 57 1 0 60 62 17 46 35 19 61 26 1 8 9 11 24 62 18 31 20 47 13 38 23 10 30 10 9 8 21 52 25 16 40 7 36 12 4 6 22 24 13 21 25 27 17 13 1 8 23 33 4 68 17 3 7 27 7 4 24 34 29 85 23 9 9 31 14 37 25 25 4 3 79 2 36 14 69 3 26 11 19 72 28 37 37 6 27 14 24 37 31 49 22 13 28 5 22 6 8 3 11 9 29 30 68 54 8 5 9 26 30 14 13 28 7 2 10 31 27 69 11 37 5 19 32 7 60 25 12 7 6 33 5 4 3 17 29 9

PAGE 263

248 Appendix L: (Continued) Task (see Table 1 for task names) Frame 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 34 5 8 33 35 10 6 35 14 13 53 21 5 9 36 30 80 24 15 12 36 37 19 12 29 26 9 7 38 21 64 19 22 14 17 39 25 9 59 22 18 34 40 2 21 4 3 16 8 41 23 33 9 42 1 19 17 43 19 7 44 13 45 9 46 29 47 17 48 16 49 62 50 50

PAGE 264

249 Appendix L: (Continued) Task (see Table 1 for task names) Frame 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 51 11 52 44 53 5 54 15 55 16 56 20 57 53 58 41 59 42 60 7 61 4 62 15 63 32 64 4 65 55 66 44 67 39

PAGE 265

250 Appendix L: (Continued) Task (see Table 1 for task names) Frame 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 68 29 69 16 70 25 71 55 72 86 73 0

PAGE 266

251 Appendix M: The Final Version of the Items in the Knowledge Assessment 1. A Cumulative Graph can go up or down, depending on the data being used. (Type true or false) 2. The ______ ______ is a concise statement that, in c ombination with the axis and phase/condition labels, provides the reader with su fficient information to identify the independent and dependent variables. 3. Applied behavior analysis is characterized by the s earch for and demonstration of experimental ______ over socially important behavio r. 4. On a Cumulative Graph, response rates are compared with one another by comparing the slope of each data path. (Type true or false) 5. The Y axis (ordinate) represents the dimension of behavior. It represents the ______ variable. 6. ______ lines in a graph usually indicate a change i n treatments. 7. Experimental control is achieved when predictable a nd quantifiable ______ in an individual's behavior can be reliably and repeatedl y produced by the systematic manipulation of some aspect of the person's environ ment. 8. Data points can be placed on either axes without ca using distortion. (Type true or false) 9. Control is shown by a change in the independent var iable when we manipulate environmental variables. (Type true or false) 10. Visual analysis focuses upon both variability in da ta and trends. (Type true or false) 11. Multiple data paths can be used to represent measur es of the same behavior taken under different experimental conditions. (Type true or false)

PAGE 267

252 Appendix M: (Continued) 12. A ______ reveals, in a more durable way, the effect s of a procedure, suggests whether to try something new, or suggests whether t o reinstitute a previous condition. 13. Quantitative measurement, for example surveys, is c ritically important in a science of behavior. (Type true or false) 14. In order to objectively document and quantify behav ior change, direct and repeated ______ of behavior is conducted. 15. In applied behavior analysis, the Pie Chart is the most common form of data display. (Type true or false) 16. ______ is the frequency of responses emitted per un it of time, usually reported as responses per minute in applied behavior analysis. 17. A data ______ is created by connecting successive d ata points within a given phase or condition with a straight line. 18. Gradual changes in ______ from one rate to another can be hard to detect on cumulative graphs. 19. A phase change line indicates major changes in the dependent variable. (Type true or false) 20. Making valid and reliable decisions from the raw da ta is extremely difficult. (Type true or false) 21. In a ______ record, the Y-axis value of any data po int represents the total number of responses recorded since the beginning of data coll ection. 22. Behavior analysis takes the deterministic point of view. This philosophical point of view assumes that there are ______ for all events.

PAGE 268

253 Appendix M: (Continued) 23. A behavior analyst must maintain direct and continu ous contact with the behavior under investigation otherwise s/he cannot scrutiniz e it and its causes carefully. (Type true or false) 24. In a multi-tiered graph, the data from multiple ind ividuals are often stacked ______ within the same graph. 25. In a science of behavior we manipulate environmenta l variables while we measure changes in behavior. (Type true or false) 26. On a Cumulative Graph, a steep slope indicates a lo w rate of responding. (Type true or false) 27. A ______ variable is what you measure to learn whet her there is an effect in a possible cause/effect relationship. 28. On multiple-tier graphs more than one label identif ying the behavioral measure is appropriate. (Type true or false) 29. ______ analysis is used to determine whether a chan ge in behavior is consistent and related to the treatment. 30. Data points that fall on either side of a phase cha nge line should be connected. (Type true or false) 31. A ______ may be described as a major change in rein forcement contingencies, while a "condition" may be a minor variation of that phas e's contingency. 32. The X axis is a straight, horizontal line that repr esents the passage of ______ during repeated measures.

PAGE 269

254 Appendix M: (Continued) 33. Inaccurate placement of data points is an unnecessa ry source of distortion in graphic displays. (Type true or false) 34. People who are unfamiliar with Cumulative Records f ind them hard to read because they do not go downward when behavior ceases. 35. When multiple data paths are displayed on the same graph, only one line style should be used for the data paths. (Type true or false) 36. The ______ should use the same terms or phrases fou nd in the textual discussion of the procedure accompanying the graph. 37. One look at the most recent data point on a ______ graph reveals the total amount of behavior up to that point in time. 38. The graphic display of ______ allows and encourages independent judgments and interpretations of the meaning and significance of behavior change. 39. Taking averages of performances during various cond itions and plotting them would reveal trends in the data. (Type true or false) 40. Time is a variable in all experiments and should no t be distorted arbitrarily in a graphic display. (Type true or false)

PAGE 270

Appendix N: Itemized Summary of the Posttest Data

Item  Item Type  Correct Answers  Wrong Answers  Item Difficulty Index  r_pb  p (2-tailed)
1   Alternate-Choice  21  14  .60   .59  .00
2   Short-Answer      10  25  .29   .36  .03
3   Short-Answer      16  19  .46   .53  .00
4   Alternate-Choice  34   1  .97  -.01  .96
5   Short-Answer       7  28  .20  -.08  .67
6   Short-Answer       7  28  .20  -.04  .81
7   Short-Answer      11  24  .31   .13  .46
8   Alternate-Choice  21  14  .60   .60  .00
9   Alternate-Choice   8  27  .23   .41  .01
10  Alternate-Choice  33   2  .94  -.04  .82
11  Alternate-Choice  33   2  .94   .41  .01
12  Short-Answer      27   8  .77   .36  .03
13  Alternate-Choice   4  31  .11   .37  .03
14  Short-Answer      10  25  .29   .26  .13
15  Alternate-Choice  28   7  .80   .48  .00
16  Short-Answer      29   6  .83   .31  .07
17  Short-Answer      11  24  .31   .43  .01

PAGE 271

Appendix N: (Continued)

Item  Item Type  Correct Answers  Wrong Answers  Item Difficulty Index  r_pb  p (2-tailed)
18  Short-Answer       5  30  .14   .36  .04
19  Alternate-Choice   8  27  .23   .29  .09
20  Alternate-Choice  31   4  .89   .27  .11
21  Short-Answer      32   3  .91   .38  .02
22  Short-Answer      32   3  .91   .50  .00
23  Alternate-Choice  34   1  .97   .19  .28
24  Short-Answer       6  29  .17   .40  .02
25  Alternate-Choice  35   0  1.00
26  Alternate-Choice  32   3  .91   .55  .00
27  Short-Answer      26   9  .74   .02  .92
28  Alternate-Choice  30   5  .86   .09  .59
29  Short-Answer      15  20  .43   .09  .60
30  Alternate-Choice  28   7  .80   .52  .00
31  Short-Answer      22  13  .63   .47  .00
32  Short-Answer      32   3  .91   .55  .00
33  Alternate-Choice  31   4  .89   .07  .71
34  Alternate-Choice  32   3  .91  -.01  .94

PAGE 272

Appendix N: (Continued)

Item  Item Type  Correct Answers  Wrong Answers  Item Difficulty Index  r_pb  p (2-tailed)
35  Alternate-Choice  29   6  .83   .45  .01
36  Short-Answer       4  31  .11   .18  .30
37  Short-Answer      33   2  .94   .19  .28
38  Short-Answer      21  14  .60   .40  .02
39  Alternate-Choice   7  28  .20   .30  .08
40  Alternate-Choice  33   2  .94  -.01  .95
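For reference, the two indices reported in the table are the classical item-analysis statistics. The formulas below are the standard definitions (as presented, for example, in Crocker & Algina, 1986); it is assumed, rather than documented here, that these are the exact computations used for the table.

\[
p_j = \frac{n_j}{N}
\qquad
r_{pb} = \frac{\bar{X}_{+} - \bar{X}}{s_X}\,\sqrt{\frac{p_j}{1 - p_j}}
\]

Here n_j is the number of the N = 35 participants answering item j correctly, X-bar-plus is the mean posttest total score of those participants, X-bar and s_X are the mean and standard deviation of the posttest total scores for all participants, and the two-tailed p value tests whether the point-biserial correlation differs from zero.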

PAGE 273

Appendix O: The Skills Assessment Items

Item 1: A teacher was concerned about a particular child's use of profanity towards other students. The teacher decided to gather data on the behavior, noting the occurrences of profanity each day for two weeks. The result was:

Day  Occurrences
1    2
2    1
3    3
4    0
5    2
6    1
7    0
8    3
9    1
10   2

Construct a Cumulative graph of the data.

Item 2: A high school librarian was concerned about incidences of loud noises in the library during a specific time of the day. The librarian decided to try a simple solution and play classical music during this particular period.

PAGE 274

Appendix O: (Continued)

As a baseline, the librarian collected data for 5 days, counting the number of noise incidences each day. The following week, classical music was played for 5 days and the librarian again counted the number of noise incidences. During the third week, classical music was not played for the first 3 days, but it was played during the last two days. The following data were collected:

(Baseline: 6, 9, 7, 9, 10)
(Classical Music: 5, 4, 4, 2, 3)
(Baseline: 8, 10, 7)
(Classical Music: 4, 3)

Construct a graph of the data.
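Because Item 1 asks for a cumulative graph, it may help to show the running totals that a correct response would plot. The short program below is an illustrative sketch only; it is not part of LOGIS or of the grading rubric, and it simply accumulates the Item 1 data.

public class CumulativeRecordDemo {
    public static void main(String[] args) {
        // Daily occurrences of profanity from Appendix O, Item 1 (days 1-10)
        int[] occurrences = {2, 1, 3, 0, 2, 1, 0, 3, 1, 2};

        // A cumulative record plots the running total, so the curve never decreases
        int runningTotal = 0;
        for (int day = 0; day < occurrences.length; day++) {
            runningTotal += occurrences[day];
            System.out.printf("Day %d: cumulative responses = %d%n", day + 1, runningTotal);
        }
        // Expected Y-values: 2, 3, 6, 6, 8, 9, 9, 12, 13, 15
    }
}

The printed Y-values (2, 3, 6, 6, 8, 9, 9, 12, 13, 15) never decrease, which is the defining property of the cumulative record discussed in the tutorials.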

PAGE 275

260 Appendix P: The Graphing Proficiency Items 1. The horizontal axis is the ______ axis 2. The X and Y ______ of a point describe the point's location on a 2-dimensional plane. 3. ______ is the X coordinate of the origin. (Use num bers) 4. The axes of a graph are ______, this means that the y form a ______ (use numbers) angle at the point where they intersect. 5. It is important to label all tick marks. (Type true or false) 6. (2,4) (is/is not) ______ a valid representation of a pair of coordinates. 5-5 (is/is not) ______. 7. The ______ coordinate system describes a point usi ng 2 numbers. 8. A line is described by at least ______ points. (Use numbers) 9. A line segment is a part of a line bounded by 2 poi nts. A ______ is a line that starts at 1 point and extends to infinity. 10. Tick marks can be placed at different intervals alo ng an axis. (Type true or false)

PAGE 276

261 Appendix Q: Actual Responses from in the One-To-O ne Evaluation Group Question Participant 1 Participant 2 Participant 3 1 Yes No No 2 Yes No No 3 Yes Yes Some what 4 (no reaction) Very effective Very effective 5 The pace was fine Steady pace Slow 6 Yes Yes Yes 7 Yes Yes Yes 8 My overall reaction was misunderstood… (remaining reaction refers to the course content) At start a little difficult to understand Good 9 None really Helps user by allowing user to reinput correct reply Detailed 10 Not enough details on why graphing was important Directions could be clearer/easier to understand Time consuming

PAGE 277

262 Appendix R: A Summary of the Verbal Responses fro m Participants in the One-To-One Evaluation Group Topic Participant 1 Participant 2 Participant 3 Interface 1. Text too small. 2. Difficult to graph with mouse. 3. Some options were shaded and confusing. Unclear, not sure what to do next, but it became clear after a while. 1. Usable but crowded. 2. No time for freelance. Content 1. Good but not enough details. 2. Learn some things from the tutorials 3. Too long. Content was not hard. 1. Hard 2. Spelling errors 3. Pictures need to remain visible (no scrolling) Functionality Freelance was not mandatory so it was avoided Program did not work initially. 1. Overview needed 2. Make instruction precise and concise. Likes 1. Informative. 2. Opportunity to actually graph. 3. Graphing screen showed when the answer was wrong. Review Screen after incorrect answers. Tutorials move from one to the next easily.

PAGE 278

263 Appendix R: (Continued) Topic Participant 1 Participant 2 Participant 3 Dislikes 1. Instruction text size was too small. 2. Seeing the frame score was intimidating. 3. Took too long. 4. Would prefer more modules and fewer frames per module. Too long, would prefer more tutorials and fewer frames. Lay words in the tutorials. Changes 1. Increase text sizes. 2. No scrolling. 1. Increase text size. 2. clarify options. 1. Add an overview. 2. Streamline instruction.

PAGE 279

264 Appendix S: A Summary of the Small-Group Evaluati on Responses 1. Did you have any problems with specific words or ph rases? Ten participants reported no problems with specific words or phrases. The 3 participants who reported having problems did not specify the wo rds or phrases that caused the problems. 2. Did you have any problems with specific sentences? Ten participants reported no problems with specific sentences. The three participants who reported having problems did not specify the se ntences that caused the problems. One participant noted that a few sentences had erro rs, and another learner noted that some sentences could have been more precise. 3. Did you understand the themes presented in the inst ruction? Twelve participants reported that they understood t he themes, while one participant reported that they only somewhat understood the the mes. 4. How effective was the sequencing of the instruction ? All 13 participants reported that the instruction w as well sequenced and effective. One participant mentioned that the instruction was too long and repetitive, and another participant noted that it was not bad but could hav e been better. The participant did not specify in what ways the instruction could have bee n better. 5. How would you describe the delivery pace of the ins truction? Eleven responses described the instruction pace as good. The 2 participants who did not report the pace as being good noted that the instru ction was at times choppy and a little vague, and that there were too many frames. Four p articipants who reported that the pace was good also mentioned that the instruction was to o long.

PAGE 280

265 Appendix S: (Continued) 6. Was sufficient content presented in each section? Eleven participants reported that there was suffici ent content in each section, while the remaining two participants were uncertain about eit her the question or their responses to the question. 7. Was there sufficient time to complete the instructi on? Twelve participants reported that they had enough t ime to complete the instruction, while one participant reported that the time was not suff icient. 8. What is your overall reaction to the instruction? All 13 participants reacted positively to the instr uction, reporting that the instruction was effective and organized. In addition to the reacti ng positively, 6 participants added that the instruction was too lengthy. 9. What are specific strengths of the instruction? Specific strengths included: vocabulary and detaile d explanations, clarity and emphasis on terms, allows active working, step-by-step instr uction, simple and easy, the ability to repeat modules, and the use of repetition and pictu res. 10. What are specific weaknesses of the instruction? Specific weakness included: length, some vague sent ences, animation and color, and some unclear instruction. Most participants report ed that the instruction was too long. One participant noted that they would have preferre d fewer frames and more modules.

PAGE 281

266 Appendix T: LOGIS Java Code Email the author for the LOGIS Java code.

PAGE 282

About the Author

Darrel Davis was born in Belize City, Belize, but he spent much of his childhood in the capital city, Belmopan. He completed his Bachelor of Science in Mathematics Education at the University College of Belize, then earned a Master of Science in Computer Science at the University of South Florida. His interest in technology and teaching led him to the Instructional Technology program in the College of Education at the University of South Florida.

Darrel has taught at the high school level and at the university level. He has taught undergraduate students and graduate students and has extensive experience with online learning.

Mr. Davis is currently a Heanon Wilkins Fellow at Miami University of Ohio.

