USF Libraries
USF Digital Collections

Full-page versus partial-page screen designs in web-based training


Material Information

Title:
Full-page versus partial-page screen designs in web-based training: their effects on learner satisfaction and performance
Physical Description:
Book
Language:
English
Creator:
Grace, Phillip Eulon
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla
Publication Date:

Subjects

Subjects / Keywords:
Scrolling
WBT
Interface design
CBI
Paging
Usability
Instructional design
Non-scrolling
Computer-based instruction
Web-based instruction
Screen layout
Dissertations, Academic -- Secondary Education -- Doctoral -- USF
Genre:
bibliography (marcgt)
theses (marcgt)
non-fiction (marcgt)

Notes

Abstract:
ABSTRACT: This is a report on research regarding the screen layout of Web-based training (WBT) programs, conducted with an eye toward providing evidence-based guidance for the design and development of WBT interfaces. Specifically, the study investigated the relative instructional benefits of two general types of WBT screen design, full-page and partial-page, in terms of both learner performance and learner satisfaction. The main hypotheses of the study were that the full-page design option would yield significantly better outcomes in both categories of interest. The study employed a mixed-method design, generating both quantitative and qualitative data. The main phase of the study was experimental, following a factorial design to explore the relationships between a single treatment variable (WBT screen design) in two treatment conditions (partial-page WBT design and full-page WBT design) and two dependent variables (learner performance and learner satisfaction). Both a full-page and a partial-page version of the same Web-based tutorial were created, and 129 self-selected undergraduate students who reported having little or no experience with the tutorial subject matter were randomly assigned into the two treatment groups. Performance data were collected as scores on the tutorial's 18-item, multiple-choice final exam, and satisfaction data were collected via a 10-item satisfaction survey. In addition, 59 of the study participants were randomly selected to participate in post-study session interviews. The results of the study yielded no significant difference between the two treatment groups for either learner performance or learner satisfaction, thus making it impossible to reject the null hypothesis for either of the two primary research questions. The conclusion of this study was that the presence or absence of scrolling alone is not a significant factor either in how well a person performs in a WBT program or how satisfied they are with the learning experience. However, while analysis of the post-study session interview data supported this conclusion, the fact that a large majority of the interviewees stated a preference for the full-page, non-scrolling WBT interface design suggests that some elements inherent in the full-page design might warrant further consideration and/or study.
Thesis:
Dissertation (Ph.D.)--University of South Florida, 2005.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Phillip Eulon Grace.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 233 pages.
General Note:
Includes vita.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001790631
oclc - 145202989
usfldc doi - E14-SFE0001520
usfldc handle - e14.1520
System ID:
SFS0025838:00001




Full Text

Full-Page Versus Partial-Page Screen Designs in Web-Based Training: Their Effects on Learner Satisfaction and Performance

by

Phillip Eulon Grace

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Secondary Education, College of Education, University of South Florida

Major Professor: James White, Ph.D.
Ann Barron, Ed.D.
Darrel Bostow, Ph.D.
Robert Dedrick, Ph.D.

Date of Approval: December 8, 2005

Keywords: scrolling, WBT, interface design, CBI, paging, usability, instructional design, non-scrolling, computer-based instruction, Web-based instruction, screen layout

Copyright 2006, Phillip Eulon Grace

Dedication

First, to my mother and stepfather, Doris and Don Smith: two finer, more loving and supportive people cannot be found on this planet. They have my eternal love and gratitude. I am forever in their debt. And to my father, James E., who passed from this phase of the world before I was capable of truly communicating with him. I think he would have been pleased with this accomplishment. Finally, and most importantly, to my most beloved and ever-enduring wife, Kathy, whose unyielding belief, perpetual encouragement, steadfast support, and Job-like fortitude were vital. She was literally the inspiration for me to take up this enterprise. If not for her, I seriously doubt I would have ever begun it. I'm quite sure I would not have finished it without her.

Acknowledgments

This endeavor did not take place in a vacuum. It was facilitated by a community of individuals who contributed to the effort academically, professionally, psychologically, emotionally, and/or materially. Foremost among these benefactors are the members of my doctoral committee: Dr. James White, with whom it was always a pleasure to interact, provided years of sound guidance, instruction, and gentle prodding along the path; Dr. Robert Dedrick, perhaps the lynchpin in this project, made sure that my statistical analyses were at least credible; and Dr. Ann Barron and Dr. Darrel Bostow dedicated their valuable time to the careful review of my work and provided their excellent insight and suggestions. I also thank Dr. George Batsche, the outside chair for my study proposal and final defenses, for helping to make each a positive experience.

I would also like to thank several folks at the Louis de la Parte Florida Mental Health Institute (FMHI), where I both worked and conducted my study. Dr. Larry Thompson graciously allowed me to integrate the study into my work schedule and office space. Michelle Kunkel provided space, furniture, and the computers to run my study. And Dennis Guillette and David Pinero of FMHI's Office of Information Technology helped keep my study lab computers connected and functioning.

I thank Dr. Tina L. Majchrzak, who not only gave permission to adapt her Internet Programming courseware for my study, but provided invaluable help in its adaptation and gave her blessing of the final product. I also thank Janet Giles, who went above and beyond in getting this manuscript ready for final submission. I am very grateful to several friends who provided support and encouragement throughout this whole endeavor: Beverly Crockett, Carmen Bermudez, Isa and Malika Naster, Greg Curtis, and my peerless peers, Dr. Sally (Szydlo) Zabel, Dr. Shauna Schullo, Dr. Kim Walker, and Dr. Julie Janssen. And a special thank you to Marilyn Washington, who helped grease the USF College of Education wheels. Finally, I thank the many individuals who helped test my study design, and Chris Wiesen of the University of North Carolina, who provided additional analysis support.

Table of Contents

List of Tables ... vii
List of Figures ... viii
Abstract ... xi

Chapter One Introduction and Background ... 1
    Introduction ... 1
    Statement of the Problem ... 3
    Purpose of This Study ... 6
    Rationale for This Study ... 6
    Research Questions and Hypotheses ... 7
    Limitations ... 8
    Definitions and Acronyms ... 9
        World Wide Web (a.k.a. the Web) ... 10
        HTML ... 10
        Computer-Based Instruction (CBI) ... 11
        Web-Based Training (WBT) ... 12
        Usability (a.k.a. Web Usability) ... 12
        Screen Design (a.k.a. Interface Design) ... 14
        Scrolling ... 14
        Partial-Page WBT Screen Design ... 15

        Paging ... 16
        Full-Page WBT Screen Design ... 17
        Chunking ... 18
        Basic Web Page Programming Tutorial (BWPP) ... 18

Chapter Two Literature Review ... 19
    Introduction ... 19
    Effective Computer-Based Instruction (CBI) ... 21
    The Learning and Media Debate ... 24
    Instructional Design: Virtues and Flaws ... 25
    Interface Design ... 29
    Screen Density and Instructional Text ... 31
    Memory, Reading, and Learning ... 34
    Chunking Up to Produce Lean Instructional Text ... 38
    To Scroll or Not to Scroll ... 41
    Comparing Partial-Page and Full-Page WBT Screen Designs ... 42
    Scrolling vs. Paging Studies ... 43
    Summary ... 46

Chapter Three Research Methods ... 47
    Study Overview ... 47
    Research Design ... 49
    Study Participants ... 49
        Descriptive Statistics for the Total Study Group ... 50
        Sample Size and Selection ... 52

    Measures ... 58
    Data Collection Procedures ... 58
        The Computer Lab ... 58
        Pilot Studies ... 60
        Initiation of the Main Study ... 62
        Study Session Preparation ... 62
        Random Assignment of Participants into Treatment Groups ... 63
        Random Selection for Post-Session Interviews ... 64
        Informed Consent ... 65
        The Web Skills Assessment Program (WSA) ... 66
        The Basic Web Page Programming Tutorial (BWPP) ... 66
        The Post-Session Interview ... 68
        Confidentiality and Use of Data Collected for This Study ... 69
    Data Collection Instruments ... 70
        Participant Demographics: The WSA Program ... 71
        Learner Performance: The BWPP Exam ... 73
            Validity ... 76
            Reliability ... 77
        Learner Satisfaction: The Learner Satisfaction Survey ... 78
            Validity ... 79
            Reliability ... 80
        The Post-Session Interview ... 80
        Interview Inter-Rater Reliability Procedure ... 82

        Interview Inter-Rater Reliability Outcome ... 83
        Data Analysis ... 83

Chapter Four Results ... 86
    Introduction ... 86
    Equivalence of the Two Treatment Groups ... 86
        Gender Equivalence ... 87
        Age Equivalence ... 87
        Prior HTML Awareness Equivalence ... 88
        HTML Experience Equivalence ... 88
        Total Session Time Equivalence ... 89
        Conclusion Regarding Equivalency of Treatment Groups ... 90
    The BWPP Exam Score and Satisfaction Level ... 90
    Learner Performance Effects ... 91
    Learner Satisfaction Effects ... 91
    Secondary Relationships ... 92
    Post-Session Interviews ... 94
        Interview Question 1 ... 99
        Interview Question 2 ... 99
        Interview Question 3 ... 100
        Interview Question 4 ... 101
        Interview Question 5 ... 102
        Interview Question 6 ... 103
        Interview Question 7 ... 104

        Interview Question 8 ... 105
        Interview Question 9 ... 106
        Interview Question 10 ... 108
        Interview Question 11 ... 110
        Interview Question 12 ... 111

Chapter Five Discussion ... 113
    Introduction ... 113
    Purpose of the Study ... 113
    Review of the Research Questions ... 114
    Results for the Research Questions ... 114
    Discussion ... 114
        Learner Performance Outcomes ... 116
        Learner Satisfaction Outcomes ... 117
        Reflections on the Performance and Satisfaction Outcomes ... 117
        Secondary Relationships ... 123
    Recommendations Deriving from This Study ... 126
        Recommendations for the Design of WBT Programs ... 127
        Recommendations for Improving This Study ... 129
        Recommendations for Future Research ... 138
    Conclusion ... 140

Notes ... 142
References ... 144
Appendices ... 158

    Appendix A: Comparison of Full-Page and Partial-Page Screen Designs ... 159
    Appendix B: Modifications to the Original Proposed Study ... 160
    Appendix C: Frequency Table of Participant Ages ... 166
    Appendix D: Samples of Recruitment Materials ... 167
    Appendix E: The Study Participant Scheduling Process ... 171
    Appendix F: Subject Prep Checklist ... 180
    Appendix G: Informed Consent Form ... 181
    Appendix H: Post-Session Interview Guide ... 183
    Appendix I: Web Skills Assessment Program (WSA) ... 185
    Appendix J: Description of the Basic Web Page Programming Tutorial ... 198
    Appendix K: The Basic Web Page Programming Exam Items ... 220
    Appendix L: Dr. Tina Majchrzak's Approval of the BWPP Tutorial ... 225
    Appendix M: Learner Satisfaction Survey Questions ... 226
    Appendix N: Web Form for Entering Post-Session Interview Data into Database ... 227
    Appendix O: Sample Transcript of a Post-Session Interview ... 228
About the Author ... End Page

List of Tables

Table 1. Total Group Gender and Age Demographics ... 51
Table 2. HTML Awareness by Gender ... 51
Table 3. HTML Experience by Gender ... 52
Table 4. Gender by Treatment Group ... 87
Table 5. Age by Treatment Group ... 88
Table 6. Prior HTML Awareness by Treatment Group ... 88
Table 7. HTML Experience by Treatment Group ... 89
Table 8. Total Session Time by Treatment Group ... 89
Table 9. Independent T-Test Results of BWPP Exam Scores by Treatment Group ... 91
Table 10. Independent T-Test Results of Satisfaction Level by Treatment Group ... 92
Table 11. Multiple Regression Results of Both Exam Score and Satisfaction Level ... 94
Table 12. Post-Session Interviewees' Gender and Age Split by Treatment Group ... 95
Table 13. Post-Session Interviewees' HTML Awareness and Experience Split by Treatment Group ... 95
Table 14. Post-Session Interview Responses for Total Group and by Treatment Group ... 97
Table 15. Frequencies of Study Participant Ages ... 166

List of Figures

Figure 1. Comparison of Full-Page and Partial-Page WBT Screen Designs ... 159
Figure 2. Sample Recruitment Poster ... 167
Figure 3. Sample Recruitment Handbill ... 168
Figure 4. The Study Web Site Home Page ... 171
Figure 5. Participation Criteria Page ... 173
Figure 6. The Scheduling Calendar ... 174
Figure 7. The Session Sign-Up Form ... 176
Figure 8. Study Session Confirmation ... 177
Figure 9. The Cancellation Form ... 178
Figure 10. The Cancellation Confirmation ... 178
Figure 11. The Opening Screen for the WSA Program ... 185
Figure 12. Task 1: Using Form Elements on a Web Page ... 186
Figure 13. Task 2: Differentiation Between Form Elements by Name ... 187
Figure 14. Task 4: Differentiation Between Form Elements by Function ... 188
Figure 15. Task 5: Number Sequencing ... 189
Figure 16. Link for Displaying Results of Number Sequencing Task ... 190
Figure 17. Number Sequencing Task Results Page ... 191
Figure 18. Page 7 Content If New Window Was Not Closed ... 192
Figure 19. Page 7 Content If New Window Was Closed ... 193
Figure 20. Task 7: Scrolling to the "Previous" Button ... 194

Figure 21. The New Page 8 ... 195
Figure 22. Question Regarding Prior Awareness of HTML ... 196
Figure 23. Question Regarding Experience Using HTML ... 197
Figure 24. Title Screen for the BWPP Tutorial ... 199
Figure 25. Layout of the Tutorial Interface ... 200
Figure 26. Segue from Welcome to Optional Orientation Segment ... 202
Figure 27. Sample Page from Section 1: Introduction to HTML ... 203
Figure 28. Sample Page from Section 5: Images ... 205
Figure 29. Sample Exam Question Page ... 206
Figure 30. Exam Results Page ... 207
Figure 31. The Main Menu ... 208
Figure 32. The Help Window ... 209
Figure 33. The Glossary Window ... 210
Figure 34. The Resources Window ... 211
Figure 35. The Send Email Window ... 212
Figure 36. Sample of a Static Example ... 213
Figure 37. Dynamic Example: Code View ... 214
Figure 38. Dynamic Example: Results View in Content Area ... 214
Figure 39. Dynamic Example: Results View in New Window ... 215
Figure 40. Interactive Example: Code View ... 216
Figure 41. Interactive Example: Results View ... 216
Figure 42. Interactive Example: Help Link ... 217
Figure 43. Interactive Example: Help View ... 217

Figure 44. Example of an Artificially Introduced Program Error ... 219
Figure 45. Web Form for Entering Post-Session Interview Data in Database ... 227

Full-Page Versus Partial-Page Screen Designs in Web-Based Training: Their Effects on Learner Satisfaction and Performance

Phillip Eulon Grace

ABSTRACT

This is a report on research regarding the screen layout of Web-based training (WBT) programs, conducted with an eye toward providing evidence-based guidance for the design and development of WBT interfaces. Specifically, the study investigated the relative instructional benefits of two general types of WBT screen design, full-page and partial-page, in terms of both learner performance and learner satisfaction. The main hypotheses of the study were that the full-page design option would yield significantly better outcomes in both categories of interest.

The study employed a mixed-method design, generating both quantitative and qualitative data. The main phase of the study was experimental, following a factorial design to explore the relationships between a single treatment variable (WBT screen design) in two treatment conditions (partial-page WBT design and full-page WBT design) and two dependent variables (learner performance and learner satisfaction). Both a full-page and a partial-page version of the same Web-based tutorial were created, and 129 self-selected undergraduate students who reported having little or no experience with the tutorial subject matter were randomly assigned into the two treatment groups. Performance data were collected as scores on the tutorial's 18-item, multiple-choice final exam, and satisfaction data were collected via a 10-item satisfaction survey. In addition, 59 of the study participants were randomly selected to participate in post-study session interviews.

The results of the study yielded no significant difference between the two treatment groups for either learner performance or learner satisfaction, thus making it impossible to reject the null hypothesis for either of the two primary research questions. The conclusion of this study was that the presence or absence of scrolling alone is not a significant factor either in how well a person performs in a WBT program or how satisfied they are with the learning experience. However, while analysis of the post-study session interview data supported this conclusion, the fact that a large majority of the interviewees stated a preference for the full-page, non-scrolling WBT interface design suggests that some elements inherent in the full-page design might warrant further consideration and/or study.

Chapter One
Introduction and Background

Introduction

Although Web-based training (WBT) has been around in some form almost as long as the World Wide Web itself, it has become a serious instructional alternative only since around 1996 (Alessi & Trollip, 2001; Horton, 2000; Kruse & Keil, 2000). Like any medium of instruction, the Web offers advantages and disadvantages to both instructional designers and potential learners alike. It is, of course, the task of the instructional designer of WBT programs to maximize these advantages, while attempting to minimize the disadvantages, in order to provide the learner with the optimal learning experience (Horton, 2000). A problem arises, however, when we attempt to delineate just exactly what an "optimal" WBT learning experience would entail. Inasmuch as a learning experience is the nexus of learner, instructional, and environmental elements, the effectiveness and quality of that learning experience reflect the confluence of such things as learner attributes and interface design. Practitioners and researchers from many fields have long been investigating how learners are impacted (both positively and negatively) by the design of instructional media interfaces (Shneiderman, 1998).

Screen design is a critical element in Web page and computer-based instructional design in general, and in WBT design in particular (Alessi & Trollip, 2001; Geraci, 2002; Grabinger & Osman-Jouchoux, 1996; Nielsen, 2000; Shneiderman, 1998; Smith & Ragan, 1993). It is an integral component of a program's interface, which is "the door between the student and the instruction" (Kruse & Keil, 2000, p. 120). Screen designs that are consistent, functional, and pleasing can improve the utility and appeal of an instructional program (Smith & Ragan, 1993). Since the screen is "the central point of the interaction between student and program" (Grabinger & Osman-Jouchoux, 1996, p. 181) and because "[interface] design choices determine the success or failure of instruction" (Grabinger & Osman-Jouchoux, 1996, p. 206), screen design is a major focus of the overarching process of interface design. It follows, then, that WBT designers, as well as other computer-based instructional designers, need to follow best practices in Web page design and human factors design. However, because WBT has become a viable instructional option only over the last several years, no firm consensus has yet developed regarding the most effective and/or desirable characteristics of WBT (Alessi & Trollip, 2001). This is an unfortunate situation, given that WBT is currently proliferating at an incredible rate (Alessi & Trollip, 2001; Ellis, Wagner, & Longmire, 1999; Geraci, 2002; Horton, 2000; Kruse & Keil, 2000; Lim, 2003; Mwaura, 2003). Indeed, Horton (2000) alludes to the dearth of research-based WBT design principles in his recent book, Designing Web-Based Training, when he writes:

My sisters and brothers in the academic community are welcome to read this book, but no one should expect a scholarly work crammed with footnotes and hesitant generalizations. This book is for practitioners who cannot wait for all the research to be done and need advice now. (p. vi)

It is, therefore, within the context of this milieu, where technology is outpacing research, that this study was undertaken as an effort to address a controversial WBT design issue: scrolling. While scrolling is a ubiquitous characteristic of the vast majority of pages currently populating the Web, it is problematic for WBT designers (Alden, 1998; Alessi & Trollip, 2001). Although scrolling can provide several advantages (Alden, 1998; Alessi & Trollip, 2001; Nielsen, 2000), it also presents several disadvantages that can interfere with the learning process (Alessi & Trollip, 2001; Dyson & Kipping, 1998; Levi, 1998; Merrill, 1994). Recognizing the necessity and/or desirability of scrolling Web pages in certain circumstances, Alessi and Trollip (2001) nevertheless recommend designing alternatives to scrolling whenever possible. Others suggest that if scrolling is going to be present, it should be limited to no more than two to three screens long (Koyani, Bailey, & Nall, 2003; Nielsen, 2000).

Statement of the Problem

Alessi and Trollip do not, however, provide research findings to substantiate their WBT design recommendation. In fact, a search for research specifically comparing the instructional benefits of a non-scrolling, full-page WBT design with those of a scrolling, partial-page design yielded mostly confusion. A few studies have compared the relative benefits of scrolling with those of what has been termed "paging" in more general contexts, such as Web searches (Bernard, Baker, & Fernandez, 2002), online text readability (Baker, 2003; Dyson & Kipping, 1998), and finding information in text passages on a Web page (Parsons, 2001). (See the Definitions and Acronyms section later in this chapter for definitions of paging and other terms used in these introductory sections.)

The literature concerning scrolling versus paging generally favors paging over scrolling (Alessi & Trollip, 2001; Bernard et al., 2002; Harrell, 1999; Kolers, Duchnicky, & Ferguson, 1981; Dyson & Kipping, 1998; Mills & Weldon, 1987; Parsons, 2001; Piolat, Roussey, & Thunin, 1997; Schwarz, Beldie, & Pastoor, 1983). Others, however, came to the opposite conclusion: that scrolling had some advantages over paging for certain purposes (Lee & Tedder, 2004; Ryan, 2004). Koyani, Bailey, and Nall (2003), in their Research-Based Web Design and Usability Guidelines, suggest employing scrolling and paging according to "considerations of the primary users and the type of tasks being performed, [pointing out that] some tasks that require users to remember where information is located on a page may benefit from paging, while many reading tasks [such as comprehension] benefit from scrolling" (p. 66). However, they mitigate their suggestion of scrolling in reading comprehension tasks by stating that "with pages that have fast loading times, there is no reliable difference between scrolling and paging when people are reading for comprehension" (p. 68). Indeed, they referred to Piolat, Roussey, and Thunin's 1998 findings when reporting that users of paging may form better "mental representations of the text as a whole, and are better at remembering the main ideas and later locating relevant information on a page" (p. 68).

It should be noted, however, that the terminology in the literature on this issue is poorly operationalized, such that it is sometimes unclear what the term "paging" actually refers to. In some cases it seems to refer to the process of moving between separate, non-scrollable screens linked together by hypertext links (i.e., full-page design). In other cases, "paging" refers to moving quickly through a single scrollable page in large increments, either by using the Page Up and Page Down keys or by clicking in the gray areas of the scroll bar (in contrast to the much slower line-by-line scrolling accomplished by clicking on the up and down arrows of the scroll bar). In such cases, paging could be considered just another form of scrolling. And in yet other cases, it refers to a hybrid of the first two instances, with scrollable pages of relatively limited content linked together by hypertext links.

Thus, to the mind of this researcher, the current literature on scrolling does not adequately address, and may even confound, the question of whether or not scrolling is an effective and/or desirable design characteristic, particularly for Web-based instructional programs. More to the point of this study, the literature to date does not clearly indicate which has greater instructional implications specifically for WBT programs: a non-scrollable, full-page design or a partial-page design that requires scrolling. The vast majority of literature distinctly comparing partial-page and full-page screen designs does so in contexts other than Web-based instructional programs, such as performing Web searches or finding information within a text passage. This researcher was unable to locate another study that specifically compared the two design alternatives in relation to a WBT program to the degree that it was done here.

There is no question that the literature pertaining to scrolling has utility in helping to delineate possible WBT design guidelines. However, the most convincing and reliable path to devising such guidelines is to actually test the conclusions of this literature specifically with full-fledged WBT programs. WBT programs, as a genre, constitute a much more complex learning environment than has been represented in most previous studies. The various instructional and support elements found in a well-designed WBT are not fully mirrored in tasks such as Web searches or finding information in text passages on Web pages. Many of the principles of screen design suggested in the existing literature will surely apply, but until these principles are thoroughly tested in the domain of WBT programs, we cannot speak with true authority on which principles apply, under what circumstances, and with what effect. And this leads to the purpose of this study.

Purpose of This Study

This study examined the effects of the two page design options for WBT mentioned above (partial-page and full-page) on both learner performance and satisfaction. Because learners' individual experience with computers and the Web could well confound a comparison of the two screen designs, participants' level of computer proficiency and level of Web experience were controlled for. It was hypothesized that a non-scrolling, full-page WBT design might be superior to a partial-page design that necessitates scrolling, if not in performance, then with regard to learner satisfaction.

Rationale for This Study

Alessi and Trollip (2001, p. 65) refer to the issue of scrolling as "the most difficult design issue" regarding text in hypermedia and Web pages. But while they and other researchers and practitioners present both advantages and disadvantages of scrolling in WBT and more general Web page design, there appears to be very little research serving to guide WBT designers in deciding specifically between a partial-page and a full-page design. Given that screen design can be instrumental to the success or failure of a WBT program, it is critical that the screen design process be informed, as much as possible, by solid research. This study is an attempt to shed some guiding light on the relative instructional value of partial-page versus full-page WBT design.

The focus of this study concerns a primary aspect of page design that could have important implications for the efficiency and effectiveness of WBT programs. As will be discussed in this report, the WBT interface is an integral part of the online learning process and can impact learners' learning satisfaction, their motivation to learn, and ultimately their learning performance. Hopefully, the results of this study will help current and future WBT designers make fundamentally sound decisions about their interface designs.

Research Questions and Hypotheses

The primary intent of this study was to investigate the following two questions:

1. Is there a significant difference in performance between learners using a scrolling, partial-page WBT design and those using a non-scrolling, full-page WBT design?

2. Is there a significant difference in satisfaction between learners using a partial-page WBT design and those using a full-page WBT design?

Given what the literature pertaining to CBI and WBT screen design indicates, one might expect that learners using a full-page design would have a higher level of performance than those using a partial-page design. This performance gain would probably be attributable not only to aspects of the full-page design that facilitate learning (e.g., retention, low error rates, efficiency), but also to higher levels of satisfaction that such a design would probably evoke in users. At the very least, it might be expected that full-page designs would prove to be as effective as partial-page designs.

Even in the absence of a significant difference in performance between the two designs, the higher levels of satisfaction expected for a full-page design would seem to make for a qualitatively better experience for the user and possibly result in a user preference for the full-page design. The literature makes a connection between learner satisfaction, motivation, and learning. In a 2002 study, Hsu, Wang, and Wang found a strong correlation between learner motivation, learning satisfaction, and learning effectiveness. Keller's ARCS model of motivation design includes learner satisfaction as an integral component in creating motivating instruction, suggesting that satisfied learners are motivated to continue learning because they see value in what they are doing (Keller & Suzuki, 1988). Kruse (2004) also points to the ARCS model when he states, "Even the most elegantly designed training program will fail if the students are not motivated to learn. Without a desire to learn on the part of the student, retention is unlikely" (1st paragraph). And even though Horton (2000) suggests that learning satisfaction is not a reliable measure of learning, he states that it "certainly beats learning dissatisfaction" (p. 27). Finally, Nielsen (1993, 2003) and Shneiderman (1998) both consider learner satisfaction a hallmark of good usability design.

Limitations

Discussion throughout this report might well give the reader the impression that there is a simple dichotomy of WBT screen designs: full-page and partial-page. This is not the case at all. There are a number of alternative designs employing different principles and navigational elements (e.g., frames, embedded hypertext links, etc.) that were not included in this study. The fact that distinct partial-page and full-page designs were compared, however, served to isolate the variable of scrolling as much as possible. Use of other navigational methods within and between content pages could have confounded the study results, making it much more difficult to center the results specifically on the variable of scrolling.

While the design of the study as it was conducted was a reasonable path of investigation (especially considering the number of practical considerations that defined its parameters), the fact that all study participants were exposed to only one treatment might be considered a limitation, in that they had no opportunity for a direct comparison of the two screen designs. Therefore, the possibility of future studies where participants are afforded the chance to experience both screen designs is discussed in Chapter Five.

Finally, it should be noted that the partial-page design in this study could be considered something of a hybrid of the full- and partial-page designs. While each of its content pages required the user to scroll, each of its content sections consisted of several contiguous pages hyperlinked to each other in the same manner as those of the full-page interface. Each page in the partial-page design contained at least three screenfuls of content, but none contained an entire section's worth of content. It cannot be known if modifying the partial-page design such that each content section consisted of a single page would have altered the results in this study, but it is offered as one way to improve the study in Chapter Five.

Definitions and Acronyms

In order to facilitate clarity and precision during the discussion of this study, it is wise to first define and discuss some terms that are used in this report. It is important to operationalize terms because some terms can have a variety of connotations, which can obscure the intent of their use and lead to confusion. Sometimes, however, mere definitions of certain terms are inadequate in and of themselves to contextualize the relevance and importance of those terms to the purpose of this study. Therefore, supplemental background and/or conceptual information are provided for some of the terms.

World Wide Web (a.k.a. the Web)

The World Wide Web (commonly referred to simply as "the Web") has been defined as "a system of Internet servers that support specially formatted documents. The documents are formatted in a markup language called HTML (HyperText Markup Language) that supports links to other documents, as well as graphics, audio, and video files" (Webopedia Computer Dictionary, 2005c).

HTML

HTML is the acronym for HyperText Markup Language, which can be defined simply as "the authoring language used to create documents on the World Wide Web" (Webopedia Computer Dictionary, 2005a). The online encyclopedia Wikipedia (2006) adds that it is used to structure information, denoting certain text as headings, paragraphs, lists, and so on, and can be used to describe, to some degree, the appearance and semantics of a document.
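To make this definition concrete, here is a minimal, hypothetical HTML document (a sketch for illustration only; it is not drawn from the BWPP tutorial). The tags denote the structural elements described above: a title, a heading, and a paragraph containing a link to another document.

    <html>
      <head>
        <!-- The title appears in the browser's title bar -->
        <title>A Minimal Web Page</title>
      </head>
      <body>
        <!-- A top-level heading -->
        <h1>Welcome</h1>
        <!-- The anchor (a) tag creates a hypertext link -->
        <p>This paragraph contains a
        <a href="http://www.usf.edu/">link to another document</a>.</p>
      </body>
    </html>

A browser renders the text between the tags; the tags themselves are not displayed but govern the document's structure and, to some degree, its appearance.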

Computer-Based Instruction (CBI)

The terminology for types of instruction delivered in some way through a computer varies widely according to those who develop, utilize, theorize, and/or write about such technologies, the context in which these instruction/learning technologies are used, and the purposes to which these technologies are put. According to Kruse and Keil (2000), many of the terms, such as computer-based learning (CBL), computer-based training (CBT), and computer-based education (CBE), have come to be considered more or less interchangeable, while some terms, such as Web-based training (WBT), are more distinctly defined. The variety of actual terms and acronyms referring to the permutations of computer-based learning, including those that utilize the Web, has been covered elsewhere (Barron, 1998; Bixler & Bergman, 2001; Eberts, 1997; Kruse & Keil, 2000; Horton, 2000). In this study, CBI was used as an overarching term that refers to any instruction that is delivered via a computer, either locally or from a distance. This was taken to include sub-genres such as Web-based training.

It does bear noting, however, that the term "traditional CBI" is sometimes used in this report to refer to a non-Web-based CBI that is designed and programmed specifically as a stand-alone application. Thus, traditional CBI stands in contrast to Web-based instructional programs that require other applications (e.g., a Web browser and one or more plug-ins) in order to display and otherwise function. Traditional CBI affords the instructional designer a high level of control over the look, feel, and function of the program, while most Web-based programs are subject to a greater degree of change by the user (e.g., font typeface, font size, and graphics displaying or not). The exceptions to this level of user control over Web-based programs are CBI programs that have been created as traditional, stand-alone applications but which are transmitted over the Web via the use of browser plug-ins (e.g., Shockwave for Authorware) (Barron, 1998).

Web-Based Training (WBT)

For the purpose of this study, the term Web-based training (WBT) was used to refer to "any purposeful, considered application of Web technologies to the task of educating a fellow human being" (Horton, 2000, p. 2). Bixler and Bergman (2001) call WBT a new, creative method for delivering computer-based training to widespread, limitless audiences. They also see WBT as representing a shift from "the current paradigm of [traditional] CBT, where the information presented is usually stored on the local machine, a local server, or a local CD-ROM, to a system where information is distributed via [the Web] and most likely is stored at a distant location" (1st paragraph). Barron (1998) delineates three basic types (or "design options," as she refers to them) of WBT screen designs: page-based (i.e., partial-page), screen-based (i.e., full-page), and frame-based,¹ of which only the first two are of concern for this study. Each describes a different approach to Web-based instructional design and reflects a particular strategy for dealing with the various features and operating parameters of the Web, in particular those that have significant instructional design implications.

Usability (a.k.a. Web Usability)

AgelessLearner.com (2005), an online educational website and advisory services firm, defines usability as:

Capable of being used. In web design, this refers to the capability of a web site to be used by everyone. Usability issues include interface and navigation design (can the user easily understand how to find their way around the site), content layout (small blocks of text that are not too wide are easier for reading on the web), and accessibility and compatibility issues. (1st paragraph)

Web usability is an umbrella term that spans everything from page design to content design to an entire site design (Nielsen, 2000). Nielsen (1993) considers usability in terms of five attributes:

1. Learnability: the system should be easy to learn so that the user can rapidly start getting some work done with it.
2. Efficiency: the system should be efficient to use, so that once the user has learned the system, a high level of productivity is possible.
3. Memorability: the system should be easy to remember, so that the casual user is able to return to the system after some period of not having used it, without having to learn everything all over again.
4. Low Rate of Errors: the system should have a low error rate, so that users make few errors during the use of the system, and so that if they do make errors they can easily recover from them. Further, catastrophic errors must not occur.
5. Satisfaction: the system should be pleasant to use, so that users are subjectively satisfied when using it; this means that they like it.

PAGE 29

14 Screen Design (a.k.a. Interface Design) In this report, screen design refers to the layout of what a user sees on their m onitor when they view a CBI or WBT program In CBI, the program interface may take up the entire monitor screen, while WBT program interfaces are usually more restricted due to the screen space (or screen real estate ) reserved for the Web browser. It is th rough the screen design elements that the user interfaces (i.e., interacts ) with the program ; thus, screen design also encompasses the functionality and usability facets of the program. Throughout this report the terms screen design and interface design ar e used interchangeabl y as are the terms screen and interface Scrolling Scrolling refers to both a featu re (or characteristic) of screen design and an action. Merriam Webste r Online (2005 ) defines scrolling in two senses. The first is as an intransitive verb meaning, to move text or graphics up or down or across a display screen as if by unrolling a scroll. The second, transitive sense is to cause (text or graphics on a display screen) to move in scrolling. Both senses are relevant to the discussion of screen design in this study. A bit more detailed definition of scrolling was found on online at Webopedi a Computer Dictionary (2005b) : To view consecutive lines of data on the display screen. The term scroll means that once the screen is full, each new line appears at the edge of the screen and all other lines move over one position. For example, when you s croll down, each new line appears at the bottom of the screen and all the other lines move up one row,

PAGE 30

15 so that the top line disappears. The term vertical scrolling refers to the ability to scroll up or down. Horizontal scrolling means that the i mage moves sideways Scrolling becomes necessary when all the information cannot fit on the content portion of the screen at one time [so that in order to] view all the information, the user has to scroll up or down to see it, causing other information to disappear from the screen (Alessi and Trollip, 2001, p. 65). The most common scrolling controls are vertical and/or horizontal scroll bars that are usually located, respectively, along the right and bottom edges of the content portion of a screen. These scroll b ars allow the user to manually control the process of scrolling up, down or sideways by clicking on the arrowheads that reside at either end of a scroll bar. (Some computer mouse models come with a scroll wheel that allows the user to scroll line by lin e by rolling the wheel forward and backward with a finger. ) However, Web pages can also be programmed to scroll automatically, without the need for the user to control the process. For the purposes of this report scrolling should be taken to mean manually scrolling through the content of a Web page line by line (although the discussion of this phenomenon generally applies to automatic scrolling as well). It is also to be taken as the defining characteri stic of the partial page screen design serving to dis tinguish it from the full page design, where no scrolling is required. Partial Page WBT Screen Design A partial page WBT screen design is, essentially, the classic Web page (based on simple HTML) that has constituted most Web pages since the Webs incep tion. Due to

PAGE 31

the amount of page material and features, the user will probably have to scroll (see Scrolling above) at least vertically to gain access to all available content and program features. The instructional content is embedded in a simple (i.e., no frames) Web page such that, if the entire page content cannot be displayed all at once on the screen, users must scroll to view the rest of the page content (see Appendix A for a graphic example of a partial-page screen design). If the entire page content cannot be viewed, or if the WBT program window is resized smaller, a single scroll bar appears along the right-hand side of the WBT program window for vertical scrolling and/or along the bottom for scrolling horizontally.

Paging

Paging is a confusing term, as it has been used to mean two different concepts and/or activities, depending on which source one consults. In earlier literature, paging refers to an alternate form of vertical scrolling on a single page. Instead of line-by-line scrolling, paging "shift[s] the text [vertically] by a span of lines equal to the [computer] screen size" (Piolat, Roussey, & Thunin, 1997, p. 568). In other words, an entire screen of content is replaced by another with the press of a single keystroke (using the Page Up and Page Down keys) or a single mouse click in the (usually) gray area above (to page up) or below (to page down) the scroll control box in the scroll bar. Essentially, the user is scrolling through a Web page by blocks of text instead of line by line. More recently, however, the term has been used to refer to the process of moving linearly between multiple contiguous Web pages by clicking on hypertext links (usually dichotomously labeled something similar to "Previous" and "Next"). It is analogous to
turning pages in a book. Paging in this context traditionally limits the content on each of these hyperlinked pages, either greatly reducing or completely eliminating the necessity of vertical scrolling. When vertical scrolling is completely eliminated, paging can be viewed as the primary navigational method employed in full-page screen design.

Full-Page WBT Screen Design

The full-page design, while also constructed of simple HTML code, is a fixed screen display in the sense that the user does not have to scroll, either horizontally or vertically, to see the entire content of the page. In other words, all features and navigation options offered by the program are always visible and accessible from within the screen area, such that only the instructional content changes as the user moves through an instructional program (see Appendix A for a graphic example of a full-page WBT screen design). Barron (1998) notes that full-page WBT design can appear almost exactly the same as traditional CBT. CBT stands for computer-based training, which, in the traditional sense, refers to computer instruction whose design features are hard-wired and cannot be altered by the user unless customization of the program is included as one of the design features. This is in contrast to the actual level of design control a WBT designer has in ensuring that a WBT program will display and operate as intended. (Note: This, of course, does not include courseware that is produced in an authoring system, such as Authorware, and only delivered through the Web via browser plug-ins.)
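
To make the distinction concrete, the two layouts can be sketched in the same basic HTML that the study's tutorial teaches. The two fragments below are hypothetical minimal sketches, not excerpts from the actual study materials; the file names and topic headings are invented for illustration. In a partial-page design, all of the content sits on one long page, so the browser supplies a vertical scroll bar as soon as the text overflows the window:

    <!-- Partial-page design: one long page; scrolling is required
         once the content extends past the bottom of the window. -->
    <html>
    <head><title>Lesson (partial-page)</title></head>
    <body>
    <h2>Topic 1</h2>
    <p>First block of instructional text...</p>
    <h2>Topic 2</h2>
    <p>Second block of instructional text...</p>
    <h2>Topic 3</h2>
    <p>Third block of instructional text, continuing well past
    the bottom edge of the screen...</p>
    </body>
    </html>

In a full-page design, the same material is chunked into several screen-sized pages joined by "Previous" and "Next" hypertext links, so that each page fits within the window and paging replaces scrolling as the navigational method:

    <!-- Full-page design: one screen-sized chunk per page,
         navigated by paging links instead of a scroll bar. -->
    <html>
    <head><title>Lesson, screen 2 of 3 (full-page)</title></head>
    <body>
    <h2>Topic 2</h2>
    <p>Second block of instructional text, kept short enough to
    fit the screen without scrolling.</p>
    <p><a href="screen1.html">Previous</a>
       <a href="screen3.html">Next</a></p>
    </body>
    </html>

Keeping each page's content to a single screenful is what eliminates the scroll bar; the hyperlinks then serve as the paging controls described above.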

Chunking

In instructional design, chunking refers to the general process of breaking larger pieces of information into smaller, more digestible pieces (Fleming & Levie, 1993; Kruse & Keil, 2000; Nielsen, 2000; Brehover, 2000; Shneiderman, 1998). The notion derives from psychologist George Miller's work in the 1950s on short-term memory. Miller (1956) first posited the principle that, on average, people have the capacity to remember seven items of information at a time, give or take two items. Chunking can be performed at various levels. For instance, one can chunk an entire book up into chapters, units, parts, and/or sections (Brehover, 2000). On the other hand, as in the case of this study, one could chunk a single Web page containing a large block of continuous text into several separate, sequenced, screen-sized pages, each containing smaller, more concise chunks of the information.

Basic Web Page Programming Tutorial (BWPP)

The Basic Web Page Programming tutorial is a Web-based instructional program on how to create very basic Web pages using only the HTML Web authoring language. It was based on a more extensive CBI program entitled Internet Programming and was developed solely for this study. Its final exam was the instrument for measuring learner performance (one of the study's dependent variables of interest). The tutorial is described in greater detail in Chapter Three. The BWPP tutorial is also referred to alternately as the BWPP program and as the BWPP courseware.
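
For readers unfamiliar with HTML, the kind of artifact the BWPP tutorial teaches learners to create can be suggested by the following fragment. It is a hypothetical sketch of a very basic Web page, not an excerpt from the tutorial itself:

    <html>
    <head>
    <title>My First Web Page</title>
    </head>
    <body>
    <h1>Hello, Web!</h1>
    <p>This one paragraph is the entire content of the page.</p>
    </body>
    </html>

Nothing more than a plain text editor and a Web browser is required to create and view a page of this kind.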

Chapter Two

Literature Review

Introduction

The great majority of discussion about WBT screen design is derived from the literature on the overarching area of CBI interface design, as well as that concerning general Web page design. This is reasonable because (1) WBT, being a genre of CBI, shares many of the same characteristics and, thus, design concerns with other types of CBI, and (2) WBT programs are constructed as Web pages for delivery over the Web. WBT, however, unlike more traditional CBI, presents some singular design concerns that revolve around the use of the Web as a delivery medium. Screen real estate, bandwidth limitations, computer processing resources, non-standardized operating environment parameters, high levels of user control over the Web browser environment, and disparities in end-user equipment capabilities are just some of the problems that designers of WBT must confront. The necessity of using Web browsers, such as Microsoft Internet Explorer and Netscape Navigator, to access and display WBT programs creates severe design problems. Of particular relevance to this study are the difficulties surrounding the issue of screen real estate. In addition to the display restrictions inherent to computer monitors (Nielsen, 2000; Shneiderman, 1998; Tullis, 1997), the framework that Web browsers provide for the display of Web pages further restricts the content and operational areas of WBT programs. Thus, while computer screen display issues have
always presented difficulties for designers of more traditional CBI, these problems are even more critical for WBT designers.

Barron (1998) delineates three main types of WBT screen designs: page-based (referred to here as partial-page), screen-based (referred to here as full-page), and frame-based (again, only the first two are of concern for this study). While her consideration of each design's apparent advantages and disadvantages can be helpful to WBT designers, they do not constitute definitive research as to which design might provide an instructional advantage over the others. Indeed, the literature specifically pertaining to WBT screen design rarely speaks directly to decisions about full-page versus partial-page designs. This seems to be because there is, by and large, an assumption that scrolling is a given characteristic of Web-based instructional programs. It is the purpose of this chapter, therefore, to review the literature on CBI screen design in order to inform a more specific discussion of the central issue for this study: the relative instructional benefits of a full-page WBT screen design as compared to a partial-page screen design.

To begin a review of the literature specifically related to CBI and WBT screen design, however, it seems appropriate to first consider the matter of CBI design and development in the broader context of instructional effectiveness. The effectiveness of CBI has been a perennial topic of debate ever since computers were first used to deliver instruction several decades ago. Thus, as WBT becomes both more widely available and more extensively relied upon to fulfill the educational and training needs and/or goals of both academic and commercial communities, long-debated questions concerning the instructional benefits of computers become ever more important. Since an underlying
premise of this study is that WBT can provide effective instruction, a cursory examination of the concept of effective CBI is presented in order to provide context for this assumption.

Effective Computer-Based Instruction (CBI)

The goal of developing effective CBI programs, which includes the genre of WBT, is rather lofty. What makes this pursuit so difficult is the adjective effective. Educational theorists, researchers, and practitioners have yet to agree upon a satisfactory definition of what learning is, let alone agree upon what constitutes effective instruction and how effectiveness should be gauged.

The literature on the effectiveness of CBI reflects, at best, a mixed bag of research findings (Cuban & Kirkpatrick, 1998; Kerlin, 1992). While there are those who tout the educational benefits of computer-based instructional technologies (Barth, 1990; Crosby & Stelovsky, 1995; Fletcher-Flinn & Gravatt, 1995; Friend & Cole, 1990; Greenfield, 1984; Johnston, 1995; Liu & Reed, 1994; Sloan, 1997; Vockell & Brown, 1992), there are others who reject this proposition (Clark, 1983, 1991, 1994; Kay, 1996; Lookatch, 1995, 1996, 1997; Mergendoller, 1996; Oppenheimer, 1997; Pepi & Scheurman, 1996; Russell, 1999). In addition, there has been much criticism regarding the quality of many of the studies that have indicated an advantage of CBI over traditional forms of instruction (Becker, 1992; Berson, 1996; Clark, 1983, 1994; Cuban & Kirkpatrick, 1998; Lookatch, 1995, 1996; Reeves, 1993, 1998). Thus, it seems that the net result of the last thirty or so years of educational theorizing and research in the areas of educational and instructional technology is a bit
disappointing for those seeking definitive answers to questions pertaining to effective CBI.

Discussions about CBI effectiveness are necessarily multidimensional, reflecting the complex nature of human learning. Even though we have not fully deciphered how humans do, in fact, learn, we assume that the process of learning involves many factors. Precisely what these factors are and to what degree they influence, facilitate, or dictate how humans learn, however, remain sources of contention among scholars and researchers from a variety of educational disciplines (Brown, 1997a, 1997b, 1997c; Hiemstra & Brockett, 1994; Merrill, 1994; Steinberg, 1989).

Effective CBI is a tenuous concept. The notion of effective CBI begs the question of what exactly is meant by effective. Much discussion of CBI efficacy in the literature revolves around levels of student achievement (Fletcher-Flinn & Gravatt, 1995). However, Cuban and Kirkpatrick (1998) lament the lack of clear focus in educational research regarding the efficacy of technology in education, and they cite a variety of measures of effectiveness found in the literature. They note that some researchers measure effectiveness in terms of student scores, some focus on how quickly students learn, while others look at student motivation levels. These different measures of effectiveness in educational research, they contend, make it difficult to assess CBI efficacy.

It may also be that the effectiveness of a CBI program can be measured not only in terms of significant gains in student achievement over more traditional forms of instruction, but also in terms of its being just as effective as traditional methods. Ayersman (1996), for example, found hypermedia programs to be at least as effective as lecture,
especially for remedial and learning-disabled students. Under these circumstances, the decisions regarding the use of CBI programs would probably hinge on other factors (such as cost-effectiveness) that may or may not give CBI an advantage over more traditional instructional media.

While noting the difficulty of documenting gains in learning through computer instruction, Alessi and Trollip (2001) outline some of CBI's perceived benefits:

it is widely accepted that computer-based instruction at least reduces the time spent learning. Even if the learning itself is not better, reducing time is a benefit. Properly used, computers can improve learning effectiveness and efficiency (Christmann, Badgett & Lucking, 1997; Kulik & Kulik, 1991). In addition, using technology for learning has logistical benefits. Materials can be distributed more cheaply and easily; it is easier to ensure all users have the most recent version of the materials; learners can access the materials at their convenience; accessibility is facilitated for people with disabilities; and dangerous, expensive, or unique environments can be simulated to improve access. (p. 5)

They go on to concede, however, that none of these situations guarantees that computers are beneficial to the learning process. Recognizing that the benefit of computers in educational endeavors remains debatable, they are hopeful that, as more educational and training applications proliferate on the Web, people will take CBI more seriously. In this way, they predict, more instructionally sound material will be developed. Even so, they never directly try to delineate how effective or quality CBI might be defined.

The fact that the literature yields no clear definition of effective CBI might be attributable to at least two prerequisite issues: how learning is defined and the influence
of media on learning. In the first case, it is reasonable to expect that one's definition of learning will determine how the effectiveness of any instructional program is conceived and measured. While a discussion of learning as a concept is beyond the scope of this study, the proposition that instructional media have an impact on learning needs to be briefly explored, as it has direct implications for the design and development of any type of technology-based instruction, including WBT.

The Learning and Media Debate

It may be that CBI, as an effective instructional medium, does not warrant consideration separate from other types of instructional media. The very notion of effective instructional media (from books to overhead projectors to videotape to laser discs to CBI) is predicated upon the assumption that the medium, itself, impacts learning. This assumption, however, is not universally accepted. Indeed, Richard Clark proffered a compelling argument for focusing discussions of instructional effectiveness on instructional method rather than the particular medium used to relay the instruction to the learner (Clark, 1983, 1991, 1994). The case he made against media having influence on learning has direct implications for framing the definition and measurement of effective CBI, or of any other instructional media.

The impact of instructional media (or their attributes) on learning, motivation, and efficiency gains from instruction has been a long-standing debate. Though not the first to say so, Clark precipitated this rather heated quarrel with his contention in his 1983 article that instructional media, in and of themselves, offer no learning benefits. In his opinion, media are mere vehicles that deliver instruction but do not influence student
achievement "any more than the truck that delivers our groceries causes changes in our nutrition" (1983, p. 445). Offering several studies to substantiate this assertion, he hypothesized that achievement gains attributed to instructional media (or their attributes) are due to a confusion of the media with instructional methods. Thus, Clark's claim was that the potential for educational achievement exists only in the instructional method employed, not in the particular medium used to deliver it. Over the years, other instructional technology and media researchers have taken Clark to task on this matter (Cunningham, 1986; Kozma, 1991, 1994; Petkovitch & Tennyson, 1985; Salomon, Perkins & Gloverson, 1991; Ullmer, 1994). Clark (1994), however, remains unmoved by their arguments, claiming that every media researcher who had engaged him in dialogue eventually agreed that the available evidence does not yet support the claims that either media or their attributes affect learning. This issue has yet to be definitively resolved.

Instructional Design: Virtues and Flaws

Every instructional method has an upside and a downside, as does every system for delivering instruction to the learner. While there is certainly debate among instructional theorists, designers, and practitioners about the relative effectiveness of this particular method or that particular delivery system, most would probably concede that all methods and delivery systems have both virtues and flaws. Virtues are features, characteristics, or aspects that facilitate the learning process, while those that inhibit or otherwise interfere with the learning process can be viewed as flaws. Determining which is which is not always a simple or easy matter because a variety of factors (e.g., subject
matter content, learning styles of learners, available instructional resources, etc.) can differentially impact the effectiveness of an instructional program. In other words, what might be a virtue in one learning environment (or with one type of learner) might prove to be a flaw in another (Merrill, 1994; Shneiderman, 1998). There is simply no single instructional method or delivery system that is best across the board and in all circumstances.

The development of effective instructional programming is, at best, an exercise in informed compromise (Alessi & Trollip, 2001; Shneiderman, 1998). To tweak the greatest learning gain from a particular instructional program, instructional designers must assess that program's subject matter and the learning environment(s) within which it will be implemented in order to determine the most appropriate instructional method and delivery system for implementing the program. This means evaluating which aspects of an instructional method or delivery system would be virtues and which would be flaws within a given set of circumstances.

Pointing out that computer-based instructional programs are frequently developed by teams that include media and graphic designers who rarely have had training in usability design or learning theory, Gordon (1994) insists that it is the job of instructional designers to make sure that principles of good design are followed. Ideally, the various instructional design choices made throughout the design process are informed by research that delineates best practices in instructional design. Unfortunately, the ideal is not always easy to adhere to, for a number of reasons. This is true for computer-based instruction generally, and especially so for designing and developing WBT.

The eternal debate among proponents of the various paradigms for learning and instruction, particularly between adherents of constructivism, which currently dominates
educational theory, and advocates of behaviorism, clouds the issue of best design practices for WBT (Alessi & Trollip, 2001; Horton, 2000). This is understandable, since these paradigm debates have yet to result in a consensus on the nature of learning, much less on the most effective instructional methodologies for facilitating it (Catania, 1992; Hergenhahn, 1988; Mazur, 1990). While useful in some respects, these debates have yet to result in solid, universally accepted WBT design and development guidelines and practices (Alessi & Trollip, 2001; Horton, 2000). As Alessi and Trollip (2001, p. 5) note, computer-based instruction, and especially WBT, are "still young and evolving [and] much remains to be learned regarding the best ways to harness the power of computers." As mentioned earlier, Horton (2000) contends that because the WBT genie has already been released from its bottle, so to speak, WBT designers cannot wait until research delivers guidelines for best practices in WBT design. He also warns WBT designers against becoming dogmatic adherents to particular theories and/or design/development systems:

Many designers treat educational theories and development methodologies like strict religion. And only their religion is the true religion. An exogenous constructivist considers Designer's Edge a tool of the seven-horned devil. Devotees of Information Mapping guffaw at the foo-foo puffery of the Microworldians... I have seen effective WBT courses developed based on almost every popular theory, even "I just did what seemed right." I do not mean to imply that educational theory and development methodology are not important, just that success does not depend on any particular one. (p. 14)

This does not mean, however, that WBT design has to be a Wild West-like frontier. Despite having no extensive research history upon which to draw firm conclusions, the design of WBT can be guided by past experiences with related technologies and techniques (Alessi & Trollip, 2001; Gordon, 1994; Grabinger & Osman-Jouchoux, 1996). According to Gordon (1994), an instructional program is "a product or system just as much as any physical system such as a chair, automobile, or software program" (p. 10). He asserts that instructional programs can, therefore, be developed using design principles similar to those used in engineering design. With regard to screen design, WBT designers can be guided by principles derived from fields with more extensive research histories, such as usability engineering, human factors design, and human-computer interface design (Alessi & Trollip, 2001; Gordon, 1994; Grabinger & Osman-Jouchoux, 1996). Grabinger and Osman-Jouchoux (1996) point to this multidisciplinary design approach when they write:

Design is a series of choices that interact with each other and that reflect the theoretical underpinnings of a discipline. Designers of computer screens that present information and create interactions for learning make choices in manipulating several attributes that are common to both print and electronic media, among them, text, typography, layout, and graphics... The wealth of information on printed text gives us indications about making some of these choices. (p. 181)

Since very little research can be found in the CBI or WBT literature that directly compares the instructional advantages and disadvantages of partial-page and full-page
WBT screen designs, the literature to inform this study must come from other related issues in instructional design, spanning several fields concerned with CBI screen design.

Interface Design

According to Kruse and Keil (2000), the computer user interface is the training program for many people. They contend that it plays a very important role in the training program because it "creates the graphical association of the training program in the mind of the user" (p. 107). Murphy (1996), noting that humans and computers are very different entities, states that the greater the difference between the two entities, the greater the need for a well-designed interface, [and that] human-computer interface design looks at how we can lessen the effects of these differences (2nd paragraph). Laurel (1990) suggests that, in general, an interface reflects the physical properties of the interactors, the functions to be performed, and the balance of power and control [as well as the] cognitive and emotional aspects of the user's experience (p. xiii). Huang, Diefes-Dux, Imbrie, Daku, and Kallimani (2004) conducted a pilot study in which they evaluated a CBI program using Keller's ARCS model of motivational design and concluded that interface design is critical for stimulating students' Attention (p. 34), one of the model's four dimensions of learner motivation. Therefore, the design of the interface must be given considerable thought and planning. Following proven design principles in constructing the user interface facilitates the learning process (Alessi & Trollip, 2001; Koyani, Bailey, & Nall, 2003; Kruse & Keil, 2000; Smith & Ragan, 1993).

Designing a good user interface means that it will have optimal usability. The definition given earlier for usability included five attributes that Nielsen (1993, 2003)
states all user interfaces should possess: learnability, efficiency, memorability, a low rate of errors, and satisfaction. Shneiderman (1998) offers a similar list that he believes is central to evaluating the usability of user interfaces:

1. Time to learn (How long does it take typical users to learn how to use the commands relevant to a set of tasks?)
2. Speed of performance (How long does it take to carry out benchmark tasks?)
3. Rate of errors by users (How many and what kind of errors do users make in carrying out the benchmark tests?)
4. Retention over time (How well do users maintain their knowledge after an hour, a day, or a week?)
5. Subjective satisfaction (How much did users like using various aspects of the system?) (p. 15)

Essentially, what both Nielsen and Shneiderman have done is identify the goals of interface design. These goals represent the ideal outcome of any interface under any circumstances. But as Shneiderman (1998) points out, "every designer would like to succeed in every category, but there are tradeoffs" (p. 15), thus harking back to the earlier discussion of instructional virtues and flaws. These interface usability design goals are of great significance to this study, as one would expect that whichever design is able to incorporate the greatest number of interface design principles to the greatest degree would likely produce the greatest performance and satisfaction outcomes.

Screen Density and Instructional Text

Since much, if not the majority, of the instructional content of CBI and WBT programs is conveyed through text, certain principles of instructional text bear directly on decisions about screen design. Some of these principles are treated separately later in this chapter, as they also relate to other considerations in making screen design decisions, but the issue of screen density is fundamental to all of them (Alessi & Trollip, 2001; Fleming & Levie, 1993; Geraci, 2002; Grabinger & Osman-Jouchoux, 1996; Nielsen, 2000; Shneiderman, 1998). Screen density refers to "the amount of empty space in relationship to text elements on the screen" (Grabinger & Osman-Jouchoux, 1996, p. 189). According to Grabinger and Osman-Jouchoux (1996):

... screens should have moderate density, appearing neither too empty or too crowded. Empty screens are viewed as boring and uninteresting. Overly crowded or complex screens are viewed as intimidating and too difficult to study. (p. 199)

Screen density is of particular concern in WBT screen design because the screen real estate with which designers have to work is very limited in the best of situations (Alessi & Trollip, 2001; Grabinger & Osman-Jouchoux, 1996; Nielsen, 2000; Shneiderman, 1998). Fleming and Levie (1993) estimate that an 80-column by 25-row screen display (a common configuration, amounting to at most 80 x 25 = 2,000 character cells) may present only a quarter of the information that can be printed on an 8.5-by-11-inch sheet of paper.

Monitor (or display) size, screen resolution, and Web browser windows all have a significant effect on the amount of screen real estate available for instructional text. While large computer monitors (i.e., 17-inch and above) provide more screen real estate in general, designers must take into
account that many end users will probably have smaller monitors (15-inch or smaller). This is particularly true of laptop computers, where screen displays are often twelve inches or smaller (especially for the new palmtop computers). Designers can design for higher screen resolutions, which generally enable more text to be displayed, but there is often a tradeoff in legibility, since the text size is made smaller.

WBT screen real estate is also eaten up by the Web browser window. Like any application, Web browsers entail operational features that necessarily require screen real estate in order to display. The perimeter of a Web browser window generally consists of a title bar, one or more toolbars (e.g., menu, address, and links toolbars), a status bar, and scroll bars, all of which take up precious screen space. While users can exert some control over how much of the screen these features of a Web browser take up, screen real estate is still lost.

Along with monitor size, screen resolution, and the Web browser window, a number of factors specifically related to text (e.g., vertical spacing, the number of characters per line, and line length) also impact screen density. Various text density studies have compared low-density text screens with high-density text screens in order to determine preferences for the proportion of text to white space on a screen (Bernard, Fernandez, & Hull, 2002; Morrison, Ross, & O'Dell, 1988; Ross & Morrison, 1989; Ross, Morrison, & O'Dell, 1988; Morrison, Ross, Schultz, & O'Dell, 1990; Youngman & Scharff, 1998). In general, these studies found (1) that low-density text screens are just as effective as high-density screens for expository lessons, (2) that there was a significant reduction in lesson completion time with low-density screens, and (3) that users expressed a preference for low-density over high-density screens. However, Grabinger
and Osman-Jouchoux (1996) warn against concluding that learning is affected by screen density:

... as with the other typographic variable research, screen density research focuses on perception of the screen rather than on the processes of reading and studying. The results of most of this research show little, if any, consistent effect on learning. Because learning from an instructional computer screen involves the reader and complex cognitive processes, it may be more likely that changes that help the perceptual and reading processes such as organizational factors and meaning may be more valuable research material. (p. 190)

Muter (1996) reinforces this caution when he states that "at present, we do not know how to optimize reading via electronic equipment" (p. 161). Nevertheless, the question of how much text can be displayed on the screen while maintaining an optimal screen density for learning has important ramifications for "the quantity and quality of instructional information that can appear on screen at any given time" (Grabinger & Osman-Jouchoux, 1996, p. 189). For instance, according to studies of viewer preferences, readers prefer shorter rather than longer lines of text (Grabinger & Osman-Jouchoux, 1996, p. 195). Designing a WBT screen with this as a guiding design principle, along with the various other constraints placed on the amount of screen real estate available for instructional content, requires that the designer be particularly judicious about what is included on that screen. So the designer must give careful thought to how the instructional message can be conveyed both as clearly and as concisely as possible.

Alessi and Trollip (2001) refer to the informative yet parsimonious construction of instructional messages as leanness, which they define as "say[ing] just enough to explain what is desired, and no more" (p. 67). Calling it an important quality of instructional text, they state that it applies not only to text descriptions, but to examples of concepts, sample applications of rules, pictures for demonstration purposes, and so on (2001, p. 67). Reader and Anderson (1980) validated the principle of leanness when they demonstrated that readers learn the main points of a textbook better from just a summary of the main points than from the text itself, even when the main points were highlighted in the textbook.

Authorities in instructional design point out that lean instructional text facilitates learning (Alessi & Trollip, 2001; Merrill, 1994). That leanness of instructional text can yield learning benefits seems to make sense just on the common-sense principle that eliminating all superfluous elements in the instructional message would tend to increase the visibility of the message and, thus, its instructional potency. Further substantiation of this principle can be found in considering how humans perceive, process, and store information.

Memory, Reading, and Learning

Huitt (2000) outlines four general principles of cognitive psychology that inform a basic information-processing model of memory. First, there is an assumption that the human mental system has a limited capacity, with constraints being placed on the amount of information that can be processed at any given time. These constraints occur because of bottlenecks at specific points in the system. A second principle is that part of the
processing power of the brain is reserved for an overarching control mechanism that oversees the encoding, transformation, processing, storage, retrieval, and utilization of information. Third, our perception and understanding of the world result primarily from two sources of information: the information coming to us through our senses and our stored (i.e., long-term) memories. And the fourth principle is that humankind is genetically predisposed to process and organize information in specific ways. For example, human infants are more likely to look at a human face than any other stimulus within their 12- to 18-inch field of focus, which is apparently an important aspect of the infant's survival.

While no one can claim to have completely deciphered the human memory process, current information-processing theories give us some insight into how we humans perceive, process, and store information, and, thus, learn. The so-called stage model, based on the work of Atkinson and Shiffrin (1968), posits that information is processed and stored in the human brain in three stages: (1) external stimuli enter the sensory memory, (2) information that survives the sensory memory is transferred to the short-term memory, and (3) information that survives the short-term memory is deposited in the long-term memory (Gordon, 1994; Hergenhahn, 1988; Huitt, 2000; Mazur, 1990). The first two stages describe limitations on the processing power of the brain.

In the first stage of memory, the various types of information we receive via our senses are converted into a form of energy that the brain can handle. During this transduction process, an extremely short-lived memory (anywhere from half a second to several seconds, depending on the type of information) is created (Gordon, 1994; Huitt, 2000). If the information does not have an interesting enough feature, or if it does
not activate a known pattern, it will more than likely not survive to be transferred to the short-term memory.

Once in the short-term (or working) memory, the information has our attention. However, the information will survive for only about 15 to 20 seconds before it is dropped, unless it is immediately repeated (Gordon, 1994; Hergenhahn, 1988; Huitt, 2000). If it is repeated, the information will stay available for up to 20 minutes (Gordon, 1994; Huitt, 2000). This is the stage during which Miller's (1956) "magical number seven, plus or minus two" comes into play. Miller's number refers to the apparent limit on the number of items of information that the human brain can, on average, process at any one time: seven, give or take two items. More recent research has demonstrated, however, that that number drops to around five, plus or minus two, if the information item is complex (Gordon, 1994; Huitt, 2000).

Since the human sensory system attempts to process all external stimuli, it "can be easily overloaded by too much stimulation" (Kruse & Keil, 2000, p. 115). Short-term memory is, therefore, highly volatile, owing to its high susceptibility to disruption by distracting stimuli in the environment (Shneiderman, 1998). Visual and/or auditory (i.e., noise) distractions can interfere with the cognitive processing of information (Kruse & Keil, 2000; Shneiderman, 1998). Emotional states, such as anxiety, can also cause loss of information during processing, because preoccupation with whatever is causing the anxiety reduces the amount of processing power available to transfer new information into long-term memory (Shneiderman, 1998). In addition, delays in the transfer of information due to distractions can require that the memory be refreshed (Shneiderman, 1998). Therefore, organization and
repetition are indicated as the most important means for ensuring that information in short-term memory will make it into long-term memory, which is the last stage in this information-processing model (Gordon, 1994; Hergenhahn, 1988; Huitt, 2000; Mazur, 1990). Long-term memory is apparently limitless, storing and organizing information according to one or more of three types of memory structures: declarative, procedural, and/or imagery (Huitt, 2000; Gordon, 1994).

Reading involves both memory and the context within which the learner is learning (Grabinger & Osman-Jouchoux, 1996). Instructional text displayed on a computer screen is acquired, organized, and processed, resulting in a message that is intended to be deposited into the learner's long-term memory. Tinker and McCullough (1962) defined reading as involving:

... recognition of printed or written symbols which serve as stimuli for the recall of meanings built up through past experience, and the construction of new meanings through manipulation of concepts already possessed by the reader. The resulting meanings are organized into thought processes according to the purposes adopted by the reader. Such an organization leads to modified thought and/or behavior, or else leads to new behavior which takes its place, either in personal or in social development. (p. 13)

It is the WBT designer's job to arrange all the text elements on the screen in such a way as to facilitate the learner's perception, reading, and understanding of the instructional message (Alessi & Trollip, 2001; Fleming & Levie, 1993; Grabinger & Osman-Jouchoux, 1996). According to Kruse and Keil (2000), much of the work done
in human-computer interaction "is focused purely on ways to reduce the load on the human user's memory" (p. 110). By understanding how human memory works, we can develop effective strategies for aiding memory and, therefore, improving instructional programs.

Producing lean instructional text is one such strategy. By distilling the instructional message down to its purest, simplest form, the learner's cognitive load is lessened because he/she does not have to filter through extraneous and distracting stimuli. It would stand to reason, then, that the probability of the message being attended to, processed, and deposited into long-term memory would be increased. And if this is so, then the case can be made that a full-page design would better facilitate the production of lean instructional text than would a partial-page design. With less screen real estate to work with, the designer is forced to chunk up and refine the instructional content such that each screen will contain a low-density text message that carries a high instructional value.

Chunking Up to Produce Lean Instructional Text

As Kruse and Keil (2000) point out, "the ultimate goal of training and education is to get relevant information through short-term memory and into long-term memory, where it can be accessed at a later time" (pp. 110-111). To that end, one of Shneiderman's (1998) Eight Golden Rules of Interface Design is "reduce short-term memory load" (p. 75). Again, because humans have a very limited amount of processing capacity, learners can reach their cognitive limit fairly quickly, depending on the amount and/or
complexity of the information they are attempting to absorb. Even though sensory memory can receive a great deal of information, only a very small part of that information will make it into working memory. Gordon (1994) explains:

... it takes cognitive resources to attend to subsets of that information and transform it for use in working memory. The limits in our cognitive resources dictate the amount of information we can transform. Sperling (1960) found that we can only transform 4-5 items within the 1-second time span before information in sensory memory decays or is replaced. The implication is that of all the information a trainee may see or hear in a training program, he/she can only bring in a small subset of items for actual cognitive processing at any given time... In a training environment, the information that gets the most extensive processing will depend on the amount [emphasis his] of information being presented, the salience of various stimuli, the degree to which the information is interesting [emphasis his], and the degree to which the information is called for by short-term or long-term goals [emphasis his] of the trainee. (pp. 131-132)

Again, the conclusion this leads to is that WBT designers should refrain from putting too much information on the screen at one time. There appears to be an expert consensus on the wisdom of constructing lean, chunked-up instructional content (Alessi & Trollip, 2001; Fleming & Levie, 1993; Grabinger & Osman-Jouchoux, 1996; Horton, 2000; Kruse & Keil, 2000; Merrill, 1994; Nielsen, 2000; Piskurich, 2000; Shneiderman, 1998; Tullis, 1997). Horton (2000) implores designers to avoid the "Great Wall of Text" [that consists] "entirely of great, gray blocks of text" (p. 447). He considers this to be one
of the biggest pitfalls in WBT because, to many learners, large blocks of continuous text, especially displayed on a computer screen, are intimidating or boring, taxing their endurance and severely testing their level of motivation (Horton, 2000). Unfortunately, a great many, if not the majority, of WBT programs found on the Web today perpetrate this design flaw. Although there are certainly times when large blocks of continuous text are unavoidable or even desirable (Alessi & Trollip, 2001), this practice does not, in general, follow good instructional design guidelines.

The remedy to this Great Wall of Text problem is to chunk large blocks of information up into smaller, more digestible pieces (Alessi & Trollip, 2001; Fleming & Levie, 1993; Horton, 2000; Kruse & Keil, 2000; Merrill, 1994; Nielsen, 2000; Shneiderman, 1998). Significantly, Merrill (1994) refers to these smaller pieces as "mind-sized chunks" (p. 153). Furthermore, the chunking process facilitates the production of lean instructional text, which, in turn, facilitates learning (Alessi & Trollip, 2001; Merrill, 1994). Designers should aim to design screens that contain only the minimum amount of information necessary to accomplish the purpose of that screen (Alessi & Trollip, 2001; Galitz, 1993; Smith & Mosier, 1986; Tullis, 1997). Tullis (1997) expands on this point:

A designer should ensure that each screen or window contains only the information that is actually needed by the users to perform the expected tasks at that point in the interaction. The temptation to provide additional data just because it is available should be avoided, since extra clutter clearly degrades the user's ability to extract the relevant information. (p. 509)

To Scroll or Not to Scroll

The amount of information that goes on a single page of a WBT program is not of great concern if one does not aspire to produce lean (but potent) instructional content, and/or if there is no concern about ending up with content that is too large to fit on the screen all at once. However, WBT designers intent on developing the most instructionally sound programs, based on what we know or think to be good instructional design guidelines for WBT, have to be concerned with a number of issues related to the quantity (and quality) of information displayed on a WBT screen. If the latter is the case, then screen real estate, screen density, chunking large blocks of continuous text, and generating lean instructional content all become problematic. They become problematic for WBT designers right from the get-go because a decision has to be made about whether or not to design a screen layout that will require learners to scroll. This is so because the ability to scroll has implications for all four aspects of screen design just mentioned.

What is problematic about learners having to scroll? It is problematic if you believe Alessi and Trollip (2001) when they advise CBI and WBT designers to design alternatives to scrolling whenever possible. Scrolling becomes problematic if you believe that it violates any of Nielsen's (2000, 2003) or Shneiderman's (1998) tenets of usability, such as efficiency or user satisfaction. It becomes problematic if you believe the studies of viewer preferences that demonstrate a user preference for shorter rather than longer lines of text. It is problematic if you believe that building it in as a design choice serves as a disincentive to produce lean instructional content, resulting in superfluous material being incorporated into the instructional program that might detract
from the program's effectiveness. Scrolling is problematic if you believe that having to scroll interferes in any way with the learning process, that it might constitute a distracting stimulus in the environment that can interfere with the cognitive processing and retention of information. On the other hand, the decision not to allow scrolling also becomes problematic if you cannot prevent users from changing their Web browser default settings (e.g., font typeface and size). If the ultimate goal of a WBT designer is the construction of a screen layout that facilitates the most beneficial instructional experience possible for learners, then the decision to include or disallow scrolling is of great importance. This decision has great importance not only because scrolling can be problematic in the ways just listed, but also for several other reasons, which are discussed in the following sections.

Comparing Partial-Page and Full-Page WBT Screen Designs

The main difference between partial-page and full-page WBT screen designs can probably be boiled down to the issue of scrolling. A partial-page screen design is, for all intents and purposes, unconstrained in terms of the length of its constituent Web pages, which means scrolling is a planned design feature. A full-page design, on the other hand, is constrained in dimension to the size of a window that, while possibly smaller than the viewable screen area of the computer monitor, should never exceed the dimensions of the screen area. The question of interest in this study is which screen design might have greater instructional benefits, both in terms of learner performance and in terms of satisfaction. Since there appears to have been no previous research conducted on the relative
instructional benefits of the two screen designs in WBT, one must look to other related fields and studies, with an eye toward extrapolating from that literature. That is where the scrolling-versus-paging studies come in with regard to this research.

Scrolling vs. Paging Studies

It should be remembered here that, in the earlier literature, paging often referred to an alternate method of moving around a single page that contained content too large to fit on the screen all at once (Kolers, Duchnicky, & Ferguson, 1981; Piolat, Roussey, & Thunin, 1997; Schwartz, Beldie, & Pastoor, 1983). Some considered it a different form of scrolling over a long page of content, with the difference being that in regular scrolling, movement is in small increments (typically, line by line), whereas paging moves through a page in large increments (roughly one entire screenful of information at a time). This usage of the term was, of course, usually in relation to partial-page interface designs. More recently, however, paging has been used most often in the context of non-scrolling, full-page screen designs, where the user moves through multiple contiguous pages of instructional content linked by hypertext links (Baker, 2003; Bernard et al., 2002; Harrell, 1999; Parsons, 2001).

But it is important to note that even though the earlier paging studies were conducted in relation to a partial-page screen design, the paging condition still shared some of the same qualities of paging as it has been more recently conceived in the context of full-page screen design. For instance, in both contexts, paging results in one screenful of content being replaced in its entirety with another at the press of a key or click of the mouse. Of course, there are important differences that cannot be overlooked and serve to definitively differentiate them. For example, in the context of paging down a
single page, other program features move out of the user's view, whereas all program features remain in view when paging through a full-page interface. Nevertheless, the findings from earlier paging studies conducted using a partial-page interface are still useful for informing the discussion of full-page versus partial-page screen designs.

While there is precious little literature specifically comparing partial-page and full-page designs in WBT, there have been a number of studies that have looked at differences in learner performance, satisfaction, and/or preference outcomes between scrolling and paging in other contexts, such as Web searches (Bernard et al., 2002), online text readability, comprehension, and retention (Baker, 2003; Dyson & Kipping, 1998; Kolers, Duchnicky, & Ferguson, 1981; Piolat, Roussey, & Thunin, 1997), word reading, line searching, and term sorting (Schwartz, Beldie, & Pastoor, 1983), location orientation (Beard & Walker, 1990), the usability of online newspapers (van Oostendorp & van Nimwegen, 1998), and finding information in text passages on a Web page (Parsons, 2001). Most of these studies concluded that paging held an advantage over scrolling, although at least one found the opposite to be true (Baker, 2003).

For those finding an advantage in paging, a primary factor for the differences in outcomes was identified as spatial orientation (or encoding), which involves the learner's building a mental representation of the location of text information [on a page] (Piolat, Roussey, & Thunin, 1997). Other authorities in the fields of instructional and human interface design support this finding (Alessi & Trollip, 2001; Muter, 1996; Severinson Eklundh, Fatton, & Romberger, 1996). Severinson Eklundh, Fatton, and Romberger (1996) explain:

When writing or reading on paper, we make constant use of the spatial arrangement of the text to remind ourselves of its inherent structure. This holds in a local as well as a global sense. By a quick visual inspection of a book in our hands, and by flipping the pages for a few seconds, we get a preliminary feel for the size, structure, and content of the text material. Not only are we guided by those physical cues when approaching a new document, they also enable us to remember the text by its appearance and spatial arrangement. (p. 139)

This same sort of process occurs with electronic text on the screen. These orientations hold up fairly well when paging, because an entire page (screen) is replaced at once, allowing the physical and spatial cues used for orientation to remain largely intact. However, with scrolling, learners frequently lose their place and have to re-orient themselves each time, a tiring and often unmotivating activity. Because scrolling moves down the page incrementally, the spatial encoding that occurs when the learner scans an entire page becomes useless. The physical cues and spatial relationships that learners depend on to orient themselves to where things are on a page have disappeared.

Thus, scrolling-versus-paging studies are relevant to this study because spatial orientation is a factor in both partial-page and full-page designs. The difference is that with the partial-page design, spatial orientation is disrupted to a significantly greater degree than in full-page designs. In the latter, when moving from one page to another, the entire interface (including all operational and navigational features) remains in view; thus, the physical cues and spatial orientation on which learners rely remain entirely intact, the same as they do when leafing through a book. We know that spatial
orientation is a key factor in learner performance and satisfaction because, in partial-page designs, the greater degree of spatial orientation is the reason for the difference in outcomes between scrolling and paging, in favor of paging. Therefore, it seems reasonable to extrapolate that if paging is superior to scrolling by virtue of its greater possibility for spatial encoding, then full-page designs might yield superior learner performance and satisfaction outcomes over partial-page designs, since their design and mode of navigation appear to be inherently more conducive to spatial encoding than scrolling.

Summary

This literature review has been an attempt to do two things: (1) to convey the rationale for investigating which of two WBT screen designs, partial-page or full-page, might hold a greater instructional benefit, and (2) to provide a convincing argument for why this issue is important to WBT instructional design. The dearth of literature specific to the topic of this research in WBT design is, at once, both unfortunate and fortuitous. Although this study must rely on literature from related fields and on related topics, it provides an opportunity to at least shed some light on a fundamental design issue that appears to get glossed over on a regular basis. In a very real sense, it is an opportunity to add one small, but informed, piece to the WBT design puzzle.

Chapter Three

Research Methods

Study Overview

The overarching question addressed by this research was whether or not the type of overall screen design selected by a WBT designer has implications for how well learners learn the material and/or how satisfied they are with the learning experience. In particular, this study was conducted in an effort to determine if there was a significant difference between two types of WBT screen designs with regard to either learner performance or learner satisfaction. For the purposes of this study, the two screen designs in question have been designated as full-page and partial-page, with the distinguishing feature being the latter's necessitating vertical scrolling in order to view all of a WBT page's features and/or content. The full-page design allows the learner to view an entire WBT page at once, but only by limiting the amount of instructional content per page, whereas the partial-page design can provide more instructional content per page but requires the learner to scroll down an indeterminate amount in order to view all of a page's content and/or features.

The study design was originally piloted during the spring of 2004, the results of which led to the modification of some of the data collection procedures and instruments initially proposed for the study (see Appendix B for more information). A second pilot was conducted during January 2005, which led to further instrument refinements. A third pilot study was conducted in March and April of 2005, yielding results that justified
continuing on with the main study. The main study was conducted from April through June 2005. All three pilots and the main study were conducted at a large metropolitan university in the southeastern United States. Quantitative data on participant performance were collected via computer, and qualitative data regarding participant satisfaction with the instructional experience were collected both by computer and through post-session interviews.

The vehicle for this research was a Web-based instructional program entitled Basic Web Page Programming (BWPP), for which both a partial-page version and a full-page version were constructed. One hundred twenty-nine undergraduate students participated in the study. All 129 students came to participate in the study by responding to one of a variety of recruitment notices disseminated by this researcher. (The recruitment methods are discussed in a later section.) Participants scheduled themselves for a study session at a Web site set up specifically for the study. At the beginning of each study session, participants were first randomly assigned into one of the two treatment groups, after which they completed, in turn, a brief online Web Skills Assessment (WSA) program, the BWPP tutorial, and a satisfaction survey. In addition, post-session interviews were conducted with a randomly selected subset (59) of the 129 participants. This chapter describes both the procedures followed and the instruments employed in conducting the study.

Research Design

This study employed a mixed-method design, generating both quantitative and qualitative data. The main phase of the study was experimental, following a factorial design to explore the relationships between a single treatment variable (WBT screen design) in two treatment conditions (partial-page WBT design and full-page WBT design) and two dependent variables (learner performance and learner satisfaction). Participants' BWPP exam scores constituted the performance data for this study. Satisfaction data came from an online satisfaction survey that all participants completed following the WBT exam. A semi-structured post-study session interview conducted with a randomly selected subset of study participants provided further qualitative information.

Study Participants

The target population for this study was undergraduate students at a major southeastern urban university who met two primary criteria: a minimum level of Web proficiency and very little or no experience with HTML (the authoring language for creating Web pages). The study was confined to undergraduate students in an effort to bolster its internal validity. The requirement that participants possess a functional level of Web proficiency was to control for the possibility of confounding effects related to inexperience with using the Web (and, by extension, computers in general). Recruitment materials for this study described this criterion as "adequate Web skills," meaning that [the prospective study participant is] not a complete novice to computers and the Internet/World Wide Web, and that [he/she knows] how to use a Web browser and [is] fairly familiar with how to get around on the Web (see Appendix D for recruitment samples).
PAGE 65

The second primary criterion, little or no experience using HTML, was important since familiarity with HTML could conceivably give some students a performance edge over those who have had no experience with HTML. Thus, in order to control for variability that might be attributable to different levels of familiarity with HTML, students who had significant experience with HTML were excluded from participating in this study. Recruitment materials for this study described this criterion as follows: "You know little or nothing at all about how to create Web pages using HTML by itself. If you are fairly familiar with HTML, even if through the use of a design-view application, such as [Macromedia's] Dreamweaver, [you do not qualify for this study]. However, if you do not know how to create a Web page, or if you somehow create Web pages without ever seeing any of the HTML code, you would be a candidate for [this] study (assuming you meet the other criteria)." Thus, participants in this study were filtered for experience with both the Web and HTML prior to their participation in this study. How participant Web proficiency and level of familiarity with HTML were determined in this study are explained later in this chapter.

Descriptive Statistics for the Total Study Group

One hundred twenty-nine undergraduate students participated in this study. The demographic data collected included gender, age, awareness of HTML (i.e., what it is and what it is used for), and years of experience using HTML. The group as a whole consisted of 44 males (34%) and 85 females (66%) and ranged in age from 18 to 52 years, with a mean age of 21.9 years (SD = 4.86). Table 1 provides more detail regarding the total group's gender and age demographics.
The sample, as a whole, was relatively young, with 109 participants (89%) between the ages of 18 and 24 years. This was not unexpected for a group of undergraduate students, although the presence of older undergraduates was somewhat of a surprise. A frequency table for participant ages can be found in Appendix C.

Table 1
Total Group Gender and Age Demographics

Gender     N            Age Range   M      SD
Male       44 (34%)     18-44       22.8   4.719
Female     85 (66%)     18-52       21.4   4.897
Combined   129 (100%)   18-52       21.9   4.863

A majority of participants (58%) reported having no prior awareness of the HTML Web programming language. This was also true within gender groups, although a higher percentage of females had no prior HTML awareness. A more complete breakdown of prior HTML awareness by gender is provided in Table 2.

Table 2
HTML Awareness by Gender

Prior HTML Awareness   Males        Females     Total Group
No                     24 (54.5%)   51 (60%)    75 (58.1%)
Yes                    20 (45.5%)   34 (40%)    54 (41.9%)
Combined               44 (100%)    85 (100%)   129 (100%)
As a group, 111 (86%) participants reported having absolutely no experience using HTML. Of the 18 who reported some experience using HTML, nine reported less than a year's experience, five indicated 1 to 2 years, one reported 2 to 5 years' experience, and three said they had over 5 years' experience. A more complete breakdown of HTML experience by gender is provided in Table 3.

Table 3
HTML Experience by Gender

HTML Experience   Males        Females      Total
None              37 (84.1%)   74 (87.1%)   111 (86.0%)
Some              7 (15.9%)    11 (12.9%)   18 (14.0%)
Combined          44 (100%)    85 (100%)    129 (100%)

Sample Size and Selection

Prior to the start of this study, a search of the literature for guidance in determining an appropriate sample size yielded only a few studies of comparable concern. Piolat, Roussey, and Thunin (1997) published a single paper describing two separate studies investigating the effects of two types of text presentation (page-by-page vs. scrolling) on participants' "performance while reading and revising texts" (p. 565). In their first experiment, they employed a sample of 54 participants, while in the second experiment their sample was composed of 26. Each sample was drawn from second-year undergraduate psychology students, though there was no specific indication that both samples were drawn from the same population or that any of the individuals participated in both experiments. Bernard, Baker, and Fernandez (2002) sought to determine the best way to display large amounts of information on the Web by comparing paging versus scrolling screen designs.
They used a sample of 18 volunteers, all of whom were subjected to three separate conditions. In another study examining the effects of scrolling on the usability of an online newspaper (van Oostendorp & van Nimwegen, 1998), the sample consisted of 20 (unclassified) students. Schwarz, Beldie, and Pastoor's (1983) study comparing user preference between paging and scrolling screen designs was also conducted with a sample of 20 participants. And finally, Kolers, Duchnicky, and Ferguson (1981) used 20 paid volunteers to compare the effects of scrolling rates on the readability of text on a CRT (i.e., television) screen. Unfortunately, none of these studies provided sufficient information to ascertain how their respective sample sizes were determined and, therefore, could not appropriately be used to inform this study. On an intuitive level, the sample sizes of these studies (54, 26, 18, 20, 20, and 20) would appear to be suspect, especially if one were assuming a .05 alpha level and a medium effect size.

Given the lack of strong precedent in the literature, this researcher turned to Cohen's (1992) power table to determine the sample size for this study. Given that the study participants were randomly assigned to the two treatment groups (see the Data Procedures section below), and assuming a moderate effect size at a power of .80 for a .05 alpha, Cohen's power table recommended that each treatment group contain 64 participants, for a total sample size of 128. This recommendation was followed for the study.
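For readers who prefer to verify such figures computationally, the same per-group n can be recovered with a standard power routine. The sketch below is illustrative only (it was not part of the study's procedure) and assumes Cohen's conventional medium effect size of d = 0.5 for an independent-samples t test:

    from statsmodels.stats.power import TTestIndPower

    # Solve for the per-group sample size of a two-group t test at a medium
    # effect size (d = 0.5), alpha = .05, and power = .80 -- the assumptions
    # attributed to Cohen's (1992) table above.
    n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
    print(round(n_per_group))  # prints 64, i.e., 128 participants in total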
Like many such studies that target the population of university undergraduates, the problematic nature of obtaining a truly random sample made such a prospect for this study severely impractical, if not impossible. Because of limited access to the target population, in conjunction with the limited timetable within which to conduct the research (all study data had to be collected by the end of June 2005), the study sample was obtained via various means of recruitment, essentially on a first-come, first-served basis. Advertising in the university's student newspaper and direct dissemination of handbills at various on-campus locations where students frequented and/or congregated (such as the student center and main library) proved the most productive. Other methods of recruitment included posting recruitment flyers around campus, direct emails to student-led and student-oriented university organizations and to instructors of undergraduate classes, and word of mouth. All recruitment materials except the newspaper advertisement included the general purpose of the study, the criteria for participating in the study, the amount of cash compensation for participating in the study, the average length of a study session, and a Web site URL where prospective participants could get further details, sign up for the study, and schedule a study session. For brevity's sake, the newspaper advertisement included only the compensation amount and the Web site URL. Samples of these recruitment materials can be found in Appendix D.

Cash compensation for participation in the study ($20.00 for a single study session) was employed as a means of generating interest among the university's undergraduate population. The decision to provide monetary compensation stemmed from the researcher's recruitment experiences during the first instantiation of the study, which featured a different Web-based instructional program. More detail about this and other modifications to the original study design is provided in Appendix B.
Regarding the length of session time advertised, the average given of approximately one hour was actually the true average of all length-of-session times, updated in real time. The WBT program collected start and stop time data for every participant's study session, with the difference calculated in minutes and rounded to the nearest minute. At any given moment, the study Web site's home page displayed the average of all length-of-session times currently in the study database, such that the average length of session time was updated with every completed study session. Throughout the entire study, this average remained at about one hour, give or take a few minutes. At the conclusion of the study, the average length of session stood at 63 minutes.
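The arithmetic behind that running figure is simple. The following sketch is illustrative only (the study site itself was a server-side Web application whose code is not reproduced here) and assumes each session record stores a start and a stop timestamp:

    from datetime import datetime

    def average_session_minutes(sessions):
        """sessions: list of (start, stop) datetime pairs from the study database."""
        # Each session length is computed in minutes and rounded to the
        # nearest minute, as described above.
        minutes = [round((stop - start).total_seconds() / 60) for start, stop in sessions]
        # The home page displayed the running mean over all completed sessions.
        return sum(minutes) / len(minutes)

    # Example: two completed sessions of 58 and 67 minutes average to 62.5.
    s1 = (datetime(2005, 5, 2, 10, 0), datetime(2005, 5, 2, 10, 58))
    s2 = (datetime(2005, 5, 2, 13, 0), datetime(2005, 5, 2, 14, 7))
    print(average_session_minutes([s1, s2]))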
As was indicated earlier, participants were included in the study on a first-come, first-served basis, providing, of course, that they met the stated criteria for participation in the study. Virtually all of the participant session scheduling was done automatically via the study Web site, which was programmed to accept no more than 128 total participants for the study. (The total number of study participants came to be 129 because of a suspicion that arose toward the end of the data collection process regarding the integrity of one participant's data. Because this participant took the shortest amount of time to complete a study session and obtained the lowest score on the BWPP exam, there was some concern that he had not made a good-faith effort during his study session, thus rendering his data unreliable. Therefore, as a precaution in case that individual's data had to be discarded, an additional participant was recruited as a possible replacement. Analyses of the BWPP data, both including and excluding the suspicious exam score, eventually proved the concern to be unfounded, and that individual's data was retained. But since the data from the additional recruit had already been collected, it was also included in the study, bringing the total N of the study to 129.)

The details of the session scheduling process can be found in Appendix E, but it is important to note here that the study Web site was programmed to manage participant slot availability on the fly, based on the number of active session appointments in the study database at any given time. Participants scheduled their own session appointment from an online calendar of dates and times prepared by the researcher. They could also cancel and/or reschedule their appointment online themselves. The site automatically adjusted the number of slots available for each session, as well as the number of total available slots for the study. The number of slots available was incremented and decremented in real time to reflect the scheduling or cancellation of session appointments. In like fashion, sessions were automatically closed when all their available slots had been taken and reopened again if any of their scheduled participants cancelled an appointment. Thus, who participated in the study, and in what order, was an effectively random process.

As far as assuring participant suitability for the study, it was stated at the outset of this chapter that, in addition to the undergraduate status requirement, prospective study participants were screened for two other suitability criteria: level of Web proficiency and level of familiarity with HTML. Because it was important to control, as much as possible, outcome variability due to differences in Web skills and/or HTML experience, a premium was put on making sure prospective participants understood and met the suitability criteria.
To this end, the criteria were presented to each prospective participant multiple times before allowing them to participate in the study:

1. In all recruitment materials
2. At the start of the online scheduling process
3. In the online appointment confirmation provided to participants after scheduling a study session (which was also emailed to the participant)
4. Verbally, by this researcher, when participants arrived for their study sessions
5. On the online informed consent Web page that participants had to sign (by clicking their agreement to consent) before they could begin their study session

In addition, this researcher made sure to include the criteria in any other form of communication with prospective participants that might have occurred, such as phone or email contacts. Thus, even if participants never saw any of the recruitment materials (e.g., they learned of the study via word of mouth), the suitability criteria were presented to each individual at least three times before he or she was allowed to participate in the study. With no other way of definitively confirming their suitability for participation, participants were allowed into the study, essentially, on the basis of this self-report honor system.

In summary, the sample selection process employed for this study was fairly random and ensured that only suitable participants were allowed to participate. While a truly random sample selection was all but impossible for this study, the sample selection process implemented was as random as could have been managed under the circumstances. As well, the process of having each participant confirm his or her suitability on multiple occasions was as definitive as could be reasonably achieved.
Measures

The single independent variable in this study, WBT screen design, had two conditions: partial-page and full-page. The two dependent variables were learner performance and learner satisfaction. In order to control for effects deriving from variations in participants' Web experience/proficiency, all study participants had to meet a minimum level of Web proficiency. Participants also were required to have very little or no significant familiarity/experience with HTML, so as to control for variability stemming from significant differences in participants' familiarity with HTML. The screening method for these criteria was presented in the previous section. One hundred twenty-nine subjects were recruited on a first-come, first-served basis for this study. At the beginning of each study session, each participant was randomly assigned to one of the two treatment conditions (the process of which is detailed later in this chapter).

Data Collection Procedures

The Computer Lab

In order to collect performance and satisfaction data for this study, a computer lab was set up in an office on the main campus of the university. The lab consisted of three similarly configured and powered computer workstations. All three workstation boxes were older Pentium II-based computers that had been reconditioned just prior to, and especially for use in, this study.
Each computer was loaded with the Windows 2000 operating system, Internet Explorer 6.0, and McAfee VirusScan, and all were protected by the same server firewall. All three workstations were configured with a standard Windows enhanced keyboard, a two-button wheel mouse, and a mouse pad. While all three computers were configured with a sound card, none were equipped with external speakers. The only difference of note between the three workstations was that two of the systems were equipped with 17-inch CRT monitors, while the third was equipped with a 15-inch CRT monitor. This was because, just prior to the start of the study, the original 17-inch monitor for the third workstation malfunctioned, and there were no other 17-inch monitors available to replace it. Though it was preferable to have identical workstations in order to control for possible confounding differences attributable to inequitable equipment, there was no evidence that the difference in monitor size impacted the outcome of the study.

All three workstations were connected via Ethernet card to the university's network, through which they accessed the World Wide Web and, thus, the study's Web-based measurement instruments (i.e., the Web Skills Assessment program, the BWPP tutorial, and the online satisfaction survey). The entire study Web site, including all online measurement instruments, was located on a protected university server. All but the session scheduling pages of the study Web site were restricted, requiring a username and password for access.
All three workstations were located in the same room and set in a row against the same wall. However, stacks of heavy, rectangular storage boxes were positioned between each workstation such that anyone working at one could not see the monitor screens of either of the other two. The computer lab was also equipped with a couch and chairs, as well as a fourth computer station at which the study proctor could work during the study sessions and even monitor the progress of study participants. The three computer workstations were set apart from the rest of the room by a series of tall bookcases, with a gap between two of the bookcases serving as a passageway. The lab itself was located off of a small alcove in a fairly quiet, isolated area of the building. The alcove, which was used for some of the post-session interviews, was equipped with a pair of chairs and a coffee table.

Pilot Studies

As was briefly mentioned at the beginning of this chapter, three separate pilots were conducted to test the study design prior to initiating the main study. This section provides only a brief synopsis of the pilot study sequence. Descriptions of the instruments mentioned in this section are provided later in this chapter.

The first pilot, conducted during April and May of 2004, involved 24 participants and employed an online tutorial for a standard clinical assessment tool used by mental health professionals, entitled The Global Assessment of Functioning Rating Scale (GAF). The Cronbach's alpha calculated for the GAF exam was unacceptable (.40), and that of the satisfaction survey was not much better (.13). These poor outcomes resulted in a decision to replace the GAF tutorial as the instrument for generating performance data.
Modifications to some of the other data collection procedures and instruments were also indicated. Some of these alterations represented a substantial departure from the study protocol originally proposed. Appendix B provides more detail about these modifications.

The second pilot, in which the first instantiation of the BWPP tutorial appeared, was conducted in January 2005 with 12 participants. This first rendition of the BWPP consisted of six content sections, a review section, and a 22-item final exam. While the reliability coefficient for the 22-item exam was acceptable (.76), the Cronbach's alpha calculated for the satisfaction survey (.45) was actually worse than that of the first pilot. The main results from this pilot were the deletion of the content section on creating tables in a Web page, modification of the BWPP tutorial final exam to reflect the deletion of that section, and a complete reworking of the satisfaction survey instrument. The reason for abridging the BWPP content was that the study sessions, while shorter than those of the GAF tutorial, still averaged about an hour and a half to complete. This was a bit worrisome because, after the first pilot test, there was speculation that the long session times (two hours on average) might have negatively impacted participants' motivation and, thus, performance. Although there was no way to verify this suspicion, it seemed to be a reasonable possibility. By removing the section on creating tables, the session time was reduced to about an hour.

The third and final pilot for this study was conducted in March and April 2005 and involved 10 participants. The reliability coefficient for the BWPP exam scores (.75) was considered reasonable for an 18-item exam, and the Cronbach's alpha computed for the satisfaction survey (.89) was an acceptable improvement over the first two pilots.
Since no significant modifications were indicated for any aspect of the study design, the decision was made to launch the main study.

Initiation of the Main Study

The main study, initiated in April 2005, was essentially seamless with the conclusion of the third pilot test. In fact, with the permission of this researcher's doctoral committee, the data generated from that 10-participant pilot study were folded into the main study. Thus, there was a need to recruit only 118 more participants in order to reach the target sample size of 128. (Again, the final N of this study was 129, for reasons discussed earlier in this chapter.) The design and execution of the main study mirrored the protocol established by the third pilot. This protocol is described in the following several sections and should be understood as also describing the protocol of the third pilot study.

Study Session Preparation

Every study session was prepared and proctored by the principal investigator of this study. Prior to each study session, each workstation was prepped in the same manner: after an initial check to ensure it was functioning properly, it was logged into the study site via Internet Explorer and set to display the participant login screen. When participants arrived for a session, the suitability criteria were recited to them, and they were asked if they met those criteria. Those who stated they did not meet one or more of the criteria were told they could not participate in the study, and their appointment was cancelled in the study database, which automatically incremented the number of available slots on the study Web site's sign-up page by one.
Fortunately, this scenario occurred only a couple of times. Those who did meet the participation criteria were given an overview of the session protocol and some general instructions and information, and were asked if they had any questions (see Appendix F for this prep sheet). The next step, then, was to randomly assign each participant into one of the two treatment groups. The procedure for random assignment is discussed next.

Random Assignment of Participants into Treatment Groups

The assignment of participants into the two treatment conditions was guided by two concerns: randomization of the process and conformity to the Cohen's power table recommendation of at least 64 participants per treatment group (see the section on sample size above). To these ends, this researcher devised an assignment method that resulted in an equal number of participants in both treatment groups, while retaining a sufficient degree of randomization to ensure the two treatment groups were equivalent.

The random assignment of participants to treatment groups was accomplished through a relatively simple lottery system. In preparation for the study, 128 white poker chips were each coded with a unique six-digit number (e.g., 163425) and placed in a black cloth bag. The six-digit code served three purposes, the first of which was to designate one of the two treatment groups. Half of the 128 codes ended in one designated digit, representing the full-page condition, and the other half ended in a second designated digit, representing the partial-page condition.
The second purpose of each chip's unique code was as a login code to be entered on one of the computer workstations in order to gain access to the online study site. Each code could be used only once to log into the study site because, after login, its third function kicked in; namely, to serve as a unique user ID for that session. All data generated under that user ID were stored as a separate record in a database table located on the study Web site's server.

After study participants were prepped for their session, the bag of poker chips was shaken vigorously for a few moments. Then, each participant was instructed to reach into the bag without looking and pull a single poker chip from the bag. Once a chip was selected, it was never placed back into the bag, in order to prevent a participant from drawing a previously used code. The participants were told only that the number on the chip was their login code for the study. However, as discussed above, the codes actually determined the treatment group to which each participant was assigned. Participants who logged in with codes ending in the full-page digit received the full-page version of the BWPP tutorial, while those logging in with codes ending in the partial-page digit received the partial-page version.
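In software terms, the chip scheme amounts to a simple mapping from code suffix to treatment condition. The sketch below is purely illustrative; the study used physical chips and a server-side login script, and since the specific suffix digits are not reproduced in this document, "1" (full-page) and "2" (partial-page) are assumed values:

    import random

    FULL_PAGE_SUFFIX = "1"     # assumed value; the actual digit is not specified here
    PARTIAL_PAGE_SUFFIX = "2"  # assumed value

    def make_login_codes(n_total=128):
        """Generate n_total unique six-digit codes, half per condition."""
        middles = random.sample(range(10000), n_total)  # unique four-digit middles
        codes = []
        for i, middle in enumerate(middles):
            suffix = FULL_PAGE_SUFFIX if i < n_total // 2 else PARTIAL_PAGE_SUFFIX
            codes.append(f"1{middle:04d}{suffix}")      # the first digit was always a one
        return codes

    def condition_for(code):
        """Map a login code to its treatment condition, as the login script did."""
        return "full-page" if code.endswith(FULL_PAGE_SUFFIX) else "partial-page"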
Random Selection for Post-Session Interviews

Once participants were assigned to their treatment groups, another drawing was conducted to select one of the participants in that study session for the post-session interview. (If there was only one participant in a study session, he or she was selected for the interview by default.)

For this drawing, one chip was placed into a small white bag for every participant in the study session. The chips were uncoded, with only one being red and the rest blue. So, for example, if there were three participants in a study session, the bag would contain one red and two blue chips. The bag was shaken vigorously for a moment, after which participants took turns drawing a chip out of the bag. Whichever participant drew the red chip would be interviewed following his or her completion of the BWPP tutorial. After the drawing, the red and blue chips were retrieved from the participants.

Informed Consent

Once the interview selection was finished, the participants were told to sit at one of the computer workstations and to log into the study site using the code on their white chip. The participant slated for the post-session interview, however, was instructed to replace the first digit of the code (which was always a one) with a nine. Doing so caused that person's record in the study database to be flagged as an interviewee. It also triggered a pop-up message at the end of that person's computer session, reminding him or her that he or she was slated to be interviewed.
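A minimal sketch of that flagging convention, assuming the login script simply inspected the leading digit (the helper names are hypothetical):

    def flag_as_interviewee(code):
        """Return the code a selected interviewee actually typed in: 9 replaces the leading 1."""
        assert len(code) == 6 and code.startswith("1")
        return "9" + code[1:]

    def is_interviewee(code):
        # The study site flagged the session record and queued the end-of-session
        # reminder whenever a login code began with a nine.
        return code.startswith("9")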
The first thing presented to participants after logging in was a consent-to-participate form. The consent form gave the short title of the study and the study's Institutional Review Board status, and outlined the purpose of the study, the benefits for those participating in the study, compensation for the study, confidentiality and the use of data collected, and the consequences of not participating in the study, which were simply that the individual would not receive the benefits described on the form. (See Appendix G for the contents of the consent form.) Participants were instructed to read the contents of the form, then to click "yes" if they wished to participate in the study or "no" if they declined to participate. All participants who showed up for their scheduled study session consented to participate in the study.

The Web Skills Assessment Program (WSA)

After the consent form, participants completed the WSA program, which is described later in this chapter. The initial screen told participants that the program was designed to interact with them by name, then instructed them to create a code name (i.e., something other than their real name, so as to maintain their anonymity). The WSA program collected data on participant gender, age, prior level of HTML awareness (i.e., what it is and what it is used for), and experience using HTML. It also generated data on how well participants performed the various tasks presented to them during the program. It is important to note that, unlike the BWPP tutorial, there was only one version of the WSA program. Thus, all participants experienced the exact same WSA program, regardless of which treatment group they were assigned to.

The Basic Web Page Programming Tutorial (BWPP)

Immediately upon completion of the WSA program, participants were automatically taken to the BWPP tutorial, which is also described in detail later in this chapter. Participants' experience of this program varied according to the treatment group to which they had been randomly assigned. Those assigned to the full-page condition could see the entire program interface at once, meaning that all navigation controls and features of the program could be accessed without having to scroll down.
Of course, this arrangement limited the amount of instructional content that could be displayed on a page, resulting in more pages per section than in the partial-page version. Clicking forward (or back) through the pages of the program was analogous to turning pages in a book.

Participants assigned to the partial-page version of the BWPP tutorial saw a similar interface in terms of how the controls and features of the program were configured. The only difference in this version of the tutorial was that a good deal more instructional text was presented on a page, relative to the full-page version. This resulted in fewer pages per section, but required the participant to scroll down each page in order to view all of the page's contents, as well as the program's navigational controls and features menu.

Participants in both versions ran into a few program errors, requiring them to perform a task in order to correct the error and get the program back on track. For instance, after clicking on to the next page in the "Images" section, they were met with an error message stating that the image on the page could not be found and instructing them to notify the system administrator by clicking on the "Send Email" button at the bottom of the page. Such errors were intentionally programmed into the courseware in an effort to force participants to utilize features of the program that they might not otherwise use during the course of the program, such as the "Previous (page)" button or the "Send Email" button. These errors were interspersed through the BWPP tutorial at the same locations in each of the two versions. The purpose of these manipulations was to give participants a fuller experience of the program interface to reflect upon when answering the satisfaction survey and/or post-session interview questions.
Because introducing errors into the program flow risked biasing participants against the program, the number of these designed errors was kept very limited.

Following the five content sections and the review section of the program, participants completed the final exam. However, participants were required to complete the satisfaction survey before learning their exam score, in an effort to mitigate any effect the exam score might have on their answers to the survey. All participant exam and satisfaction data were entered into their respective database records. As discussed earlier, the exam score was the performance measure, and the satisfaction survey the primary measure of learner satisfaction for this study.

Upon completion of the BWPP tutorial, participants not slated for the post-session interview were given $20.00 in cash and asked to sign a payment receipt. They were thanked for their participation, then dismissed. Their coded white poker chips were taken out of circulation by placing them in a box.

The participant who was selected for the post-session interview remained in the computer lab. If this participant finished the BWPP tutorial before any of the other participants in that session, the interview was conducted just outside the lab in an adjacent alcove. Otherwise, the interview was conducted in the lab itself. The next section describes the interview process.

The Post-Session Interview

All interviews were conducted by this researcher in the same manner, conforming to the question order in the interview guide described later in this chapter (see Appendix H for this guide).
The interview format was semi-structured, allowing for follow-up questions to informants' comments. No time limit for the interview was set. The interview was recorded on a digital audio recorder. Each interview was prefaced with the date of the interview and the informant's user ID code from his or her white poker chip. This was done to maintain the anonymity of the informant, but it also provided a means for matching the interview with that person's session data record in the study database, which allowed for cross-referencing during the analysis process. Upon conclusion of the interview, the informant was given $20.00 in cash and asked to sign a payment receipt, after which he or she was dismissed. The digital audio file of the interview was transferred to this researcher's desktop computer. The original audio file on the digital recorder was then erased.

Confidentiality and Use of Data Collected for This Study

None of the data generated by any of the participants in this study could be identified with a particular individual in any way. The six-digit login/user ID code with which participants logged into the study site was the only unique identifier for all data entered into the study database or recorded in a digital audio file. Those codes had absolutely no connection to any participant's identity. The data generated from this study were accessed only by this researcher, one research assistant, and members of this researcher's doctoral committee on an as-needed basis. Neither the research assistant nor the doctoral committee members could identify any participant of this study based on the data to which they had access.
The primary use of the participant data collected during this study was for the writing of this dissertation report, although the data may be published in other venues in the future. All study data will be retained by this researcher on a CD-ROM indefinitely. Finally, confidentiality of participant identity and data extended, as well, to all data collected during the three pilot studies preceding the main study.

Data Collection Instruments

The variables of interest to be measured in this study were learner performance and learner satisfaction. Learner performance was measured as the score on the BWPP tutorial's final exam, while a satisfaction survey was the primary instrument used to measure learner satisfaction. Post-session interviews were conducted with 59 study participants, generating some additional satisfaction-related data, as well as some perceptual data pertaining to elements of learning through Web-based instructional programs.

Other, primarily demographic, data were also collected for each participant at the beginning of his or her study session. More specifically, gender, age, prior awareness of HTML (i.e., what it is and what it is used for), level of experience using HTML, and length-of-session data were collected during each study session, although not originally with the intention of using any of them for analysis purposes. Instead, these data were originally intended as a second-level check of each participant's suitability for the study; in other words, as a control for variability in outcome due to differing levels of participant Web skills and experience with HTML. In the end, because of the multiple self-report mechanisms, it was deemed unnecessary to use these data as a filtering mechanism for participant suitability.
Nevertheless, the demographic data did prove to be useful in, among other things, determining the equivalence of the two treatment groups. The following subsections discuss the instruments used to measure each of these data points, beginning with the variables of primary interest first.

Participant Demographics: The WSA Program

The Web Skills Assessment program (WSA), as a data collection instrument, cannot be dealt with in as straightforward a manner as the other instruments employed in this study. It is, essentially, a historical artifact from a previous incarnation of this study. As such, it requires some context and a bit of a preface.

The WSA program was developed by this researcher originally as a primary filter for participant suitability for the first pilot study, conducted in April and May of 2004. It consisted primarily of a set of questions and tasks representing some of the basic concepts and activities with which one possessing functional Web skills (and, by extension, functional computer skills) should be familiar. The Web concepts and skills targeted in the WSA program were familiarity with Web forms and Web form elements, point-and-click mouse skills, the ability to navigate among, orient oneself within, and manipulate multiple open windows, and familiarity with scrolling. Asking participants to enter their gender and age was simply a convenient way of having them interact with form radio buttons and textboxes. The WSA concluded with two questions pertaining to participants' level of familiarity with the topic covered in the study's main tutorial, which, as was mentioned earlier, was originally the GAF tutorial. It also entered the session start time for participants into their respective database records, which was used to help compute how long it took them to complete their study session (i.e., their length-of-session data). (See Appendix I to view the WSA instrument.)
The original idea was to have all prospective study participants complete the WSA program before being allowed to participate in the study. Only those who performed adequately on the WSA, or those who indicated little or no familiarity with and/or experience using the GAF, were to be allowed to participate in the study. However, before a grading rubric for the WSA could be developed and implemented, it was decided that the process of having prospective participants show up at the study lab with no guarantee they would be allowed to participate would probably be an ineffectual way to recruit study participants. Even though the idea to use the WSA as a control for Web skills and familiarity with the GAF was dropped, the program was left in the study protocol, primarily because it had already been integrated into the study Web site and would have taken too much time to programmatically untangle and remove. This decision was made more palatable by the WSA's virtues of being a very short program and harmless to the rest of the study. The fact that it collected data that might prove to be useful later was considered a potential bonus. When the GAF tutorial was replaced by the BWPP tutorial, it was a simple matter to modify the WSA's last two questions to refer to the topic of HTML.

For this study, then, the WSA provided the demographic data of gender, age, prior awareness of HTML (i.e., what it is and what it is used for), and experience using HTML. While these demographic data were not originally intended to be, nor specified as, variables of interest for analysis purposes, their collection did allow for the investigation of several other interesting relationships, such as exam score by HTML experience.
Finally, despite the fact that the "Web Skills Assessment" moniker might imply that it was designed to measure participants' level of Web skills, the WSA was never used as a measurement instrument. Therefore, there is no validity or reliability report to offer for this instrument.

Learner Performance: The BWPP Exam

The measure for learner performance in this study was participants' scores on the BWPP exam. It should be noted here that using the BWPP exam score as the study performance measure was, perhaps, a less direct way of gauging whether or not participants actually learned the material. A more direct measure would have been to have participants actually construct a Web page upon completion of the tutorial and score it according to an established rubric. However, doing so would have been quite problematic and ultimately impractical. It would have required that the scoring of the task be carried out either by the BWPP tutorial itself, the programming of which was beyond the capabilities of the researcher, or by the researcher, which would have extended the already lengthy session time to an unreasonable duration.

The BWPP tutorial was a replacement for the original WBT program used in the first pilot study conducted in April and May 2004 (see Appendix B for more information). Whereas the original WBT program involved highly subjective decision making during its exam, producing highly unreliable data, the BWPP courseware provided a tutorial on a fairly straightforward topic, with the promise of generating much more reliable data.
The BWPP tutorial used in this study was actually a much pared-down version of a CBI program entitled Internet Programming (IP) that was developed by Tina Majchrzak for her 2001 dissertation research with undergraduate students in an instructional technology program (Majchrzak, 2001). The IP program's selection as a replacement for the original WBT program was due in large part to the solid reliability coefficient (r = .81) of its posttest. The intact IP content proved to be too extensive and, thus, time-prohibitive for the purposes of this study. Therefore, with the permission of Dr. Majchrzak, this researcher culled out certain sections of the IP courseware to create a much shorter program focusing solely on how to create a very basic Web page using HTML. By the end of the development process, the tutorial component of the BWPP consisted of only five of the IP courseware's original 15 sections: Introduction to HTML, The HTML Document Structure, Logical and Physical Tags, Lists, and Images. (See Appendix J for a more detailed description of the BWPP tutorial.)

Because the tutorial portion of the BWPP represented only five of the IP courseware's original sections, the final exam for the program had to be adjusted accordingly. Only 15 of the 36 IP posttest questions related to the five sections in the BWPP tutorial. A reliability test of Majchrzak's study data on those 15 questions yielded a rather marginal Cronbach's alpha of .68. At the suggestion of Dr. Majchrzak, three of her study's retention test questions (all relating to the BWPP tutorial content) were added to the 15 posttest questions, and another reliability test was conducted for her study data.
The Cronbach's alpha calculated for all 18 questions combined was a more respectable .72, which was a defensible reliability coefficient for an 18-item instrument intended to measure learner performance. Thus, the BWPP final exam consisted of 18 multiple-choice items pertaining only to the five content sections of the BWPP tutorial. Each of the test questions had four possible answers to choose from. A score of 78% (i.e., at least 14 out of 18 questions answered correctly) was considered a passing score. This exam score was used to measure participant performance in this study.

Finally, the BWPP exam was administered in the exact same format as the rest of the BWPP tutorial, respective to the two screen design treatments. The exam items were constructed in the exact same way and followed the same order for both the full-page and partial-page treatment groups. The only difference between the treatment conditions was that, for the full-page group, each exam item was presented one at a time on a separate, non-scrolling page, whereas all 18 items were displayed on the same, scrollable page for the partial-page group. The final exam items can be viewed in Appendix K.

Since the purpose of this study was to compare the learner performance and satisfaction effects of two different Web-based training screen designs, two versions of the BWPP tutorial were constructed. The first version produced was the full-page design, where no vertical scrolling was required in order to see the entire page content. Once the full-page version was validated, a partial-page version was then constructed. Every effort was made to ensure that the partial-page version was the operational (e.g., its feature set and navigation set) and content equivalent of the full-page version. The only difference was that it provided more instructional content per page by coalescing a number of the full-page version's pages into a single page. This, of course, meant that one had to scroll down an indeterminate amount in order to view all of a page's content and/or features. For a graphic comparison of the two versions, see Appendix A.
Validity

The validity of the scores from this instrument as an acceptable measure of performance rests on the precedent of Majchrzak's IP posttest. In her dissertation, she documents in detail the process by which she established the validity of the IP posttest (Majchrzak, 2001, pp. 55-60). Given that all 18 items of the BWPP final exam were taken verbatim from Majchrzak's IP posttest and that each of the BWPP tutorial's five content sections was represented in the exam, it was reasonable to assume that the BWPP exam inherited the construct and content validity of the IP posttest. The tutorial and exam components of the BWPP were reviewed by Dr. Majchrzak throughout the development process, with her providing a good deal of valuable editorial and design input. When the BWPP tutorial was finally complete, Dr. Majchrzak communicated her satisfaction with the program's fidelity to her original IP content, as well as her opinion that the BWPP courseware constituted a coherent, well-designed instructional program on basic Web page programming using HTML. The contents of her email containing her approval can be found in Appendix L.

In addition to Dr. Majchrzak's positive assessment of the BWPP tutorial, a content analysis of the tutorial was conducted by five independent reviewers: two instructional technology faculty members and three advanced instructional technology doctoral students. All of these reviewers had expertise in instructional design, and four had expertise in using HTML to create Web pages. All reviewers agreed both that the BWPP tutorial adequately represented the domain of Web page construction using HTML and that the final exam sufficiently sampled the tutorial content.
Therefore, the BWPP tutorial final exam was found to be acceptable as an instrument for measuring learner performance in this study.

With the validity of the full-page version already established, the partial-page version was subjected only to a verification review. This process involved three independent reviewers, its purpose being to verify that the organization, structure, instructional content (i.e., text, graphics, and interactions), final exam items, and supplemental features of the two versions were exactly the same. Since the Learner Satisfaction Survey was integrated into the tutorial, its items were also reviewed. The verification was performed by having each reviewer go through both versions of the BWPP tutorial simultaneously and note any discrepancies. This was accomplished via a workstation that had been outfitted with two computers, each set up to run one of the two versions. Upon completion of the review, all three reviewers verified that the two versions of the tutorial were identical except for the amount of text presented on a page.

Reliability

Apart from the reliability test using Majchrzak's study data described above, two other reliability tests of the BWPP exam were conducted. The first test was performed on data from the 12 participants who took part in the pilot study conducted during January 2005. At that time, the tutorial component of the BWPP included a sixth section on creating tables in a Web page, and the exam consisted of 22 items, reflecting the expanded curriculum. With the exception of the satisfaction survey, the study design was the same as for the main study. After eliminating the four questions relating to tables, a Cronbach's alpha of .75 was computed for the remaining 18 test items.
After modifications to the satisfaction survey were completed (see Appendix B for more information about these modifications), a third, 10-participant pilot study was carried out during March and April of 2005. The Cronbach's alpha for that pilot's exam data was calculated as .75. With reliability coefficients of .72, .75, and .75 across three separate samples of participants, it was concluded that the BWPP exam score reliability was fairly stable and of sufficient magnitude to warrant its use in the main study.
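For reference, the alpha statistic reported throughout this section can be computed directly from an item-response matrix. The sketch below is a generic illustration of the standard formula, not the actual analysis script used in the study:

    import numpy as np

    def cronbach_alpha(items):
        """items: (n_participants, n_items) array of item scores (e.g., 0/1 exam items)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        # Standard Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances
        # divided by the variance of the total scores).
        return (k / (k - 1)) * (1 - item_variances / total_variance)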
Learner Satisfaction: The Learner Satisfaction Survey

Learner satisfaction was measured primarily by an online, 10-item self-report of participants' attitude toward the design of the BWPP tutorial they had just completed. Each survey item was constructed as a concise, positively phrased statement that characterized either the program design using an adjective (e.g., "The program design was user-friendly") or the participant's general disposition toward the program design (e.g., "I liked the way the program was designed"). Participants were asked to rate their level of agreement with each statement on a five-point Likert scale, with one being "strongly disagree" and five being "strongly agree" (see Appendix M for all 10 items). Participants also had the option to submit comments for each of the survey items, as well as to submit final comments regarding any aspect of the program interface.

The satisfaction survey was strategically integrated into the study session protocol as a requirement for completion of the WBT program. It was presented to participants immediately after all their exam answers had been submitted, but before they were given their exam scores. The perceived benefits of this arrangement included not only an assurance that each participant would complete the survey, but also that a participant's reported level of satisfaction would not be influenced by his or her exam score.

Taken as a whole, the survey instrument was designed to elicit only participants' overall level of satisfaction with the BWPP tutorial design. The survey items pertained, essentially, only to a general characterization of the program design, rather than to any specific feature of the program design, such as the presence or absence of scrolling or the amount of instructional text displayed on a page. The goal was to generate a measure that could be used to detect any significant differences in satisfaction level between the groups. A more pointed inquiry into participants' perceptions regarding the impact of program design on learner satisfaction (and performance) was reserved for the post-session interviews, which are discussed below. Finally, like the BWPP exam, the construction and order of the satisfaction survey items were identical for both treatment groups, with the only difference being that each survey item was presented on a separate page for the full-page group, whereas all 10 items were displayed on the same page for the partial-page group.

Validity

The validity of the satisfaction instrument for this study was established through four independent content analyses. Two university faculty members with instructional design expertise and two advanced instructional technology doctoral students with substantial real-world experience in designing instructional courseware were asked to assess the content validity of the satisfaction survey. At least three of the four reviewers had considerable experience in constructing survey instruments. No changes were made to the satisfaction survey based on their reviews.
Reliability

The reliability of scores obtained from the satisfaction survey was established from the results of the pilot study conducted during March and April of 2005. An analysis of the pilot data yielded a Cronbach's alpha of .89 for this instrument.

The Post-Session Interview

One participant from each study session was randomly selected for a post-session interview. As a result, 59 interviews were carried out during the course of this study, each one conducted by the study's principal investigator. All interviews were recorded on a digital audio recorder and transferred to a desktop computer, with each audio file's name consisting of that participant's study session user ID code and the interview date (e.g., 314225_[05-05-23].wav). The random selection process and the interview protocol were described earlier in this chapter.

The interview was semi-structured, progressing through a series of ordered questions but allowing the interviewer to pose follow-up questions. The questions revolved around the following:

1. Whether or not the participant liked the interface design of the BWPP tutorial
2. The participant's perceptions pertaining to the impact a WBT's screen design might have on learner performance and/or satisfaction
3. The participant's preferences regarding the amount of text displayed on Web pages, especially instructional Web pages
4. The participant's perceptions regarding the impact scrolling in a WBT program might have on learner performance and/or satisfaction
5. The participant's preferences with regard to scrolling in WBT programs
A list of the interview questions can be found in Appendix H. Each question was presented in two parts, with the first part phrased as a closed-ended question (e.g., "Overall, did you like the program interface of this instructional experience?") and the second part being a request for the participant to elaborate on his or her answer to the first part. The interviews solicited participant perceptions and preferences regarding the primary research questions posited in this study; specifically, whether or not the interface design of WBT programs has an appreciable impact on how well students learn the instructional content of the program and/or how satisfying the instructional experience is. Whereas the issue of scrolling in a WBT program (a prominent feature of concern at the base of this study) was approached only obliquely in the BWPP tutorial and not at all in the satisfaction survey, it was given particular focus in the post-session interviews.

The audio recording of each interview was reviewed by the study's principal investigator and transcribed into a database via a Web-based form created for that purpose (see Appendix N). The data generated during the post-session interviews were qualitative in nature. However, because the interview questions were initially presented as closed-ended questions, it was possible to categorize and codify participants' responses to each question. The discrete responses participants gave to each question fell into one of four categories: no, yes, it depends, or no preference. These responses were numerically coded (e.g., no = 0, yes = 1, it depends = 2, no preference = 3) and entered into the database. So coded, it became possible to conduct certain types of quantitative analyses with the interview questions.
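As a simple illustration of that coding step (the mapping values follow the example given above; the helper name is hypothetical):

    # Numeric codes for the closed-ended portion of each interview question.
    RESPONSE_CODES = {"no": 0, "yes": 1, "it depends": 2, "no preference": 3}

    def code_response(answer):
        """Normalize a transcribed answer and return its numeric code."""
        return RESPONSE_CODES[answer.strip().lower()]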
The anecdotal aspect of the post-session interview data (i.e., participants' explanations of their responses) provided deeper insight into participants' perceptions and/or preferences regarding both general and specific aspects of WBT interface design. These perceptions and preferences were used to cast the results of this study's quantitative analyses into clearer relief. A sample transcribed interview can be found in Appendix O.

Interview Inter-Rater Reliability Procedure

Inter-rater reliability for the post-session interview data was established by having an independent research assistant record data for a random sample of the interviews, then conducting a cross-tabulation analysis between the originally recorded interview data and the assistant's recorded data. The procedure by which both the original and the reliability data were recorded was essentially the same: both data sets were entered while the respective rater was listening to digital audio files of the interviews, and the same database entry form was used. The only difference was that the reliability data were entered into a separate (but identical) database table. The research assistant was trained in how to enter the interview data into the database. She was also trained in how to operate the audio player computer application in tandem with the database entry form.
83 The participant interviews used to determine inter rater reliability were randomly selected by first writing the userid codes for all 59 interview audio files on separate slips of paper, wadding the slips up placing them in a shoebox then shaking the shoebox vigorously for a few moments The research assistant was then instructed to select 15 of the paper slips without looking. This s ample of 15 interviews represented 25% of all interviews. Interview Inter Rater Reliability Outcome The original and reliability data recorded for each interview question in the inter rater reliability sample were cross tabulated to determine the level o f agreement This yielded a reliability ranging from 80% to 100% across all 12 interview questions, with three questions posting a 80% reliability, two question posting a 93% reliability, and seven questions posting a 100% reliability. Thus, using the perc ent agreement calculation for inter rater reliability (dividing the number of total observations by the total number of agreements between the original and reliability raters) yielded an average reliability of 93.8%. On this basis, it can be concluded tha t the originally recorded interview data were reliable. Data Analysis This study generated data that enabled an analysis of the two primary research relationships of concern in this study: the impact a WBT programs screen design might have on learner pe rformance and the impact a WBT programs screen design might have on learner satisfaction. Data on participant performance were collected as percent scores


Data Analysis

This study generated data that enabled an analysis of the two primary research relationships of concern: the impact a WBT program's screen design might have on learner performance and the impact it might have on learner satisfaction. Data on participant performance were collected as percent scores on the BWPP tutorial's final exam. Participant satisfaction data were collected as five-point Likert scale ratings for each of 10 survey items pertaining to participants' level of satisfaction with the program design. The 10 satisfaction survey responses were combined to give a single satisfaction measure (i.e., the mean of the 10 responses). The satisfaction survey also generated some qualitative data regarding participant satisfaction in the form of optional comments submitted by a number of participants. In addition, a post-session interview conducted with a randomly selected subset of study participants yielded some perceptual data regarding the impact a WBT program's screen design may or may not have on learner performance and satisfaction, as well as preference data pertaining to the two screen designs of interest (i.e., full-page and partial-page). This information allowed for a keener insight into participants' attitudes, beliefs, and preferences regarding WBT screen design, whether or not it has any impact on learner performance and/or satisfaction, and, if so, in what ways and to what degree. Participant interview responses were also coded in such a way as to allow for some quantitative analyses to be conducted on the data.

Although no other relationships were specifically targeted for analysis in this study, other data collected incidental to the main thrust of the study allowed for additional investigations. In particular, gender, age, prior awareness of HTML (i.e., what it is and what it is used for), experience using HTML, and length-of-session data provided opportunities to explore their relationship to the two primary dependent variables of learner performance and learner satisfaction. The two primary questions investigated in this study are reiterated below, along with the particular analysis methods employed to investigate them:


1. Is there a significant difference in performance between learners using a scrolling, partial-page WBT design and those using a non-scrolling, full-page WBT design? An independent t-test was employed to investigate whether WBT screen design had any significant impact on learner performance, with the dependent variable being the BWPP tutorial exam score.

2. Is there a significant difference in satisfaction between learners using a partial-page WBT design and those using a full-page WBT design? An independent t-test was also used to look at the relationship between WBT screen design and learner satisfaction. The dependent variable for this analysis was the mean of the satisfaction survey responses.

In addition to these two primary study questions, two other lines of investigation were also pursued. One was the possible effects of several variables (gender, age, prior awareness of HTML, experience using HTML, and total session time) on both the BWPP exam score and satisfaction level, for which a multiple regression was performed. The other was a chi-square analysis of the coded post-session interview data to see if a significant difference existed between how each treatment group responded to the questions. The next chapter presents the results of these analyses.


Chapter Four

Results

Introduction

This chapter provides the results of the analyses conducted on the data generated during this study. Descriptive statistics of the two treatment groups are provided first. This is followed by the analysis results for the two primary research questions posited by this study: is there a significant relationship between WBT screen design and (a) learner performance and/or (b) learner satisfaction. After that, the possible effects of several variables (gender, age, prior awareness of HTML, experience using HTML, and total session time) on both the Basic Web Page Programming (BWPP) exam score and satisfaction level are explored. The chapter concludes with an analysis of the participant responses elicited during the post-session interviews. Performance and satisfaction data analyses are based on the participation of 129 undergraduate students who were randomly assigned to two conditions of WBT screen design: full-page and partial-page. An alpha level of .05 was used for all statistical tests.

Equivalence of the Two Treatment Groups

The 129 undergraduate students who participated in this study were randomly assigned to two conditions of WBT screen design: full-page and partial-page. The two treatment groups appeared to be very similar across all variables tested.


There were no significant differences between the two groups for any of the demographic variables of gender, age, prior awareness of HTML (i.e., what it is and what it is used for), or experience using HTML. The same can be said for total session time. More to the point of this entire study, there did not appear to be any significant differences between the treatment groups in terms of BWPP exam score and satisfaction. Each of these factors will be considered in turn.

Gender Equivalence

The percentage of participants of each gender did not significantly differ by treatment group, χ²(1, N = 129) = 0.65, p = .42. A small effect size of 0.14 was found for the difference in gender between treatment groups. Table 4 shows the result of a chi-square test of independence for gender by treatment group.

Table 4
Gender by Treatment Group

Gender    Total Group    Full Page     Partial Page    χ²     p
          N = 129        n = 65        n = 64
Male      44 (34.1%)     20 (30.8%)    24 (37.5%)
Female    85 (65.9%)     45 (69.2%)    40 (62.5%)      0.65   .42 ns

Note. ns = not statistically significant (p > .05)
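The Table 4 statistic can be reproduced directly from the observed counts, as in the sketch below. This is an illustration rather than the study's own analysis script; correction=False (no Yates correction) is assumed because it matches the reported value.

    # Chi-square test of independence on the Table 4 gender counts
    # (rows: male, female; columns: full page, partial page).
    import numpy as np
    from scipy import stats

    observed = np.array([[20, 24],
                         [45, 40]])

    chi2, p, dof, expected = stats.chi2_contingency(observed, correction=False)
    print(f"chi2({dof}, N = {observed.sum()}) = {chi2:.2f}, p = {p:.2f}")
    # -> chi2(1, N = 129) = 0.65, p = 0.42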


Age Equivalence

Treatment groups did not differ significantly by age, t(127) = 1.33, p = .19, with the difference representing a small (0.21) effect in the direction of the full-page group. Table 5 reports the means and standard deviations for age by treatment group.

Table 5
Age by Treatment Group

Treatment Group    N     M      SD      t      df     p
Full Page          65    22ᵃ    6.16
Partial Page       64    21ᵃ    2.96    1.33   127    .19 ns

Note. ᵃ In years, rounded to the nearest year. ns = not statistically significant (p > .05)

Prior HTML Awareness Equivalence

There was also no significant difference between treatment groups regarding prior HTML awareness, χ²(1, N = 129) = 3.46, p = .06. A small effect size of 0.33 was found for the difference in prior HTML awareness between treatment groups. Table 6 shows the result of a chi-square test of independence for prior HTML awareness grouped by treatment group.

Table 6
Prior HTML Awareness by Treatment Group

Prior HTML    Total Group    Full Page     Partial Page    χ²     p
Awareness     N = 129        n = 65        n = 64
No            75 (58.1%)     43 (66.2%)    32 (50%)
Yes           54 (41.9%)     22 (33.8%)    32 (50%)        3.46   .06 ns

Note. ns = not statistically significant (p > .05)

HTML Experience Equivalence

With regard to HTML experience, only 18 (14%) of the study participants reported having some level of experience using HTML. The decision was made to collapse the four categories of experience into a single group, called "some experience," and compare it to the group who had no experience. A chi-square test was conducted, resulting in no significant difference in HTML experience between treatment groups, χ²(1, N = 129) = 0.001, p = .97.


A very small effect (0.01) was calculated. Table 7 shows the result of a chi-square test of independence for HTML experience grouped by treatment group.

Table 7
HTML Experience by Treatment Group

HTML               Total Group    Full Page     Partial Page    χ²     p
Experience         N = 129        n = 65        n = 64
No experience      111 (86.0%)    56 (86.2%)    55 (85.9%)
Some experience    18 (14.0%)     9 (13.8%)     9 (14.1%)       .001   .97 ns

Note. ns = not statistically significant (p > .05)

Total Session Time Equivalence

There was also no difference between the two treatment groups with regard to the amount of time it took to complete the study session, t(127) = 0.56, p = .58. A small effect size (0.10) was calculated for the session time difference between treatment groups. Table 8 shows the result of an independent t-test of total session time by treatment group.

Table 8
Total Session Time by Treatment Group

Treatment Group    N     M        SD       t      df     p
Full Page          65    64.5ᵃ    19.77
Partial Page       64    62.5ᵃ    21.20    0.56   127    .58 ns

Note. ᵃ In minutes. ns = not statistically significant (p > .05)


Conclusion Regarding Equivalency of Treatment Groups

With no significant differences between the treatment groups on the demographic variables or on total session time, it appears that the procedure for randomly assigning participants to the treatment conditions resulted in groups that evidenced no statistically significant differences. Since equivalency in treatment groups tends to mitigate the effects of any threats to internal validity that might exist, any differences found regarding learner performance and/or satisfaction can be reasonably attributed to WBT screen design with a high degree of confidence.

The BWPP Exam Score and Satisfaction Level

A correlation analysis was conducted to gauge the relationship between BWPP exam score and satisfaction level for all participants combined. The correlation coefficient for BWPP exam score and satisfaction level was found to be significant (r = .22, p = .01), with exam scores sharing about 5% of their variability with satisfaction level (R² = .05). This suggests that, across both treatment groups, participants with higher exam scores tended to express higher levels of satisfaction. When this relationship was examined within the full-page and partial-page treatment groups separately, the correlation coefficients in both groups were similar, although non-significant (r = .21, p = .09 and r = .22, p = .09, respectively). One thing to keep in mind here is that participants did not see their exam scores until after they had completed the satisfaction survey. Therefore, whatever else may be concluded about the relationship between participant exam score and satisfaction level, it cannot be said that participants' satisfaction level was attributable to having known how well they did on the BWPP final exam.
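A hedged sketch of this correlation analysis follows. The arrays are synthetic stand-ins for the per-participant data (exam percentage and mean of the 10 satisfaction ratings), so the printed values will only approximate the reported r = .22.

    # Illustrative Pearson correlation between exam score and satisfaction.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    exam_scores = rng.normal(70.5, 17.5, 129)            # synthetic percentages
    satisfaction = (4.15 + 0.006 * (exam_scores - 70.5)  # weak positive link
                    + rng.normal(0, 0.5, 129))

    r, p = stats.pearsonr(exam_scores, satisfaction)
    print(f"r = {r:.2f}, p = {p:.2f}, R^2 = {r * r:.2f}")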


Learner Performance Effects

Data on participant performance were collected as percent scores on the BWPP tutorial's final exam. An independent t-test was employed to determine if a significant difference existed between the exam score means of the two treatment groups: full-page and partial-page. The t-test yielded a non-significant t value, t(127) = 0.834, p = .41; thus, the null hypothesis for this research question cannot be rejected. A small effect size (0.15) was calculated in favor of the partial-page group. Table 9 provides more detail regarding this test result. While the partial-page group performed, on average, slightly better on the BWPP exam than the full-page group, this difference was not significant.

Table 9
Independent t-Test Results of BWPP Exam Scores by Treatment Group

Treatment Group    N     M        SD       t      df     p
Full Page          65    69.25    17.01
Partial Page       64    71.81    17.93    0.83   127    .41 ns

Note. ns = not statistically significant (p > .05)
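As an illustration of this test, the sketch below simulates data from the Table 9 group statistics and runs an independent t-test with SciPy. It is a hedged reconstruction, not the study's actual analysis script, and the simulated scores will not reproduce the exact t and p values above.

    # Illustrative independent t-test for the performance question. Synthetic
    # scores are drawn from the Table 9 means and SDs; the real analysis used
    # the 129 participants' recorded exam percentages.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    full_page = rng.normal(69.25, 17.01, 65)     # full-page group
    partial_page = rng.normal(71.81, 17.93, 64)  # partial-page group

    t_stat, p_value = stats.ttest_ind(full_page, partial_page)
    df = len(full_page) + len(partial_page) - 2  # 127
    print(f"t({df}) = {t_stat:.2f}, p = {p_value:.2f}")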


Learner Satisfaction Effects

Participant satisfaction data were collected as five-point Likert scale ratings for each of 10 survey items pertaining to participants' level of satisfaction with the program design. A mean rating for the 10 survey items was computed, and an independent t-test was conducted to test the hypothesis that there would be a significant difference in satisfaction level between learners in the two treatment groups. However, this test also resulted in a non-significant t value, t(127) = 1.293, p = .20, with a small effect size (0.22) calculated in the direction of the partial-page group. Table 10 provides more detail regarding this test result.

Table 10
Independent t-Test Results of Satisfaction Level by Treatment Group

Treatment Group    N     M       SD      t      df     p
Full Page          65    4.09    0.51
Partial Page       64    4.20    0.47    1.29   127    .20 ns

Note. ns = not statistically significant (p > .05)
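The small effect sizes reported for these two tests (0.15 and 0.22) are consistent with Cohen's d computed from the group statistics in Tables 9 and 10. The sketch below makes that assumption explicit; the text does not name the effect-size index, so this is a reconstruction rather than the study's own calculation.

    # Effect-size sketch, assuming the reported values are Cohen's d based on
    # a pooled standard deviation (an assumption; the index is not named).
    import math

    def cohens_d(m1, s1, n1, m2, s2, n2):
        """Standardized mean difference with a pooled SD."""
        pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        return (m2 - m1) / pooled

    # Table 9 (exam score) and Table 10 (satisfaction), full vs. partial page.
    print(round(cohens_d(69.25, 17.01, 65, 71.81, 17.93, 64), 2))  # 0.15
    print(round(cohens_d(4.09, 0.51, 65, 4.20, 0.47, 64), 2))      # 0.22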


Secondary Relationships

The demographic and length-of-session data generated incidental to the study's primary data collection allowed for the testing of several other secondary relationships of possible interest. Multiple regression was used to examine the possible effects of several variables on both the BWPP exam score and satisfaction level. The variables used as predictors were treatment group, gender, age, prior awareness of HTML, experience using HTML, and total session time. Multicollinearity was not an issue, as none of the predictor variables was highly correlated; correlations ranged from .134 (gender with age) to .164 (treatment group with prior awareness of HTML). The possibility of interactions between each predictor variable and treatment group was explored for both the exam score and satisfaction. For the exam score, when the interactions of each predictor with treatment were added to the main model, the change in R² ranged from 0% to less than 2%. For satisfaction, the change in R² for each added interaction was less than 1%. None of these interactions was found to be statistically significant, and therefore only the main-effect models for the exam score and satisfaction are reported here.

The six predictor variables of treatment group, gender, age, prior awareness of HTML, HTML experience, and total session time explained 20.6% of the variance in the exam scores, F(6, 122) = 5.281, p = .000. Using the beta coefficient, which statistically controls for the other variables in the model, age was a significant predictor of scores (β = -0.166, p = .048), with scores decreasing with age. Prior awareness of HTML was also a significant predictor of score (β = 0.295, p = .001), indicating that those with some level of awareness were more likely to score higher on the exam. Experience in using HTML was another significant predictor (β = -0.191, p = .022), with lower scores associated with higher levels of reported experience. Total session time was yet another significant predictor (β = 0.255, p = .002), with higher scores accompanying more time taken in the study session. Treatment group and gender were the only two non-significant predictors in this model.

As for satisfaction, the same six predictor variables explained 13.9% of the variance in satisfaction levels, F(6, 122) = 3.290, p = .005. Here, again focusing on the beta coefficient, only two variables were found to be significant predictors: gender (β = 0.220, p = .011) and prior awareness of HTML (β = 0.191, p = .029). Females generally reported higher levels of satisfaction than did the males in this study. And as with exam scores, having some awareness of HTML was associated with greater satisfaction levels. All other variables tested were non-significant predictors.
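How such standardized (beta) coefficients can be obtained is sketched below. The data frame is entirely hypothetical (column names and simulated values are illustrative); z-scoring every variable before fitting ordinary least squares yields coefficients on the beta scale reported here.

    # Hedged multiple-regression sketch: standardizing all variables first
    # makes the OLS slopes the beta weights reported in Table 11.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 129
    data = pd.DataFrame({
        "treatment": rng.integers(0, 2, n),   # 0 = full page, 1 = partial page
        "gender": rng.integers(0, 2, n),
        "age": rng.normal(21.5, 4.8, n),
        "html_aware": rng.integers(0, 2, n),
        "html_exp": rng.integers(0, 2, n),
        "session_min": rng.normal(63.5, 20.5, n),
        "exam_score": rng.normal(70.5, 17.5, n),
    })

    z = (data - data.mean()) / data.std()     # z-score every variable
    X = sm.add_constant(z.drop(columns="exam_score"))
    model = sm.OLS(z["exam_score"], X).fit()
    print(model.params)   # standardized betas; model.pvalues gives p values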


Table 11 shows the results of the multiple regression analysis.

Table 11
Multiple Regression Results of Both Exam Score and Satisfaction Level (N = 129)

                        BWPP Exam Score (R² = 0.206)    Satisfaction Level (R² = 0.139)
Predictor Variable      β         t         p           β        t        p
Treatment group         0.020     0.237     .813 ns     0.089    1.031    .304 ns
Gender                  0.006     0.073     .942 ns     0.220    2.575    .011
Age                     -0.166    -1.995    .048        0.133    1.532    .128 ns
Prior HTML awareness    0.295     3.563     .001        0.191    2.216    .029
HTML experience         -0.191    -2.329    .022        0.025    0.287    .774 ns
Session time            0.255     3.106     .002        0.124    1.456    .148 ns

Note. ns = not statistically significant (p > .05).


Post-Session Interviews

Fifty-nine of the 129 study participants were randomly selected for post-session interviews. The random selection procedure for the post-session interviews is described in Chapter Three. Of the 59 participants interviewed, 24 (41%) were male, ranging in age from 19 to 32 years (M = 22.83, SD = 3.32), and 35 (59%) were female, ranging in age from 18 to 26 years (M = 20.66, SD = 1.91). The demographic makeup of interviewees split by treatment group is provided in Table 12. Thirty-two (54%) of this group reported having no prior awareness of HTML, and 53 (90%) said they had no experience in using HTML. Table 13 shows how the treatment groups were split in terms of HTML awareness and experience.

Table 12
Post-Session Interviewees' Gender and Age Split by Treatment Group

                                            Age
Treatment Group    Gender      N            Range in Years    M        SD
Full Page          Male        10 (37%)     20-30             22.80    2.82
                   Female      17 (63%)     18-23             20.47    1.42
                   Combined    27 (100%)    18-30             21.33    2.30
Partial Page       Male        14 (44%)     19-32             22.86    3.74
                   Female      18 (56%)     18-26             20.83    2.31
                   Combined    32 (100%)    18-32             21.72    3.13

Table 13
Post-Session Interviewees' HTML Awareness and Experience Split by Treatment Group

                         Prior HTML Awareness       HTML Experience
Treatment Group    N     No          Yes            None        Some
Full Page          27    15 (56%)    12 (44%)       23 (85%)    4 (15%)
Partial Page       32    17 (53%)    15 (47%)       30 (94%)    2 (6%)

With regard to the specific responses to the interview questions, response frequencies for each of the 12 interview questions were calculated and converted into percentages. A chi-square test was then conducted for each of the 12 items to determine if any significant difference existed between the responses of the two treatment groups.


The response frequencies and percentages, as well as the chi-square results for each interview question, are provided in Table 14.


Table 14
Post-Session Interview Responses for Total Group and by Treatment Group

                                Total Group    Full Page     Partial Page
Interview Question / Responses  N = 59         n = 27        n = 32          χ²

1. Overall, did you like the program interface of this instructional program?
   a) No                        2 (3.4%)       0             2 (6.3%)
   b) Yes                       57 (96.6%)     27 (100%)     30 (93.8%)      1.75 ns

2. Did the design of the program interface influence whether or not you felt satisfied with (or liked) this instructional experience?
   a) No                        7 (11.9%)      3 (11.1%)     4 (12.5%)
   b) Yes                       52 (88.1%)     24 (88.9%)    28 (87.5%)      0.03 ns

3. Do you think that how an instructional program's interface is constructed has an impact on how well people like the program?
   a) No                        0              0             0
   b) Yes                       57 (96.6%)     26 (96.3%)    31 (96.9%)
   c) It Depends                2 (3.4%)       1 (3.7%)      1 (3.1%)        0.02 ns

4. Do you think that how an instructional program's interface is constructed has an impact on how well people learn the material?
   a) No                        8 (13.6%)      2 (7.4%)      6 (18.8%)
   b) Yes                       48 (81.4%)     22 (81.5%)    26 (81.3%)
   c) It Depends                3 (5.1%)       3 (11.1%)     0               4.95 ns

5. Do you prefer to have an idea of how much text is on a Web page at the start, before you start reading it?
   a) No                        16 (27.1%)     9 (33.3%)     7 (21.9%)
   b) Yes                       39 (66.1%)     17 (63.0%)    22 (68.8%)
   c) It Depends                1 (1.7%)       1 (3.7%)      0
   d) No Preference             3 (5.1%)       0             3 (9.4%)        4.50 ns

6. How do you prefer to have instructional text presented to you on a Web page: in relatively small chunks or in longer passages?
   a) Small Chunks              58 (98.3%)     27 (100%)     31 (96.9%)
   b) Longer Passages           1 (1.7%)       0             1 (3.1%)        0.86 ns

7.ᵃ Do you find it easier to read, understand, and remember new material on a Web page if there is a limited amount of text on the page?
   a) No                        3 (5.3%)       0             3 (9.7%)
   b) Yes                       46 (80.7%)     21 (80.8%)    25 (80.6%)
   c) It Depends                8 (14.0%)      5 (19.2%)     3 (9.7%)        3.45 ns

8. Do you think the amount of scrolling involved in an online instructional program has any effect on your satisfaction level regarding the instructional experience?
   a) No                        31 (52.5%)     12 (44.4%)    19 (59.4%)
   b) Yes                       24 (40.7%)     14 (51.9%)    10 (31.3%)
   c) It Depends                4 (6.8%)       1 (3.7%)      3 (9.4%)        2.84 ns


Table 14 (Continued)

                                Total Group    Full Page     Partial Page
Interview Question / Responses  N = 59         n = 27        n = 32          χ²

9. Do you think the amount of scrolling involved in an online instructional program has any effect on how well you learn the material?
   a) No                        37 (62.7%)     12 (44.4%)    25 (78.1%)
   b) Yes                       19 (32.2%)     12 (44.4%)    7 (21.9%)
   c) It Depends                3 (5.1%)       3 (11.1%)     0               8.52 **

10. If you wanted to find some information in the program you had read previously, would you prefer to have to scroll back up a page to find it, or to click back through the previous pages where scrolling is not required to see the pages' content?
   a) Scroll Back               17 (28.8%)     6 (22.2%)     11 (34.4%)
   b) Click Back                37 (62.7%)     19 (70.4%)    18 (56.3%)
   c) It Depends                5 (8.5%)       2 (7.4%)      3 (9.4%)        1.28 ns

11. Do you think having to scroll down a page to view more content and/or to get to some features of an instructional program distracts you from focusing on the material?
   a) No                        30 (50.8%)     10 (37.0%)    20 (62.5%)
   b) Yes                       20 (33.9%)     12 (44.4%)    8 (25.0%)
   c) It Depends                9 (15.3%)      5 (18.5%)     4 (12.5%)       3.85 ns

12. Given the choice in an online instructional program, do you have a preference between having to scroll down each page to view more instructional information or having to click a button to move between pages where you can see all of the page's information at once?
   a) Scrolling                 9 (15.3%)      4 (14.8%)     5 (15.6%)
   b) Non-scrolling             45 (76.3%)     22 (81.5%)    23 (71.9%)
   c) No Preference             5 (8.5%)       1 (3.7%)      4 (12.5%)       1.52 ns

Note. ᵃ Question 7 was inadvertently skipped for two (3.4%) respondents, so N = 57 for this question. ns = not statistically significant (p > .05). ** statistically significant (p = .01).


Interview Question 1

Item 1 of the post-session interview asked, "Overall, did you like the program interface of this instructional program?" An overwhelming majority of the interviewees (96.6%) said they did like the program interface, with just 3.4% indicating they did not. Interestingly, both negative responses were from members of the partial-page group. A comparison of the treatment groups found no significant difference in responses for this item, χ²(1, N = 59) = 1.75, p = .19.

Reasons provided by respondents who liked the interface design of the BWPP tutorial, regardless of which treatment group they were in, revolved around its functionality (e.g., ease of navigation, overall interface design). The reasons provided by the two respondents who did not like the program were a boring color scheme and insufficient text emphasis (such as color or bold). One of the two respondents also indicated he did not like having the program controls located at the bottom of the screen, primarily because he was used to having controls at the top of the page. When asked if having navigation buttons at the top of the program interface might tempt one to move on before reading all information on the page, he said it would, especially for novice computer users who might not even be aware there was anything further down on the page. Even so, his preference was for having controls located at the top.

Interview Question 2

The second interview item asked, "Did the design of the program interface influence whether or not you felt satisfied with (or liked) this instructional experience?" Again, the majority of interviewees (88.1%) responded in the affirmative, with 11.9% responding in the negative.


No significant difference in responses between treatment groups was found, χ²(1, N = 59) = 0.03, p = .87. While most respondents in both treatment groups thought the program interface had an impact on their level of satisfaction with the learning experience they had just completed, the perceived strength of that impact ranged from slight (e.g., "Only to a very minor degree.") to considerable (e.g., "If it's complicated, it'll make me frustrated, and I'll just want to completely quit the program."). Unfortunately, the seven respondents who said the interface design had no impact on their satisfaction level were not asked why they thought that was so. Interestingly, however, all seven answered the next interview question (which asked basically the same thing but generalized the focus to other people) in the affirmative.

Interview Question 3

Item 3 asked, "Do you think that how an instructional program's interface is constructed has an impact on how well people like the program?" Here, there were no negative responses, with 96.6% indicating that a WBT's interface did have an impact on how well people liked the program itself. However, 3.4% of the respondents were equivocal, saying it may or may not depending on certain factors. No significant difference was found between the two treatment groups, χ²(1, N = 59) = 0.02, p = .90.

The main reasoning behind the majority opinion was that a WBT interface must be "user friendly," a term frequently employed by respondents. In this regard, references were made to a program's ease of navigation. One common thrust of opinion was that anything about a program's interface that led to a user's frustration would negatively impact his or her level of satisfaction with the program and the learning experience.


The more time and energy one has to expend in learning and maneuvering within the program interface, for instance, the more frustrating an experience it would likely be. From a more positive angle, several respondents indicated that, inasmuch as the interface design helps motivate or keep the person interested, it would positively impact one's level of satisfaction. The two equivocal respondents both indicated that an individual's personal preferences and level of familiarity with computers and the Web probably play a major role in how satisfactory they judge a WBT to be. One made a distinction between the aesthetic and functional aspects of an interface design, saying that the latter would have much more of an impact on satisfaction level than the former. Finally, as previously pointed out, it was an interesting finding that seven of the respondents who said that a program's interface influenced how satisfied people in general are with a learning experience nevertheless reported that the BWPP's interface had no impact on their personal satisfaction with the study learning experience (interview question 2).

Interview Question 4

Post-session interview question 4 asked, "Do you think that how an instructional program's interface is constructed has an impact on how well people learn the material?" Here, 81.4% answered in the affirmative and 13.6% in the negative, with 5.1% saying it may or may not depending on certain factors. A comparison of the treatment groups found no significant difference in responses for this item, χ²(2, N = 59) = 4.95, p = .08, although it may be of interest to note that all three equivocal respondents were members of the full-page treatment group.


The consensus among those who think that a program's interface design has an impact on how well people learn the program material was that the more time and energy one has to expend in learning and working with the program interface, the less focused he or she can be on the instructional material. However, there was much less consensus on how much of an impact it might have on learning, with opinions ranging from slight to heavy. Unlike opinions regarding a program interface's impact on satisfaction, fewer respondents thought it had any influence on how well one learns the program material. Seven respondents essentially divorced the interface from the instructional material, saying that the material was there regardless of how it was presented or accessed. One added that learning was most influenced by the quality of the instructional material. For those who were equivocal, individual preferences, interests, attributes, and characteristics determine whether or not a program's interface might impact how well they learn the material. For instance, someone who easily remembers what he or she has read, or who was very interested in the subject matter, might remain focused on the material regardless of the clunkiness of the interface design, while another might be more easily distracted by a problematic interface.

Interview Question 5

For question 5, which asked, "Do you prefer to have an idea of how much text is on a Web page at the start, before you start reading it?", about two thirds (66.1%) said they did, 27.1% said they did not, 5.1% said they had no preference, and 1.7% said it probably depended on certain factors. No significant difference was found between the two treatment groups regarding this item, χ²(3, N = 59) = 4.5, p = .21, although it should be noted that the three respondents who reported having no preference were all in the partial-page group.


Those respondents who preferred to have some prior idea of the amount of text on a Web page indicated they wanted to be able to set their expectations for how much reading they were in for and about how much time it would take to finish the material on that page. Some said that it would also provide a measure of their progress as they read. Those who preferred not to know how much text was on a page all indicated that knowing ahead of time that they had a lot of text to read on a Web page was intimidating or otherwise de-motivating, and that they might be inclined to either skim the page or skip it altogether. As one person put it, "if it was a lot of text, I probably wouldn't read it. If I knew it was really long, I'd probably skim it, but if I didn't know how long it was and I didn't know what was coming next I'd be more apt to just keep reading, 'cause I wouldn't want to miss anything." The one equivocal respondent said her preference depended on what the topic was and whether or not she was pressed for time.

Interview Question 6

Interview item 6 asked, "How do you prefer to have instructional text presented to you on a Web page: in relatively small chunks or in longer passages?" All but one respondent (98.3%) said they preferred to have text presented in smaller chunks. The sole respondent preferring text presented in longer passages was a member of the partial-page group. No significant difference in responses was found between treatment groups, χ²(1, N = 59) = 0.86, p = .35.


Several reasons were voiced for preferring instructional text presented in small chunks rather than longer passages: it makes it more likely that one will read all of the information rather than just skim it; it makes it easier to stay focused on, interested in, and/or pay attention to the material; it provides a better sense of progress (i.e., it is more positively reinforcing); it makes it easier to comprehend and absorb information; and it seems like less and/or faster reading (even though it may not be in actuality). The single respondent who preferred text presented in longer passages said he liked having more information readily available to him rather than having to navigate through menus and/or more pages to get to more information.

Interview Question 7

Question 7, which asked, "Do you find it easier to read, understand, and remember new material on a Web page if there is a limited amount of text on the page?", was inadvertently skipped for two of the respondents, so all results for this item reflect an N of 57. The majority of respondents (80.7%) answered in the affirmative, and 5.3% answered in the negative. Eight respondents (14%) were equivocal, indicating that it depended on certain factors. No significant difference in responses was found between treatment groups, χ²(2, N = 57) = 3.45, p = .18.

The reasons provided by those who found limited amounts of text on a Web page easier to read, understand, and remember mirrored those provided for preferring chunked instructional text. However, three respondents, all of whom previously stated their preference for chunked instructional text, indicated that the amount of text presented on a Web page had no impact on their ability to read, comprehend, or remember the page's material.


For the eight respondents who were equivocal on this question, it depended on how interested they were in the topic and/or what type of information was being presented (e.g., straight text, interactive examples, etc.).

Interview Question 8

Question 8 asked, "Do you think the amount of scrolling involved in an online instructional program has any effect on your satisfaction level regarding the instructional experience?" For this item, a slim majority (52.5%) said they did not think scrolling affected their satisfaction level, while 40.7% said it did have an impact and 6.8% said it might or might not depending on certain factors. Again, there was no significant difference in responses between treatment groups, χ²(2, N = 59) = 2.84, p = .24. Even so, it is interesting to note that three of the four equivocal respondents were members of the partial-page treatment group.

For those who thought scrolling did impact their satisfaction with a WBT learning experience, some said the process of having to orient their eyes to moving lines or blocks of text required more effort to stay focused on the actual material and/or interfered with the flow and continuity of information. Others indicated that having to scroll through a body of text makes it more likely they will skim rather than thoroughly read the material. It is noteworthy, however, that none of these respondents considered scrolling to have any more than a moderate impact on their level of satisfaction; in fact, most indicated the effect on satisfaction was small. For those who found scrolling to have no impact on their satisfaction level, virtually everyone said that they were accustomed to having to scroll through Web pages, with some adding that the advent of the wheel mouse made the act of scrolling much less of an issue.


It should also be noted that many of those for whom scrolling was a factor in their satisfaction level also recognized the ubiquity of scrolling on the Web. For those who gave an equivocal answer to this question, the amount of scrolling involved seemed to be key: if scrolling was limited, there was little or no impact on satisfaction level, but if the amount of material on a page required more extensive scrolling to get through, then they would be less satisfied with the learning experience. Other factors for some of these respondents were one's level of familiarity with the Web and computers, the type of information being presented (e.g., graphics, text), and/or whether text was presented in small chunks or longer passages. Scrolling would likely have more of an effect on more novice computer/Web users, and scrolling through pictures and/or chunked text was perceived as less aversive than scrolling through long passages of uninterrupted text.

Interview Question 9

Interview item 9 asked, "Do you think the amount of scrolling involved in an online instructional program has any effect on how well you learn the material?" The majority of respondents (62.7%) said scrolling had no impact on how well they learned the material in a WBT, while 32.2% said it did and 5.1% said it may or may not have an impact depending on certain factors. This question was the only post-session interview item where a significant difference in responses between treatment groups was found, χ²(2, N = 59) = 8.52, p = .01. A large effect size of 0.82 was computed, indicating that how a respondent answered this question was related to the treatment to which they were exposed. In this study, 78.1% of the partial-page group denied any scrolling effect on learning the material in a WBT, compared to 44.4% of the full-page group.


Considering the breakout of responses by treatment group in Table 14, the full-page group was essentially evenly split on the issue (44.4% for both negative and affirmative responses), although it should be noted that all three equivocal respondents were in the full-page group. Thus, participants in the partial-page group were much less likely to perceive scrolling as having any impact on how well they learned in a WBT program. The reasoning of respondents for this item was essentially the same as that provided for the previous question, where the object of scrolling's impact was one's satisfaction level. However, 17 respondents (28.8%) shifted their position on the effects of scrolling when its impact was focused on learning rather than satisfaction. Of the 31 respondents who said scrolling had no impact on satisfaction, two said that it did have an impact on learning, while two others became more equivocal. Of the 24 respondents who said scrolling did have an effect on satisfaction, a third (eight) said it had no impact on learning, while one other became more equivocal. In addition, all four respondents who were equivocal regarding the impact of scrolling on satisfaction level became more definitive when asked about scrolling's effect on learning, with two asserting that scrolling did have an effect and two saying it had no impact on learning. A slight majority of these 17 respondents (10, or 58.8%) were members of the full-page treatment group. The majority of the opinion shifts (10, or 58.8%) were in the direction of scrolling having no impact on learning, with six (60%) of these shifts occurring within the partial-page group.


Interview Question 10

Question 10 asked, "If you wanted to find some information in the program you had read previously, would you prefer to have to scroll back up a page to find it, or to click back through the previous pages where scrolling is not required to see the pages' content?" Slightly less than two thirds of respondents (62.7%) preferred to click back through previous pages, 28.8% preferred to scroll up on a page, and 8.5% said it probably depended on certain factors. No significant difference in responses was found between treatment groups, χ²(2, N = 59) = 1.28, p = .53.

Respondents who preferred to scroll back up to locate information on a Web page essentially considered scrolling up more efficient than clicking back through previous pages. This efficiency was characterized in the following ways: scrolling requires less effort than clicking (especially when using a wheel mouse); scrolling up was more convenient and faster than clicking back, since one does not have to leave the page he or she is already on; scrolling up the same page was functionally safer (e.g., the link for clicking back may be broken or incorrect); and, perhaps related to the previous point, lag time in the loading of previous pages was wasted time (which could greatly contribute to a less satisfying and less effective learning experience). For the more equivocal respondents, their preference depended on whether or not there was a delay in the loading of previous pages and/or how far back the information was in the program. If there was no delay in the reloading of pages, the preference would be to click back, whereas scrolling would be preferred if there was a delay in the reloading of previous pages. There also seemed to be a relationship between search-mode preference and how far back the desired information was; that is, clicking back would be preferred if the information was located only a few paragraphs away (translated as a couple of pages away), whereas scrolling up was more desirable if the information was many paragraphs away.


Among those with a preference for clicking back, many thought that the act of clicking required less effort than that of scrolling (even with a wheel mouse). However, the most frequent reason given for the click-back preference was the perception that it was easier to locate the information based on the physical and spatial cues provided by the pages they had already read. As one respondent described it, he inherently has a snapshot in his mind of the page where the information was located, and when he clicks back through previous pages, he looks for the page that matches the contours of this snapshot. Thus, his first level of orientation to the information is based on an image of the page containing the information rather than on, say, a search of the text of each previous page. Another reason given, seemingly related to the issue of orientation on a page, was the possible difficulty in finding one's place after locating the previously read information. With clicking back, one has a good sense of the number of pages that were clicked back through, making it a simple task of clicking forward that many pages. With scrolling up a page, however, one may have to put in more effort (e.g., skimming the text again) to find the original stopping point.

It should be noted here that this question was, in part, intended to get at the importance of spatial orientation in electronic text, which was discussed in Chapter Two. Thus, respondents who did not broach the subject on their own were usually asked a follow-up question as to which method (scroll back or click back) better facilitated their orientation to previously read information. While all those preferring to click back indicated that method as being superior (i.e., scrolling interferes with their picture of where the information is), all but a very few of those who preferred to scroll back reported either that it was easier for them to orient by scrolling or that they perceived no appreciable difference between the two methods.


Finally, after the conclusion of this study, it was realized that the possibility of participants using the Web browser's Find feature was not anticipated or addressed in the study protocols. This was an oversight that is commented on in more detail in Chapter Five as one of the recommendations for improving the study.

Interview Question 11

Item 11 asked, "Do you think having to scroll down a page to view more content and/or to get to some features of an instructional program distracts you from focusing on the material?" Half of the respondents (50.8%) said that scrolling was not a distraction, while 33.9% thought it was. Of note, this question produced the greatest number of equivocal responses (15.3%). A comparison of the treatment groups found no significant difference in responses for this item, χ²(2, N = 59) = 3.85, p = .15.

The great majority of respondents who said having to scroll down an instructional Web page was not a distraction offered simply that scrolling was the prevalent method for viewing the content of Web pages. In other words, they were quite used to scrolling and did so pretty much without thinking about it. Some added that the act of scrolling was greatly facilitated by the wheel mouse. A couple of these respondents said that they did not consider the act of scrolling any more distracting than clicking a button to move through separate Web pages. Those respondents who considered scrolling to be a distraction varied in their assessment of the magnitude of that distraction, but most said it was a minor distraction.


By far, the most common reason for viewing scrolling as a distraction was the temptation to skim through a page, which could easily result in missing some important information. A few of these respondents said that scrolling required a greater effort to keep one's place on the page because of the shifting text. For those respondents giving an equivocal answer to this question, the amount of scrolling appeared to be the main concern: the more scrolling required, the greater the likelihood of it becoming a distraction, as more focus might be given to just traversing the program than to the material. Finally, for those who perceived scrolling either as a definite or a possible distraction, the impact on satisfaction level was considered slightly greater than on learning the material.

Interview Question 12

The final post-session interview item, item 12, asked, "Given the choice in an online instructional program, do you have a preference between having to scroll down each page to view more instructional information or having to click a button to move between pages where you can see all of the page's information at once?" Over three quarters of the respondents (76.3%) said that, in the end, they preferred a non-scrolling WBT interface. Even among the partial-page group, the majority of respondents (71.9%) stated a preference for a non-scrolling WBT interface design. Only 15.3% preferred a scrolling format, while 8.5% had no preference. No significant difference in responses was found between treatment groups, χ²(2, N = 59) = 1.52, p = .47.

Of those respondents indicating a preference for a scrolling WBT screen design, the reasons given included the following: scrolling is faster than clicking (especially if each new page takes time to load); scrolling is more efficient time-wise; scrolling requires less effort than clicking (especially when using a wheel mouse); scrolling provides the user with more control over how much text is displayed (i.e., information can be scrolled through slowly, line by line, versus clicking to a whole page of text); scrolling pages are technologically safer (e.g., less possibility of broken navigation links/buttons); scrolling is less distracting than clicking back; and more information can be placed on a page at once.


Those respondents preferring a full-page, non-scrolling screen design cited the following reasons: it chunks the information up, making it less intimidating and easier to absorb and digest; it provides a more streamlined and aesthetically pleasing instructional experience; it requires less manipulation of the mouse (i.e., less effort); it makes it more likely one will read all the information presented rather than skim through it or even skip it entirely; it suggests the instructional program was well designed and of high quality (i.e., the perception that a full-page interface design requires more effort and thought to construct leads to the assumption that as much effort and thought went into every other aspect of the program); it is easier to navigate; it is easier to remain oriented within (e.g., when returning to one's place after looking up previously read information); and it provides a greater sense of forward progress, which translates to more motivation and satisfaction with the experience.


Chapter Five

Discussion

Introduction

This chapter first summarizes the purpose of the study, the research questions, and the results obtained for those questions. A more detailed discussion of the study results follows, covering not only the primary research questions but several secondary questions that were not originally delineated in the study proposal. This is followed by recommendations for the design of the WBT program interface, with the chapter concluding with suggestions for future research.

Purpose of the Study

The purpose of this study was to investigate whether or not the interface design of a Web-based instructional program has an impact on how well learners learn the program material and/or how satisfied learners are with the learning experience. More specifically, the study sought to determine if there was a significant difference between two particular WBT screen designs, referred to in this study as full-page and partial-page. Again, the full-page design allows the learner to view an entire WBT page at once, but only by limiting the amount of instructional material displayed on the page. The partial-page design provides more instructional content per page, but requires the learner to scroll down the page in order to view all of the page content and program features.


Review of the Research Questions

There were two primary questions fueling this study:

1. Is there a significant difference in performance between learners using a scrolling, partial-page WBT design and those using a non-scrolling, full-page WBT design?

2. Is there a significant difference in satisfaction between learners using a partial-page WBT design and those using a full-page WBT design?

It was hypothesized at the outset of the study that the full-page design would yield superior performance and satisfaction results.

Results for the Research Questions

An analysis of the performance and satisfaction data collected for this study yielded the following results for the two research questions:

1. No significant difference was found in performance between the full-page and partial-page treatment groups. Thus, the null hypothesis for this question could not be rejected.

2. No significant difference was found in satisfaction level between the full-page and partial-page treatment groups. The null hypothesis for this question could also not be rejected.

Discussion

Performance data were obtained through participants' completion of the Basic Web Page Programming (BWPP) tutorial's final exam. Satisfaction data were generated through an online satisfaction survey that participants completed immediately following the exam, but before they received their exam scores.


Additional demographic data were collected during participants' completion of the Web Skills Assessment (WSA) program. These included participant gender, age, prior awareness of HTML, and experience using HTML. The total length of time it took each participant to complete his or her study session was also collected. Qualitative data regarding participants' perceptions and preferences pertaining to WBT interface design in general, and toward scrolling in particular, were obtained through post-session interviews conducted with 59 randomly selected study participants. The full-page and partial-page treatment groups were compared on BWPP exam score and satisfaction level, as well as on gender, age, prior awareness of HTML, experience using HTML, and total session time. Analysis results indicated that there was no significant difference between the groups for any of these variables. It was concluded, therefore, that the treatment groups were equivalent for all variables measured.

Please note that for the discussion presented in this chapter, scrolling (or rather its presence or absence) will be referred to as the single difference between the full-page and partial-page WBT screen designs. As a feature of WBT interface design, however, scrolling and the amount of instructional content contained on a WBT page should be considered as two sides of the same coin. In other words, it is a given that when a WBT page contains more instructional content than can be displayed at one time, scrolling will necessarily be present. In the interest of brevity, references to scrolling should be read as the absence or presence of scrolling along with its implications for the amount of instructional content contained on a WBT page.


Learner Performance Outcomes

Analysis of the BWPP exam scores indicated that there was no significant difference in performance between the two treatment groups. It would appear, then, that scrolling (the single difference between the full-page and partial-page screen designs) had no significant effect, by itself, on how well participants performed on the tutorial exam. One expectation at the outset of this study was that the full-page group would outperform the partial-page group. Much (though not all) of the literature reviewed in Chapter Two seemed to suggest learning might be better facilitated by a non-scrolling WBT screen design: screen density studies with electronic text; the perceived benefits of informationally lean instructional text chunked into smaller, more digestible portions; the possible disruption of information processing and retention resulting from the often distracting and disorienting activity of scrolling; and the frequently negative effects of large blocks of text on learner attention, endurance, and motivation. Together, these seemed to make a reasonable case that learners would likely perform better using a non-scrolling WBT interface. As it turned out, however, the average exam score of the partial-page group was marginally higher than that of the full-page group. While there was no statistically significant difference in BWPP exam scores between the two groups, it was still an interesting finding in light of initial expectations to the contrary.


Learner Satisfaction Outcomes

There was also no significant difference found in satisfaction level between the full-page and partial-page groups. Therefore, it can be concluded that scrolling alone was not a significant factor in how satisfied participants were with the learning experience. On average, participants in both treatment groups indicated about the same level of satisfaction regarding their learning experience, with the partial-page participants tending to rate their level of satisfaction slightly higher than did the full-page participants. This was a bit more of a surprise than the performance outcome, in that the bulk of the literature pertaining to learner satisfaction indicated that the level of satisfaction with an online instructional experience might be more susceptible than performance to the effects of scrolling, due to factors such as the disruption of spatial orientation, inefficiency of navigation, copious amounts of instructional text, and the diversion of attention away from the instructional material.

Reflections on the Performance and Satisfaction Outcomes

As to why scrolling appeared to have had no appreciable effect on learner performance or satisfaction, the post-session interview data may cast some helpful light. The reader should, however, remember that only 59 (about 46%) of the 129 study participants were interviewed. All but two of the interview respondents reported that they liked the interface design of the BWPP tutorial, regardless of the version to which they were assigned. User-friendly aspects of the program's interface, such as ease of navigation and accessibility to program features, were provided as reasons. The two respondents, both members of the partial-page group, who did not like the screen design pointed to certain of its aesthetic qualities, such as the color scheme, but neither indicated scrolling as a factor in their dislike of the program.


While 85.7% of all interview respondents thought that a WBT's program interface either did or could have some impact on learning, nearly two thirds did not think scrolling, as a distinct interface feature, did. According to respondents' comments, the more time and effort a learner has to spend working with the program interface, the less the learner tends to focus on the instructional material, which could hinder learning and performance. However, scrolling was generally perceived as a fairly innocuous aspect of the program interface, primarily because of its ubiquitousness on the Web. In addition, the advent of the wheel mouse seems to have made the process of scrolling much less aversive than it once was (Nielsen, 1997, 2003, 2005b; Spool, Snyder, DeAngelo, & Schroeder, 1999). Half of the respondents did not consider scrolling to be a distraction from focusing on the instructional content, and even the majority of those who thought it was a distraction indicated it was only a minor one. Apparently, other factors pertaining to the screen design, such as, perhaps, poorly located navigation buttons, are more apt to be an influential distraction. While spatial disorientation during scrolling was a distracting factor for a few respondents, for most it apparently was not. Perhaps this may be attributable to the pervasiveness of scrolling on the Web in that, through repetition, one either becomes accustomed to the phenomenon and/or develops a personal strategy to compensate for it. Individual learner attributes, interests, preferences, and characteristics almost certainly play a role in whether or not a program's interface affects one's learning experience.


For instance, several respondents, especially those who were equivocal on one or more interview questions pertaining to screen design effects on learners, claimed that such effects could be mitigated or exacerbated by how interested they were in the subject matter. Others pointed out that one's level of familiarity with computers and the Web might well factor into whether or not one's performance was impacted by the program interface. This particular possibility, of course, was anticipated in this study, as evidenced by the participant suitability criteria instituted for the study (see Chapter Three for more information). Another reason why scrolling may not have an impact on learning indirectly harkens back to Clark's (1983, 1991, 1994) argument that only instructional methodology, not learning media or its attributes, has any effect on learning. Several respondents said the interface had nothing at all to do with learning the material, asserting simply that the instructional material was there to be had regardless of how it was presented or accessed.

The question of scrolling's impact on learning was the only interview item for which a significant difference was found in how the treatment groups responded. As to why more than three quarters of the participants in the partial-page group saw scrolling as having no impact on learning, versus less than half of the full-page group participants, one can only speculate. Perhaps partial-page participants were afforded a greater clarity regarding their experience of scrolling, by virtue of having just completed an online learning experience that involved a scrolling interface. Those in the full-page group would have had to think back on past experiences with scrolling interfaces. Separated by the fog of time from those past experiences, their immediate experience with the non-scrolling screen design might have prejudiced many of them against scrolling.

PAGE 135

While no significant difference in satisfaction level was found between the two treatment groups in this study, data from the post-session interviews suggest that a WBT's screen design can be an issue for some when it comes to one's satisfaction level with the learning experience. All of the interview respondents said that a program's interface either definitely does or could have an impact on one's satisfaction level. That impact could range from slight to considerable, depending on factors related to the interface itself (e.g., how complicated the interface is perceived to be, or how functional and/or aesthetically pleasing one finds it) and/or to learners' personal attributes, characteristics, and preferences (e.g., one's level of computer and Web skills). The interview data also suggest that the user-friendliness of a program interface might have the greatest impact on level of satisfaction. This, of course, is right in line with Nielsen's (1993, 2003) and Shneiderman's (1998) usability attributes.

Interestingly, only about 90% of the respondents reported that the BWPP tutorial's interface contributed to their satisfaction level with the overall BWPP learning experience. This would seem to indicate that some respondents were somewhat self-contradictory, on the one hand saying that a program's interface impacts learner satisfaction, but on the other that their satisfaction level with the BWPP experience had nothing to do with the BWPP screen design. The reason for this discrepancy is unclear, since none of these respondents was asked to clarify the apparent contradiction. Perhaps the order of questioning contributed to this seeming contrariety: instead of proceeding from the personal and specific to the non-personal and general, it may have been more cognitively coherent to advance in the opposite order.

Even though the majority of respondents were of the opinion that a WBT program's interface had an impact on satisfaction level, only about 41% of them considered scrolling to be a significant factor in satisfaction. Disruption of spatial orientation, the temptation to skim the material (or even skip large parts of it altogether), and the amount of scrolling involved were concerns for some, but overall, respondents said they had simply acclimated to the reality of scrolling on the Web.

Switching the focus from participant perceptions about scrolling's effect on performance and satisfaction to their more general preferences regarding WBT screen design, over three-quarters of the respondents said that, given the choice, they would prefer a non-scrolling, full-page interface design for Web-based instructional programs. This position was supported by the overwhelming preference for WBT pages consisting of limited amounts of lean, chunked-up instructional text; very few preferred long pages of big blocks of uninterrupted text. In addition, two-thirds of the respondents said they preferred to have some idea of how much text is on a WBT page at the outset, primarily to gauge how much effort and time they would be expending on it. And when it comes to having to locate previously read information for review, nearly two-thirds stated a preference for clicking back through a series of full-screen, non-scrolling pages rather than scrolling up on a long page. All of these preferences would appear to favor the full-page interface design over the partial-page design, if not in terms of performance and learning, then certainly in satisfaction levels. Nevertheless, it must be remembered that neither participants' performance on the BWPP exam nor their reported satisfaction levels with the learning experience was distinguished in any statistically significant way.
So if the general preference of participants was for the non-scrolling, full-page design, but scrolling was not indicated as a significant factor in their performance or, especially, their satisfaction levels, then what else might account for this apparent discrepancy? One final factor in this study's finding of no significant difference in learner performance or satisfaction might be the fact that the instructional content of the partial-page version was an exact duplicate of the full-page version. That is to say, the full-page version was developed first, and a single page in the partial-page version consisted simply of several pages of content from the full-page version. The full-page version not only required more time and effort to program, but it also required a great deal of effort to ensure that the tutorial's instructional content followed good instructional design practices while fitting well into the limited dimensions of the content area. The result was lean, chunked-up instructional content, a goal often discussed in the literature (Alessi & Trollip, 2001; Fleming & Levie, 1993; Galitz, 1993; Grabinger & Osman-Jouchoux, 1996; Horton, 2000; Kruse & Keil, 2000; Merrill, 1994; Nielsen, 2000; Piskurich, 2000; Shneiderman, 1998; Smith & Mosier, 1986; Tullis, 1997). Since the instructional content of the partial-page version was an exact duplicate of the full-page version, it shared some of the benefits of the latter's instructional design. Therefore, the partial-page version did not suffer from some of the pitfalls of scrolling pages discussed in the literature, such as long, uninterrupted blocks of text (Horton, 2000). While its pages contained more text and other instructional content than did those of the full-page version, that instructional material was lean and chunked up. Thus, participants in the partial-page group may not have experienced the level of intimidation, spatial disorientation, or sense of slow progress that they might otherwise have. So, in effect, the difference scrolling made in this study may have been mitigated to some degree by the way the instructional text was constructed.

Secondary Relationships

Interactions between the two dependent variables in this study (learner performance and satisfaction) and other possible predictor variables (age, gender, prior awareness of HTML, experience using HTML, and total study session time) were also investigated. The results suggest that learner performance was impacted by more of these predictor variables. BWPP exam scores tended to increase with both prior awareness of what HTML was and what it was used for, as well as with the length of study session time. It was also found that exam scores tended to decrease with age and with experience using HTML.

It makes sense that having some idea of HTML could provide a performance edge if that prior awareness included a deeper familiarity with HTML beyond mere term recognition. Otherwise, it is difficult to explain why simply having heard of the term and/or knowing what HTML is used for should result in any performance increase. If one's knowledge of HTML is more than cursory, then a better argument for this relationship can be made. But if this were the case, it would imply that performance would, of course, increase with more knowledge of and/or experience with using HTML. This, surprisingly, was not the case in this study. That performance actually tended to decrease with HTML experience would appear to be counterintuitive.
If it was difficult to understand why prior awareness of HTML would lead to better performance, it is doubly so to imagine why more experience using HTML would result in poorer performance. In this latter case, however, it is possible that some prior familiarity with HTML served as a barrier to absorbing the information presented in the BWPP tutorial. Having some familiarity with HTML, perhaps some participants proceeded through the tutorial more quickly than they would have otherwise, possibly only skimming or even skipping over significant portions of the instructional material. Doing so might have come back to haunt them during the BWPP exam, where some exam questions might have pertained to those inadequately read or skipped content areas. Another way prior HTML experience might have served as an obstacle for a participant is in the form of cognitive dissonance, wherein either new information about some instructional topics might have been different from what the participant thought he or she already knew, or the information was presented in a manner unfamiliar to the participant. This situation may be related to research on learners' mental models and their preconceptions, which posits that learners' strongly held preconceptions may interfere with their performance on new tasks (Donovan, Bransford, & Pellegrino, 1999; Bransford, Brown, & Cocking, 1999). In either situation, it could be that the new information did not register and supplant the participant's prior understanding, such that, when faced with an exam question on the topic, the participant automatically fell back on that prior understanding.

It might also be that suspicion should fall on the questions posed to participants regarding their prior HTML awareness and HTML experience. It is possible that one or both of the questions, which were presented to participants during the Web Skills Assessment program, could have been better constructed to give participants more clarity about what they were being asked. However, it is all but impossible to gauge whether there were errors in how participants responded to these two questions. Even if it were known that self-report errors occurred, there is no way to determine the nature of those errors; for example, whether a participant understood the question properly, intentionally gave a false response, or simply clicked the wrong button. Therefore, it seems all that can be done is to report these findings, speculate as to their accuracy and significance, and suggest that, perhaps, a better way of asking the questions could be found.

The other findings for performance here are a bit more understandable: exam scores increased with the amount of time spent in the study session, while scores tended to decrease with age. Session time is not always positively related to better performance, since it is possible that the longer it takes a learner to complete a particular learning experience, the more difficulty he or she may be having with the material. This may be especially true with tests. However, in this study, it makes sense that, on average, the more time participants took with the material, the better they performed. HTML, even very basic HTML, can be difficult to learn, even when the learning process is stretched out over days or weeks. In this study, the learning process was condensed into a very short time frame (on average, one hour). Coupled with the fact that the participants could not actually practice creating a basic Web page from scratch during the BWPP tutorial, it seems reasonable that participants who took more time with the content sections stood a better chance of doing well on the final exam.
Unfortunately, no data were collected on the time it took for participants to complete just the BWPP exam; thus, it can only be speculated that the bulk of the time participants spent in their respective study sessions was devoted to the content sections. As for exam scores decreasing with age, it may be that older participants (14 were 25 years or older) had, on average, less overall computer and Web experience, which in some way translated into lower exam scores. However, with no other study data able to credibly contribute to an explanation for this phenomenon, this is only speculation.

With regard to satisfaction level, gender and prior awareness of HTML were the only significant predictors. Females tended to report higher levels of satisfaction than males in the study, as did those with some prior awareness of HTML. Even though females tended to rate their level of satisfaction with the program interface higher than males did, both genders tended to report high satisfaction levels, with males averaging 4.0 on a 5-point Likert scale and females averaging 4.23. Unfortunately, no other data from this study illuminated either of these findings.

Recommendations Deriving From This Study

Based on the experience gained from conducting this study, as well as from its outcomes, a number of recommendations can be made regarding: (1) the design of WBT programs; (2) how this study can be improved; and (3) further research. The following sections discuss each set of suggestions in turn.
Recommendations for the Design of WBT Programs

1. Make instructional text lean and chunked. No matter the screen design employed, much of the available evidence in the literature, as well as the data gathered in this study, suggests that there are both learning and satisfaction benefits to lean and chunked instructional information. Lean instructional text maximizes the instructional message while minimizing distracting, superfluous information. Chunking text into "mind sized chunks" (Merrill, 1994, p. 153) facilitates the absorption, comprehension, and retention of information. In contrast to long blocks of uninterrupted text, chunked text is much less intimidating and may reduce the temptation to skim or skip parts of the instructional information. Finally, chunked text seems to provide a greater sense of forward progress through the material, leading to a greater sense of accomplishment and motivation.

2. Limit the amount of scrolling on pages. If, for whatever reason, a partial-page interface design is selected for a WBT, it would probably be wise to limit the amount of scrolling required on its pages. This would result in more pages, but participant comments in this study indicate that too much scrolling can be tiring and lead to frustration, which, in turn, can impact learner motivation. No more than a few screenfuls of information should be placed on a page (Koyani, Bailey, & Nall, 2003; Nielsen, 1997). This study's post-session interview data also support this recommendation, as most respondents indicated that, while they did not mind having to scroll some, they would find copious amounts of scrolling aversive.

3. Place visible cues on scrolling pages to compensate for spatial disorientation. Based on some of the anecdotal data from the post-session interviews, if scrolling pages are employed in a WBT, it might be a good idea to devise a system of visual cues that can be interspersed throughout the instructional text to help users stay oriented to where they are in the program as they scroll. The trick, of course, would be to make these cues apparent enough to register with the learner, but innocuous enough that they do not create a distraction and interrupt the learner's focus. Visual cues could be text-based or image-based, with the caveat that graphics used as visual cues are of no use if the user has his or her Web browser set not to display graphics. Each cue may also need to be unique; otherwise, in a long scrolling page, with the visual cues rolling up or down the screen, they would probably not be nearly as identifiable and, thus, effective.

4. Let learners choose the interface design. If scrolling truly does not produce a significant difference in performance or satisfaction, as the results of this study appear to indicate, then it might be appropriate to allow learners to select the type of WBT screen design they prefer. However, the resources needed for developing, producing, and maintaining separate versions of a WBT program might make this untenable. An alternative approach would be to enter all instructional content into a database, then develop a Web delivery system flexible enough to construct the selected interface design on the fly and insert the instructional content into it. This option could also be costly, especially on the front end of the development process. However, if the system were flexible and robust enough, it might prove cost-effective over the long term, as additional instructional courses could be developed (within the guidelines established for the system) without having to duplicate the delivery system.
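To illustrate the idea behind recommendation 4, the minimal sketch below assembles either rendition from a single content store. It is only an illustration of the concept; the ContentChunk type, the buildPages function, and the layout flag are hypothetical names, not part of any delivery system actually described in this study.

```typescript
// Minimal sketch of recommendation 4: one content store, two renditions.
// All names here (ContentChunk, Layout, buildPages) are illustrative.
interface ContentChunk {
  sectionId: number; // which tutorial section the chunk belongs to
  html: string;      // one "mind sized" unit of instructional text
}

type Layout = "full-page" | "partial-page";

function buildPages(chunks: ContentChunk[], layout: Layout): string[] {
  if (layout === "full-page") {
    // Full-page: one chunk per non-scrolling page.
    return chunks.map((chunk) => chunk.html);
  }
  // Partial-page: concatenate each section's chunks into one scrolling page.
  const bySection = new Map<number, string[]>();
  for (const chunk of chunks) {
    const parts = bySection.get(chunk.sectionId) ?? [];
    parts.push(chunk.html);
    bySection.set(chunk.sectionId, parts);
  }
  return [...bySection.values()].map((parts) => parts.join("\n"));
}
```

Because both renditions draw on the same chunks, the instructional content stays identical across designs, which was also the control imposed in this study.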


5. Consider employing a non-scrolling, full-page interface design. While this study found no statistically significant performance or satisfaction difference between full-page and partial-page designs, anecdotal data from the post-session interviews indicated a fairly strong preference for the full-page interface design. Among the chief reasons participants gave for this preference were: information was provided in smaller, easily consumed chunks; information presented in smaller portions is less intimidating; it provided a greater sense of forward progress and accomplishment, which was more motivating; and it increased the likelihood that the learner would not skip any of the information. Of course, one downside to the full-page interface design is that it can cost more to develop, in terms of effort, time, and expense, than a partial-page interface, due to the greater number of pages that must be created, programmed, and tested, as well as the process of parsing the instructional content to fit within the space limitations of a full-page design.

Recommendations for Improving This Study

1. Eliminate the hybrid characteristics of the partial-page treatment. Even though the partial-page design contained large amounts of instructional content that required participants to scroll, each content section still consisted of several individual, contiguous pages hyperlinked to one another in the same fashion as the full-page design. Coalescing all of a section's content into a single scrolling page would perhaps have provided partial-page participants a more intense scrolling experience and possibly brought some of the perceived advantages and/or disadvantages of scrolling into starker relief.

2. Collect data on the amount of time spent on each WBT page. This would be a relatively simple programming addition that would allow a comparison of the average time participants spent per page between the full-page and partial-page groups. Time comparisons could also be made for specific pages or sets of pages (for the full-page version, times spent on the pages that make up one scrolling page could be combined). Such time-per-page comparisons could provide valuable insight into how well each version facilitates both the overall learning experience and specific tasks and/or functions. It might also indicate differences in how people work with and in each type of screen design.
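A minimal sketch of such a timing addition appears below. It assumes hooks into the tutorial's page navigation and a hypothetical logging endpoint (/log/page-time); neither reflects the BWPP's actual code.

```typescript
// Illustrative time-on-page logging (hypothetical endpoint and payload).
let pageEnteredAt = 0;
let currentPageId = "";

function onPageEnter(pageId: string): void {
  currentPageId = pageId;
  pageEnteredAt = Date.now(); // stamp the moment the page is shown
}

function onPageLeave(): void {
  const secondsOnPage = (Date.now() - pageEnteredAt) / 1000;
  // sendBeacon queues the record without delaying the next page load.
  navigator.sendBeacon(
    "/log/page-time", // assumed logging endpoint
    JSON.stringify({ pageId: currentPageId, secondsOnPage })
  );
}
```

Summing the per-page records for the full-page pages that correspond to one scrolling page would yield the combined times suggested above.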


3. Intersperse inquiries into participants' satisfaction level throughout the program. Essentially, this would mean taking a series of satisfaction readings during the course of the study session by asking participants to rate their level of satisfaction with the learning experience at that particular moment. These intermittent inquiries would need to be phrased in exactly the same way each time. The resulting string of satisfaction data points would reveal changes in participants' level of satisfaction at different points in the program. These data could be monitored remotely in real time by programming the study's Web delivery framework to deliver them to the researcher's computer screen as they are collected. If a participant's satisfaction data fluctuated in a curious way, the researcher could ask that participant about the changes at the conclusion of his or her study session. The downside is that interrupting the learning experience could conceivably have a negative effect on a participant's performance and/or satisfaction level. Therefore, if implemented, such interruptions would best be located just prior to the start of the BWPP tutorial and at the end of each section. The existing Learner Satisfaction Survey would, of course, be the final check.

4. Program keyboard hotkeys for some or all program features and functions. For instance, to move to the next page in the tutorial, a participant could either use the mouse to click the Next button or press, say, the right arrow key. The BWPP program could be programmed to capture these keyboard data for each tutorial page. Participants would, of course, have to be alerted to these keyboard equivalents at the start of the tutorial. The benefit would be to see how often, and under what circumstances, keys were used instead of the mouse to operate the program. More particular to the focus on scrolling, it would be interesting to learn whether some participants preferred to use hotkeys over the mouse for scrolling up and down pages in the partial-page version.
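A sketch of such hotkey capture follows. The navigation and logging routines are stand-ins for whatever the tutorial actually provides; only the keydown listener itself is standard browser behavior.

```typescript
// Hypothetical navigation and logging helpers assumed to exist elsewhere.
declare function goToNextPage(): void;
declare function goToPreviousPage(): void;
declare function logEvent(name: string): void;

document.addEventListener("keydown", (event: KeyboardEvent) => {
  if (event.key === "ArrowRight") {
    logEvent("hotkey-next"); // keyboard used instead of the Next button
    goToNextPage();
  } else if (event.key === "ArrowLeft") {
    logEvent("hotkey-previous");
    goToPreviousPage();
  }
});
```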


5. Have participants indicate their level of interest in the tutorial topic at the beginning and end of the tutorial. This would be similar to the intermittent satisfaction inquiries discussed in item 3 above, except that these interest inquiries would not interrupt the flow of the tutorial. The purpose of these inquiries would be to see whether participants' interest in the tutorial topic might be a factor in their performance and/or satisfaction level. At the end of their study sessions, participants could be shown their reported interest-level data from before and after the tutorial, then asked whether their level of interest in the topic had any effect on how much effort they put into the learning experience and whether it affected how satisfied they were with that experience. Another question could ask whether their interest level had more or less impact on their performance and satisfaction level than other factors, such as scrolling.

6. Time a task for finding previously read information. This idea derives from post-session interview question 10, where participants were asked whether, when trying to locate information they had previously read, they would prefer to click back through a series of full-page, non-scrolling pages or scroll up on a long page in search of the information. In this study, participants gave contrasting reasoning for their preferences, with some saying that scrolling was more efficient (i.e., faster) than clicking and others asserting the exact opposite. The idea would be to insert one or more tasks into the tutorial requiring participants to go back to some previous point in the tutorial, then measure the amount of time it took them to do so. Exactly how this would work is unclear, but the start and stop times for this task would have to be triggered by some participant-induced event, such as a clicked link or button. A comparison of average task completion times for the two treatment groups could reveal whether one method was, indeed, more efficient than the other to any appreciable degree. Having a quantitative measure for this question would also allow good commentary on the differences in participant perception on this matter.
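One way such participant-event-triggered timing could look is sketched below. The startTask and stopTask calls would be wired to the link that issues the task and the element that marks its completion; both names, and the per-task bookkeeping, are assumptions made for illustration.

```typescript
// Sketch: start/stop timing driven by participant-induced events.
const taskStartTimes = new Map<string, number>();

function startTask(taskId: string): void {
  // Called when the participant clicks the link that issues the task.
  taskStartTimes.set(taskId, performance.now());
}

function stopTask(taskId: string): number | undefined {
  // Called when the participant clicks the target that completes the task.
  const started = taskStartTimes.get(taskId);
  if (started === undefined) return undefined; // task never started
  taskStartTimes.delete(taskId);
  return (performance.now() - started) / 1000; // seconds to completion
}
```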


7. Replace artificial program errors with reasonable, learning-supportive participant tasks. This suggestion pertains to the four artificial program errors each participant experienced during the BWPP tutorial. The matter is covered in Appendix J, but briefly, four errors were intentionally programmed into the BWPP tutorial, each requiring participants to use some feature of the program to correct it. The purpose was to provide participants with a richer experience of the program interface by forcing them to use several features of the program they might not otherwise use. Once an error was corrected, the participant could continue on with the tutorial. There was a concern that these program errors could conceivably have a negative impact on participants' performance and/or satisfaction level. While there was no evidence that this was the case, it would seem a more constructive tack to design positive, topically relevant tasks to achieve the same purpose as the program errors.

8. Insert a "Skip This Page" link or button at the top of each tutorial page. This suggestion derives from participant comments regarding scrolling pages that contain a great deal of information and the temptation some have to skim or just skip these pages altogether. While it is unclear how data on skimming could be collected, putting a "Skip This Page" button or link at the top of each tutorial page might be one way of garnering some data about skipping pages. The Skip button would be more relevant to, and telling for, the partial-page participants, since it would give them the option to skip long, scrolling pages without having to scroll down to the bottom of the page in order to click the Next button. The Skip and Next buttons would both be programmed to record whether they were clicked. If the former was clicked on a page but the latter was not, it could be warranted to assume that the participant did not view the entire page. The only other explanation would be that, after scrolling down to view the entire page, the participant scrolled back up and clicked the Skip button, which would be much less likely. Of course, in the case of the full-page version, one could not make that assumption, since both buttons would be visible at the same time. The unknown for the full-page group participants would then be whether they used the Skip button in lieu of the Next button to move to the next page. A comparison of these skip data between the two treatment groups might shed some light on this temptation-to-skip theory. If the partial-page group used the Skip button significantly more than the full-page group, it could be suggested that scrolling does result in more skipped, or at least partially read, pages.
9. Collect all key press and mouse click event information. This would be a matter of programming the BWPP tutorial to collect the sequence of keys participants pressed, as well as the tutorial links and buttons they clicked, during the tutorial. An analysis of this sequence of participant activity might reveal some interesting information regarding how members of the two treatment groups operated the program.
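The sketch below combines the recording proposed in items 8 and 9: per-page Skip/Next click records, plus the inference about likely skipped pages. The element ids, record shape, and in-memory store are all illustrative assumptions.

```typescript
// Illustrative click recording for Skip/Next buttons (assumed element ids).
type ClickRecord = { pageId: string; button: "skip" | "next"; at: number };
const clicks: ClickRecord[] = [];

function wirePageButtons(pageId: string): void {
  document.getElementById("skip-button")?.addEventListener("click", () => {
    clicks.push({ pageId, button: "skip", at: Date.now() });
  });
  document.getElementById("next-button")?.addEventListener("click", () => {
    clicks.push({ pageId, button: "next", at: Date.now() });
  });
}

// Pages with a Skip click but no Next click were likely not viewed in full,
// which is the inference suggested above for the partial-page group.
function pagesLikelySkipped(): string[] {
  const viewed = new Set(
    clicks.filter((c) => c.button === "next").map((c) => c.pageId)
  );
  return clicks
    .filter((c) => c.button === "skip" && !viewed.has(c.pageId))
    .map((c) => c.pageId);
}
```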


10. Refine the post-session interview questions. After a review of the post-session interview audio files, it became apparent that several of the questions could have been better constructed to more clearly and directly get at the issue of scrolling and its impact on learning and learner satisfaction. Some respondents seemed to have difficulty understanding what was being asked at times. It might be advantageous to prepare a list of defined terms for the interviewees, and even some visual aids for illustrating terms and concepts referred to in the interview.

11. Be prepared to ask respondents about apparent discrepancies in their responses. During the course of the interviews, participants would sometimes provide a response to a question that appeared contradictory to a response they gave earlier to a different question. Sometimes this was caught and addressed in the interview, but a review of the interview audio files revealed other instances that were not. Perhaps the solution to this problem would be the development of an interview tracking sheet that would help the interviewer keep track of:

1. Which questions have been asked. This is to make sure no questions are inadvertently skipped during the interview, as was done twice in this study. (See the discussion of post-session interview question 7 in Chapter Four.)

2. Participants' discrete responses to each question (e.g., yes, no, it depends, no preference, etc.).

3. Participants' response consistency, by cross-referencing related questions. In other words, each interview question on the tracking sheet is flagged with an indication of which previous questions it is related to. After the participant gives a response to a question, the interviewer can check whether that response is consistent with the responses given for all related questions. If it is not, the participant can be asked to clarify the apparent discrepancy.

12. Construct clearer questions for gauging participants' knowledge of HTML. Given the confusing and inexplicable results obtained for the BWPP exam scores' relationship to participants' prior awareness of HTML and their experience using HTML, it would make sense to revamp the way this information was gathered. Originally, only one question was asked for each of these concepts (see Appendix I). However, it would probably be a better idea to triangulate on each concept by asking a series of more specific questions that, taken together, better exemplify it. For example, instead of simply being asked "Do you know what HTML is and what it is used for?", participants could be asked to select the correct definition of HTML from a number of possible choices. This question could be followed by a multiple-select question asking the participant to identify all purposes for which HTML is used.
13. Control for and/or track the use of the Web browser's Find feature. In this study, no attempt was made either to control or to gather information about the use of the Web browser's Find feature during participants' study sessions. This was perhaps an oversight, as use of the Find feature could circumvent some of the issues involved in finding previously read information and in reorienting back to a person's point of forward progress, activities that were suspected of having a possible impact on participants' performance and/or satisfaction level. Use of the Find feature might well negate the need to scroll during such activities and, therefore, entirely avoid any possible performance and/or satisfaction effects that might be associated with scrolling in the performance of these tasks. If so, one role of scrolling would be eliminated (or at least diminished) and would cease to be a factor in the study, if it is even warranted to be considered as such. This issue is, of course, most relevant for those undergoing the partial-page treatment, where the amount of information contained on a single page exceeds the bounds of the screen. Since a Web browser's Find feature is functional only within the page currently being viewed by the user, it would be of no use in finding information located on previous pages. This is true for both full- and partial-page screen designs. Thus, the Find feature's only usefulness would be for locating information within the current page. And while this would be a practical use for partial-page participants, it would be much less so for full-page participants, given the limited amount of text on a page in a full-page interface design.

The question of how to handle the issue seems to yield no practical alternatives other than to instruct participants not to use the Find feature and/or to ask participants to report on their use of the feature after completing the BWPP tutorial. Disabling the Find feature might be an option, but how this could be done is not readily apparent short of hacking the browser's programming code. Moreover, both disabling the feature and instructing participants not to use it would seem to impose unrealistic and unjustifiable restrictions on the participants. Tracking the actual use of the Find feature might also be possible, and even desirable, but would, to the best knowledge of this researcher, require either a relatively high degree of technological prowess or a high level of direct observation. While either or both of these steps could be implemented, doing so would undoubtedly require more expenditure of time, effort, and money. It would seem, then, that the most readily practical alternative would be to ask participants to self-report on their use of the Find feature after they have completed the tutorial. This could be done programmatically or through direct questioning by the study session proctor. Regardless of the level of information gathered regarding participants' use of the Find feature, its synthesis could reveal important details about if, when, and how users might use the Find feature in a WBT program, and to what degree it might mitigate or otherwise impact user scrolling, especially in the context of investigating differences between full- and partial-page WBT interface designs.

Finally, this discussion of the Find feature seems also to call for some comment regarding the possible inclusion of a Search feature in the tutorial. The most salient difference between the Find and Search features is that the former is limited to a single page (i.e., the page the user is currently viewing), while the latter ranges across all, or at least a large number, of the pages in the WBT program. While there are certainly benefits to a Search mechanism in WBT programs, its functional range would seem to be an issue for linearly designed WBT programs (such as the BWPP used in this study), where the student must complete all sections in order. In other words, the restrictions of access imposed by a linear program would need to be safeguarded in the Search mechanism, which one might suppose would mean that its functional range would be limited to only those sections of the instructional program that the student has completed. The practicality of this would be dictated by the technological capabilities of the researcher(s), as well as other considerations, such as time and money. It is for this reason that, within the confines of improving this study, a Search mechanism is not being recommended.
Recommendations for Future Research

1. Let all participants experience both interface designs. In this proposed study, participants would experience both interface designs. There are several ways to organize these experiences: (1) having two separate tutorials (one full-page and the other partial-page) taken one after the other; (2) breaking a single tutorial into two sub-tutorials (one full-page and the other partial-page); (3) alternating section screen designs within one tutorial (e.g., Sections 1, 3, and 5 are of full-page design, while Sections 2, 4, and 6 are of partial-page design); or (4) randomly determining the screen design of each section within one tutorial, as long as each design is represented equally. For the first three renditions, which screen design comes first could be randomly determined.
2. Vary the load time of next and previous pages in a full-page, non-scrolling format. This proposed study was prompted by participants in the current study who indicated that a major factor in their preference for or against a non-scrolling screen design was how fast pages load. While the specific focus in the current study was the loading of previous pages, it could be broadened to both previous and next pages, since the full-page design requires more pages to be loaded, and more often, than the partial-page design. The main purpose of this proposed study would be to determine the load-time threshold at which page loading starts to affect the learner's satisfaction level. (A sketch of this manipulation follows this list.)

3. Place navigational controls at the top and bottom of each tutorial page. The tutorial would keep track of which buttons were clicked for each page. The idea here is twofold: (1) to see whether there is a clear preference for the location of the navigational controls; and (2) to gauge whether participants in the partial-page group might be more tempted to skip text on a page. The concept is very similar to suggestions in the Recommendations for Improving This Study section above.

4. Vary the amount of scrolling involved in a partial-page design. The idea for this proposed study is to gauge the acceptable limits of text (amount and density) on a scrolling page. The question relates to statements by participants in the current study who said they did not mind scrolling as long as there was not too much of it. The organization of the study could follow the renditions outlined in item 1 above, except that page length would replace screen design.

5. Test retention over time. This would basically be an extension of the current study, in which participants would take another exam on HTML after a certain amount of time had elapsed since taking the tutorial. A comparison of exam scores might provide a better indication of whether one screen design is more instructionally advantageous than the other.
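As noted in item 2, a minimal sketch of how the load-time manipulation might be simulated follows. The showPage routine stands in for the tutorial's own page rendering (a hypothetical name), and the delay values are merely examples.

```typescript
// Sketch of the proposed manipulation: hold the next page back for a
// configurable delay before displaying it.
declare function showPage(pageId: string): void; // assumed rendering routine

function showPageWithDelay(pageId: string, delayMs: number): void {
  // Varying delayMs across conditions (e.g., 0, 500, 1000, or 2000 ms)
  // would let satisfaction ratings be compared against simulated load times.
  setTimeout(() => showPage(pageId), delayMs);
}
```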


Conclusion

This study failed to find that the presence or absence of scrolling alone is a significant factor either in how well a person performs in a WBT program or in how satisfied they are with the learning experience. Post-session interview data were consistent with these results, revealing that a majority of interview respondents did not think scrolling had any impact on either learning or satisfaction with the learning experience. Perhaps the main reason behind these results is that the pervasiveness of scrolling pages on the Web has instilled an expectation of scrolling among the majority of users. It may be, as more recent literature on Web scrolling suggests, that Web users have, over time, simply become more accustomed to, and thus tolerant of, scrolling. Also, there is little doubt that the advent of the wheel mouse has taken the edge off the act of scrolling for many people.

It is interesting to note, however, that even though the majority of post-session interview respondents saw no relationship between scrolling and their performance or satisfaction level, most of the respondents indicated a preference for a full-page WBT interface. They provided a number of reasons for this preference, many of which revolved around the idea of chunking the instructional content into smaller, more digestible portions. Whether or not these anecdotal preferences constitute a compelling enough reason for a WBT designer to choose a full-page design over a partial-page design, however, must be debated on grounds other than the results of this particular study, such as time and cost-effectiveness.

Do the findings of this study, then, put the issue of scrolling in WBT screen design to rest? Hardly. As was pointed out in Chapters One and Two, there is a dearth of research looking at the effects of scrolling specifically within the domain of Web-based instructional programs. The guidelines proffered pertaining to scrolling in WBT interface design are derived primarily, if not entirely, through extrapolation from research on scrolling as it is manifested in other contexts, such as Web searches and finding information in a text passage. While these guidelines have merit and may well be useful in informing WBT interface design decisions, they have not yet been tested sufficiently in the complex environment of Web-based instruction. Hopefully, this study provides one more thread with which to help weave a more useful, evidence-based set of WBT development guidelines. That no significant differences in performance or satisfaction between the full-page and partial-page groups were found in this particular study does not mean that WBT instructional designers are now free to decide this design issue on a whim or with the simple toss of a coin. What it does mean is that both interface designs remain viable options for the WBT designer for the time being.
Notes

1. While the term "frame" is sometimes used synonymously with "page" or "screen" to refer to a single computer screen of information, it has an alternative meaning with regard to the Web. In Web terminology, a frame refers to "the division of a Web page into individual sections, each with its own hypertext reference" (Alden, 1998, p. 69), where one or more parts of the screen can remain static while the other part or parts change and/or scroll (Barron, 1998, paragraph 13). A frame-based screen design might be considered a hybrid of partial-page and full-page designs. While, conceptually, the frame-based design might solve some issues of screen design (Bernard, 2001), such as navigation and program feature buttons disappearing as users scroll down a Web page (although this is also solved by the full-page design), it can also create and/or exacerbate other design problems (Barron, 1998; Bernard, 2001). For instance, it can make printing more difficult, as well as increase access time due to having to transmit multiple pages (Barron, 1998). In any case, for the purposes of this study, the frame-based design does not present a screen design option substantially distinct from either the partial-page or full-page designs.

2. The Cronbach's alpha coefficient of the Internet Programming posttest was actually reported to be .89 (Majchrzak, 2001, p. 39); however, this included all 37 posttest questions. The 37th posttest question was an essay question and could not be included in this study's WBT exam, because it was beyond the capabilities of this researcher to program an adequate computer scoring rubric for an essay question. Thus, for the purposes of this study, it was more proper to calculate the reliability coefficient of Majchrzak's 36 multiple-choice posttest questions, which turned out to be .80.

3. The Global Assessment of Functioning Rating Scale (GAF) is a composite index that mental health clinicians use to judge a person's highest level of functioning during the past year (Kaplan & Sadock, 1991). A GAF score reflects "[an individual's] current overall occupational, psychological, and social functioning [but] is not supposed to reflect physical limitations or environmental problems" (Morrison, 1995). The GAF is used as Axis V in the multiaxial diagnostic system of the Diagnostic and Statistical Manual of Mental Disorders (fourth edition), which is the primary reference for clinical diagnoses of mental illnesses in the United States (American Psychiatric Association, 2000).
144 References Ageless Learner.com. (2001). Usabi lity, user centered design, & learnability Retrieved August 15, 2005 from http://www.agelesslearner.com/intros/usability.html Alden, J. (1998). Learning technology series: A trainers guide to web based instruction: Getting started on intranet and intern et based training. Alexandria: VA: American Society for Training and Development. Alessi, S. M., & Trollip, S. R. (2001). Multimedia for learning: Methods and development (3rd ed.). Boston: Allyn and Bacon. American Psychiatric Association. (2000). Diagnos tic and statistical manual of mental disorders (4th ed., text revision). Washington, DC: A merican Psychiatric Association Atkinson, R. & Shiffrin, R. (1968). Human memory: A proposed system and its control processes. In K. Spence & J. Spence (Eds.). The psychology of learning and motivation: Advances in research and theory (Vol. 2). New York: Academic Press. Ayersman, D. J. (1996). Reviewing the research on hypermedia based learning. Journal of Research on Computing in Education 28 (4), 500 525. Baker R (2003) The Impact of Paging Vs. Scrolling on Reading Online Text Passages. Usability News 5, 1 Retrieved August 24, 2005 from : http://psychology.wichita.edu/surl/u sabilitynews/51/paging_scrolling.htm

PAGE 160

145 Barron, A. E. (1998). Designing web base training. British Journal of Educational Technology, 29 (4), 355 370 Retrieved May 7, 2000 from http://itec h1.coe.uga.edu/itforum/paper26/paper26.html Barth, J. L. (1990). Method of instruction in social studies education (3rd ed.). Lanham, MD: University Press of America. Beard, D. V., & Walker, J. Q. (1990). Navigational techniques to improve the display of large two dimensional spaces. Behavior and Information Technology 451 466. Becker, H. J. (1992). Computer based integrated learning systems in the elementary and middle grades: A critical review and synthesis of evaluation reports. Journal of Educational Computing Research 8 (1), 11 41. Berson, M. J. (1996). Effectiveness of computer technology in the social studies: A review of the literature. Journal of Research on Computing in Education 28 (4), 486 499. Bernard L.M. (2001). Criteria for optimal web desig n (designing for usability ). S oftware U sability Research laboratory (SURL) Retrieved August 24, 2005 from http://psychology.wichita.edu/optimalweb/text.htm Bernard, M., Baker, R., & Fernandez, M. (2002). Paging vs. Scrolling: Looking for the Best Way to P resent Search Results Usability News 4, 1. Retrieved June 14, 2002 from http://psychology.wichita.edu/surl/usabilitynews/41/paging.htm Bernard, M., Fernandez, M. and Hull, S. (2002), The effects of line length on children and adults' online reading performance Usability News 4.2. R etrieved April 4, 2005 from http://psychology.w ichita.edu/surl/usabilitynews/42/text_length.htm

PAGE 161

146 Bixler, B., & Bergman, T. (2001). Definition of web based training (WBT) Retrieved August 24, 2002 from http://www.personal.psu.edu/staff/b/x/bxb11/CBTGuide/WBT/WBTDef.htm Bransford, J., Brown, A., & Cockin g, R. (Eds.) (1999). How people learn: Brain, mind, experience, and school Washington, DC: National Academy Press. Retrieved August 23, 2005 f rom http://newton.nap.edu/html/howpeople1/ Brehover, D. M. (2000). The relevance of performance improvement to i nstructional design. In G. M. Piskurich, P. Beckschi, & B. Hall (Eds.), The ASTD handbook of training design and delivery: A comprehensive guide to creating and delivering training program instructor led, computer based, or self directed. New York: McGra w Hill. Brown, T. (1997a). Multimedia in education: An introduction to computer based instruction http://scs.une.edu.au/573/573_1.html Brown, T. (1997b). Multimedia in education: Learner control Retrieved May 24, 2004 from http://scs.une.edu.au/573/573_3 .html Brown, T. (1997c). Multimedia in education: Constructivism Retrieved May 24, 2004 from http://scs.une.edu.au/573/573_5.html Catania, A. C. (1992). Learning (3rd ed.). Englewood Cliffs, NJ: Prentice Hall. Christmann, E., Badgett, J., & Lucking, R. (1997). Microcomputer based computer assisted instruction within differing subject areas: A statistical deducti on. Journal of Educational Computer Research 163(3), 281 296. Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research 53(4), 445 459.

PAGE 162

147 Clark, R. E. (1991). When researchers swim upstream: Reflections on an unpopu lar argument about learning from media. Educational Technology 31(2), 34 40. Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21 29. Cohen, J. (1992). A power primer. Psychological Bulletin 112, 155 159. Crosby, M. E., & Stelovsky, J. (1995). From multimedia instruction to multimedia evaluation. Journal of Educational Multimedia and Hypermedia 4( 2/3), 147 162. Cuban, L., & Kirkpatrick, H. (1998). Computers make kids smarter right? Techn os 7(2), 26 31. Cunningham, D. J. (1986). Good guys and bad guys. Educational Communication and Technology Journal 34(1), 3 7. Donovan, S., Bransford, J., & Pellegrino, J. (Eds). (1999). How people learn: Bridging research and practice Washington, DC: N ational Academy Press. Retrieved August 23, 2005 from http://bob.nap.edu/html/howpeople2/ Dyson, M. C., & Kipping, G. J. (1998). The effects of line length and method of movement on patterns of reading fr om screen. Visible Language 32, 150 181. Eberts, R. E. (1997). Computer based instruction. In M. Helander, T. K. Landeaur, & P. Prabhu (Eds.), Handbook of human computer interaction (2nd ed., pp. 825 847). Amsterdam, The Netherlands: Elsevier. Ellis, A. L ., Wagner, E. D., & Longmire, W. R. (1999). Learning technology series: Managing web based training: How to keep your program on track and make it successful. Alexandria: VA: American Society for Training and Development.

PAGE 163

148 Fleming, M., & Levie, W.H. (1993) Instructional message design: Principles from the behavioral and cognitive sciences. Englewood Cliffs, NJ: Educational Technology Publications. Fletcher Flinn, C. M., & Gravatt, B. (1995). The efficacy of computer assisted instruction (CAI): A meta analy sis. Journal of Educational Computing Research 12 (3), 219 241. Friend, C. L., & Cole, C. L. (1990). Learner control in computer based instruction: A current literature review. Educational Technology November, 47 49. Galitz, W. O., (1993). User interface screen design. Wellesley, MA: QED Information Sciences. Geraci, M. G. (2002). Designing Web based instruction: A research review on color, typography, layout, and screen density Retrieved May 21, 2005 from http://aim.uoregon.edu/pdfs/Geraci2002.pdf Gordon S. E. (1994). Systematic training program design: Maximizing effectiveness and minimizing liability Englewood Cliffs, NJ: PTR Prentice Hall. Grabinger, R. S., & Osman Jouchoux, R. (1996). Designing screens for learning. In H. Van Oostendorp & S. de Mul (Eds.), Advances in the discourse processes: Vol. LVIII. Cognitive aspects of electronic text processing (pp. 101 212). Norwood, NJ: Ablex. Greenfield, P. (1984 ). Mind and media: The effects of television, video games, and computers Cambridge, MA: Harvard University Press. Harrell, W. (1999). Effective monitor display design. International Journal of Instructional Media 26(4), 447 458.

PAGE 164

149 Hergenhahn, B. R. (1988). An introduction to theories of learning (3rd ed.). Englewoods Cliffs, NJ: Prentice Hall. Hiemstra, R., & Brockett, R.G. (1994). From behaviorism to humanism: Incorporating self direction in learning concepts into the instructional design pro cess. In New ideas about self directed learning Norman, OK: Oklahoma Research Center for Continuing Professional and Higher Education, University of Oklahoma. Retrieved May 23, 2004 from http://www distance.syr.edu/sdlhuman.html Huang, W. Diefes Dux, H., Imbrie, P.K., Daku, B. & Kallimani, J.G. (2004). Learning motivation evaluation for a computer based instructional tutorial using ARCS motivational design model. Paper presented in the annual meeting of Frontier In E ducation Conference, Savannah GA. Herg enhahn, B. R. (1988). An introduction to theories of learning (3rd ed.). Englewoods Cliffs, NJ: Prentice Hall. Horton, W. (2000). Designing web based training: How to teach anyone anything anywhere anytime. New York: Wiley. Hsu, T., Wang, C. & Wang, H. (20 02). An empirical study of the learning motivation, satisfaction and effectiveness among web based learners in Taiwan. In G. Richards (Ed.), Proceedings of World Conference on E Learning in Corporate, Government, Healthcare, and Higher Education 2002 (pp. 1628 1631). Chesapeake, VA: AACE. Huitt, W. (May, 2000). The information processing approach Retrieved June 12, 2002 from http://teach.valdosta.edu/whuitt/col/cogsys/infoproc.html

PAGE 165

150 Johnston, R. (1995). The effectiveness of instructional technology: A revi ew of the research. In Proceedings of the virtual reality in medicine and developers exposition Cambridge, MA: Virtual Reality Solutions, Inc. Kaplan, H. I., & Sadock, B. J. (1991). Synopsis of psychiatry: behavioral science, clinical psychiatry (6th ed.). Baltimore: Williams & Wilkins. Kay, A. (1996). Revealing the elephant: The use and misuse of computers in education. Educom Review 31 (4), 22 24,26,28. Keller J. M., & Suzuki, K. (1988). Use of the ARCS motivation model in courseware design. In D, H. Jonassen (Ed.), Instructional designs for microcomputer courseware ( pp 401 434). Hillsdale, NJ: Lawrence Erlbaum. Kerlin, B. (1992) Cognitive Engagement Style Self Regulated Learning and Cooperative Learning Retrieved on May 21 2002 from www.lhbe.edu.on.ca/teach2000/onramp/srl/self_reg_learn.html Kolers, P. A., Duchnicky, R. L., & Ferguson, D. C. (1981). Eye movement measurement of re adability of CRT displays. Human Factors 23, 517 527. Koyani, S. J., Bailey, R. W., & Nall, J. R. (2003). Research based Web design & usability guidelines: current research based guidelines on Web design and usability issues Retrieved August 23, 2005 fro m http://usability.gov/pdfs/chapter8.pdf Kozma, R. B. (1991). Learning with media. Review of Educational Research 61(2), 179 211. Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology, Research and Development 4 2(2), 7 19.

PAGE 166

151 Kruse, K. ( 2004 ) The m agic of l earner m otivation: The ARCS m odel Retrieved February 18, 2005 from http://www.e learningguru.com/articles/art3_5.htm Kruse, K., & Keil, J. (2000 ). Technology based training: The art and science of design, develo pment, and delivery. San Francisco, CA: Jossey Bass/Pfeiffer. Kulik, C. L.C., & Kulik, J. A. (1991). Effectiveness of computer based instruction: An updated analysis. Computers in Human Behavior ,7(1&2), 75 94. Laurel, B. (1990). The art of human computer i nterface design Reading, MA: Addison Wesley. Lee, M. J., & Tedder, M. C. (2004). Introducing expanding hypertext based on working memory capacity and the feeling of disorientation: Tailored communication through effective hypertext design. Journal of Educ ational Computing Research 30 (3), 171 195. Levi, M. D. (1998). A shaker approach to web site design BLS Research Papers Retrieved April 12, 2000 from http://stats.bls.gov/ore/htm_papers/st9 70120.htm Lim, G (2003). ICT supported l earning s trategies and learner c entred Instruction. Centre for IT Learning (CITEL) Temasek Polytechnic, Singapore. Retrieved March 3, 2006 from http://www.cdtl.nus.edu.sg/brief/V6n9/sec2.htm Liu, M., & Reed, M. W. (1995). The effect of hypermedia assisted instruction on second language learning. Journal of Educational Computing Research 12 (2), 159 175. Lookatch, R. P. (1995). The strange but true story of multimedia and the Type I error. Technos 4(2), 10 13. Lookat ch, R. P. (1996). The ill considered dash to technology. The School Administrator April 28 31.

PAGE 167

152 Lookatch, R. P. (1997). Multimedia improves learning: apples, oranges, and the Type I error. Contemporary Education 68 (2) 110 113. Lynch, P., & Horton, S. (20 05). Web Style Guide Retrieved March 6, 2006 from http://www.webstyleguide.com/index.html Majchrzak, T. L. (2001). Effects of deadline contingencies in a web based course on html Ph.D. dissertation, Department of Secondary Education, University of South Florida, Tampa, FL. Mazur, J. E. (1990). Learning and behavior (2nd ed.). Englewood Cliffs, NJ: Prentice Hall. Mergendoller, J. R. (1996). Moving from technological possibility to richer student learning: Revitalizing infrastructure and reconstructed pedag ogy. Section 4: Grading the policymakers' solution. Educational Researcher 25 (8), 43 45. Merriam Webster Online. (2005). Definition of scroll. Merriam Webster's Online Dictionary Retrieved August 12, 2005 from http://www.m w.com/dictionary/scrolling Merr ill, M. D. (1994). Instructional design theory Englewood Cliffs, NJ: Educational Technology Publications. Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychology Science 63, 81 97. Mills, C. B. & Weldon, L. J. (1987). Reading text from computer screens. ACM Computing Surveys 19, 4, 329 358. Morrison, G. R., Ross, S. M., & ODell, J. K. (1988). Text density level as a design variable in instructional displays. ECTJ 36(1), 103 1 15.

PAGE 168

1 53 Morrison, G. R., Ross, S. M., Schultz, C. W., & ODell, J. K. (1990). Learner preferences for varying screen densities using realistic stimulus materials with single and multiple screen designs. ERIC # ED323 940. U.S. Department of Education Educationa l Resources Information Center. Morrison, J. (1995). DSM IV made easy: The clinicians guide to diagnosis. New York: Guilford Press. Murphy, E. (1996). Review of the book The art of human computer interface design Readings in Technology & Education. Ret rieved June 12, 2002 from http://www.stemnet.nf.ca/~elmurphy/emurphy/laurel.html Muter, P. (1996). Interface design and optimization of reading of continuous text. In H. Van Oostendorp & S. de Mul (Eds.), Advances in the discourse processes: Vol. LVIII. Co gnitive aspects of electronic text processing (pp. 101 212). Norwood, NJ: Ablex. Mwaura, C. (2003). An i nvestigation of the innovation decision process of faculty members with respect to web based instruction Ph.D. dissertation Department of Educational Studies, Ohio University Athens, OH Nielsen, J. (1997). Changes in Web usability since 1994 Retrieved August 23, 2005 from http://www.useit.com/alertbox/9712a.html Nielsen, J. (2000). Designing web usability: The practice of simplicity. Indianapolis, IN : New Riders. Nielsen, J. (2003). Usability 101: Introduction to Usability. Retrieved August 23, 2005 from http://www.useit.com/alertbox/20030825.html


Nielsen, J. (2005a). Lower-literacy users. Retrieved August 23, 2005 from http://www.useit.com/alertbox/20050314.html
Nielsen, J. (2005b). Scrolling and scrollbars. Retrieved August 23, 2005 from http://www.useit.com/alertbox/20050711.html
Oppenheimer, T. (1997). The computer delusion. The Atlantic Monthly, July, 45–62.
Parsons, C. (2001). The efficiency and preference implications of scrolling versus paging when looking for information in long text passages on a web page. Ph.D. dissertation, Department of Educational Technology, University of Northern Colorado, Greeley, CO. Retrieved May 5, 2005 from http://www.jujapa.com/clark/cgp_diss.html
Pepi, D., & Scheurman, G. (1996). The emperor's new computer: A critical look at our appetite for computer technology. Journal of Teacher Education, 47(3), 229–236.
Petkovitch, M. D., & Tennyson, R. D. (1985). Clark's "learning from media": A critique. Educational Communications and Technology Journal, 32(4), 233–241.
Piolat, A., Roussey, J. Y., & Thunin, O. (1997). Effects of screen presentation on text reading and revising. International Journal of Human-Computer Studies, 47, 565–589.
Piskurich, G. M. (2000). Design and delivery of self-directed training. In G. M. Piskurich, P. Beckschi, & B. Hall (Eds.), The ASTD handbook of training design and delivery: A comprehensive guide to creating and delivering training programs, instructor-led, computer-based, or self-directed. New York: McGraw-Hill.


Reder, L. M., & Anderson, J. R. (1980). A comparison of texts and their summaries: Memorial consequences. Journal of Verbal Learning and Verbal Behavior, 19, 121–134.
Reeves, T. C. (1993). Pseudoscience in computer-based instruction: The case of learner control research. Journal of Computer-Based Instruction, 20(2), 39–46.
Reeves, T. C. (1998). The impact of media and technology in schools: A research report prepared for the Bertelsmann Foundation. Retrieved January 28, 2002 from http://www.athensacademy.org/instruct/media_tech/reeves0.html
Ross, S. M., Morrison, G. R., & O'Dell, J. K. (1988). Obtaining more out of less text in CBI: Effects of varied text density levels as a function of learner characteristics and control strategy. ECTJ, 36(3), 131–142.
Ross, S. M., & Morrison, G. R. (1989). Reducing the density of text presentations using alternative control strategies and media. ERIC #ED308836. U.S. Department of Education, Educational Resources Information Center.
Russell, T. L. (1999). The no significant difference phenomenon: As reported in 355 research reports, summaries and papers (4th ed.). Raleigh, NC: North Carolina State University.
Salomon, G., Perkins, D. N., & Globerson, T. (1991). Partners in cognition: Extending human intelligence with intelligent technologies. Educational Researcher, 20(2), 2–9.
Schwartz, E., Beldie, I. P., & Pastoor, S. (1983). A comparison of paging and scrolling for changing screen contents by inexperienced users. Human Factors, 25, 279–282.


Severinson Eklundh, K., Fatton, A., & Romberger, S. (1996). The paper model for computer-based writing. In H. van Oostendorp & S. de Mul (Eds.), Advances in the discourse processes: Vol. LVIII. Cognitive aspects of electronic text processing (pp. 101–212). Norwood, NJ: Ablex.
Shneiderman, B. (1998). Designing the user interface: Strategies for effective human-computer interaction (3rd ed.). Reading, MA: Addison-Wesley.
Sloan, B. (1997). Cyberhope, or cyberhype? CMC and the future of higher education. Retrieved April 25, 2005 from http://alexia.lis.uiuc.edu/~haythorn/cmc_bs.htm
Smith, P. L., & Ragan, T. J. (1993). Instructional design. New York: Merrill.
Smith, S. L., & Mosier, J. N. (1986). Guidelines for designing user interface software (Technical Report ESD-TR-86-278). Hanscom Air Force Base, MA: USAF Electronic Systems Division.
Sperling, G. A. (1960). The information available in brief visual presentations. Psychological Monographs, 74 (Whole No. 498).
Spool, J., Scanlon, T., Snyder, C., DeAngelo, T., & Schroeder, W. (1999). Web site usability: A designer's guide. San Francisco: Morgan Kaufmann Publishers, Inc.
Steinberg, E. R. (1989). Cognition and learner control: A literature review, 1977–88. Journal of Computer-Based Instruction, 16(4), 117–121.
Tinker, M. A., & McCullough, C. M. (1962). Teaching elementary reading (2nd ed.). New York: Appleton-Century-Crofts.


Tullis, T. S. (1997). Screen design. In M. Helander, T. K. Landauer, & P. Prabhu (Eds.), Handbook of human-computer interaction (2nd ed., pp. 825–847). Amsterdam, The Netherlands: Elsevier.
Ullmer, E. J. (1994). Media learning: Are there two kinds of truth? Educational Technology Research and Development, 42(1), 21–32.
Van Oostendorp, H., & van Nimwegen, C. (1998). Locating information in an online newspaper. Journal of Computer-Mediated Communication, 4(1), September 1998. Retrieved August 23, 2005 from http://jcmc.indiana.edu/vol4/issue1/oostendorp.html
Vockell, E., & Brown, W. (1992). The computer in the social studies curriculum. Watsonville, CA: Mitchell McGraw-Hill.
Webopedia Computer Dictionary. (2005a). What is HTML? Retrieved August 8, 2005, from http://www.webopedia.com/TERM/H/HTML.html
Webopedia Computer Dictionary. (2005b). What is scroll? Retrieved August 8, 2005, from http://www.webopedia.com/TERM/s/scroll.html
Webopedia Computer Dictionary. (2005c). What is World Wide Web? Retrieved August 8, 2005, from http://www.webopedia.com/TERM/W/World_Wide_Web.html
Wikipedia, the free encyclopedia. (2006). HTML. Retrieved March 22, 2006 from http://en.wikipedia.org/wiki/HTML
Youngman, M., & Scharff, L. (1998). Text width and margin width influences on readability of GUIs. Paper presented at the 1998 Southwest Psychological Association Convention. Retrieved August 27, 2005 from http://hubel.sfasu.edu/research/textmargin.html


Appendices


Appendix A: Comparison of Full-Page and Partial-Page WBT Screen Designs

Figure 1. Comparison of Full-Page and Partial-Page Screen Designs: a non-scrolling, full-page design shown alongside a partial-page, scrolling design. (Separate screens are at 600 x 480 resolution. The black box approximates the view on a 17-inch monitor, with a printout of the Web page equaling two 8.5 x 11 pages.)


Appendix B: Modifications to the Original Proposed Study

Prior to the first pilot test for this study in April 2004, three proposed instruments were dropped from the study protocol: the Study Suitability Survey, the Participant's Web Skills Assessment Sheet, and the GAF Worksheet. The Study Suitability Survey was a proposed filtering tool for making sure that all prospective study participants met certain criteria for taking part in the study. It was originally conceived as a paper-based instrument that was to be administered to all undergraduate students enrolled in designated social work, rehabilitation, and psychology classes during the term the study was conducted. The survey was intended to assess each student's level of experience with the Web, as well as his or her familiarity with the Global Assessment of Functioning Rating Scale (GAF), which is discussed below. Only those students demonstrating a certain level of Web proficiency and who had no significant experience with the GAF would have qualified for participation in this study, and it was from this filtered group that a random sample was to be drawn. The idea of limiting inclusion in the study to only those who met these criteria was to control for any interaction effects related to prior Web experience and/or HTML experience. However, during the initial development of the study's Web delivery framework, the Study Suitability Survey protocol was determined to be too impractical to implement as originally conceived. It was also redundant, as participants would also be completing a brief Web-based program intended to assess their level of Web skills and familiarity with the topic of the WBT program. This Web skills program was the Web Skills Assessment (WSA) program discussed in Chapter Three. And as discussed in Chapter


Three, while the WSA program remained a part of the study protocol, it was never used as a filtering mechanism for study participation. For the same reasons, the proposed Participant's Web Skills Assessment Sheet was also deleted from the study protocol.

Once the study was ready to pilot for the first time in April 2004, it became apparent that other modifications were needed with regard to the sampling process for study participants. First, the study's principal investigator (PI) did not have the desired level of access to the target population, which limited the pool of potential participants and forced the PI to undertake a more direct recruitment campaign. The recruitment of participants was severely hampered by the fact that test runs of the study's WBT program averaged around two hours. This made participation in the study a hard sell to potential participants, as it became clear that the original incentives proposed for study participation (class extra credit and a free WBT program on a mental health-related topic) were outweighed by the time and effort students would have to expend to participate. This realization led to more changes in the sampling protocol. The next round of recruitment measures included the additional incentive of $20.00 in cash for those who participated in the study. The recruitment campaign itself expanded from targeting students and classes in specific mental health-related academic programs to general recruitment of any undergraduate student in any academic program, as long as he or she met the participation criteria. The methods of recruitment are described in Chapter Two. The combination of the expanded recruitment campaign and the promise of pecuniary reinforcement resulted in the recruitment of the participants needed in order to conduct the first pilot test in April 2004. It also negated the need for selecting an oversample of


150 participants, as originally proposed (to offset the possibility of attrition). The recruitment campaign was so successful that there was a perpetual waiting list of potential replacement participants to draw from if any scheduled participant cancelled an appointment or simply no-showed. Also, the fact that participation was first come, first served (as long as the participant met the participation criteria) maintained an adequate randomness in the sampling process.

The outcome of that first pilot resulted in some further modifications, this time to some of the original data collection procedures and instruments, most notably the WBT program around which the entire study revolved. The WBT program used in the first pilot was entitled the Global Assessment of Functioning Rating Scale (GAF). The GAF tutorial was developed by Community Mental Health Online Education (CMH-OLE), a Web-based education and training initiative of the Department of Mental Health Law & Policy, Louis de la Parte Florida Mental Health Institute at the University of South Florida in Tampa. CMH-OLE developed and offered a number of continuing-education-credit WBT programs to mental health professionals across the United States. This study's PI was the primary instructional designer and Web programmer for these WBT programs, including the GAF tutorial. Participants in this first pilot study were required to complete the GAF tutorial, which consisted of four content sections and a practice section, and culminated in an eight-item final exam. Each of the eight GAF exam items had the participant read a brief vignette involving a fictional person, then enter a GAF score (between 0 and 100) for that fictional person.


While there was a single best answer for each of the exam items, the correctness of participants' answers was judged by the computer based on a 21-point range, extending from 10 points below the best answer to 10 points above the best answer. For example, if the best answer for one of the exam items was 42, then any answer between 32 and 52 was considered correct. The reason for this range of correct answers reflects the problematic nature of the GAF instrument itself. As an assessment tool, the procedure for using the GAF rating scale was very straightforward; however, the crux of its successful utilization was the level of knowledge, skills, and facility of the practitioner employing it. Clinical judgment in assigning GAF scores is paramount. Because of the inherent subjectivity involved in clinical assessments, practitioners do not always agree with each other when it comes to the GAF scores assigned for particular cases. And with a 100-point scale to work with, practitioners rarely assign the exact same GAF score. Thus, the scale allows for some flexibility. In fact, for the development of the GAF tutorial, the best answer for each vignette of the GAF tutorial's final exam was derived essentially as the average of the ratings submitted for that vignette by a panel of 33 practitioners with experience and expertise in using the GAF. The individual GAF ratings for each vignette varied, sometimes quite widely. It is no secret that, in practice, learning how to use the GAF rating scale skillfully is challenging to most working mental health practitioners (licensed and paraprofessional alike). The CMH-OLE's GAF tutorial results provided ample evidence of this, as the majority of mental health practitioners who took this online course had to retake the tutorial at least twice before successfully completing it.
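The 21-point scoring rule described above is simple to state in code. The following is a minimal sketch only, not the tutorial's actual scoring routine (which this report does not reproduce); the function name gafCorrect is hypothetical:

    <HTML>
    <HEAD>
    <TITLE>GAF Scoring Sketch</TITLE>
    <SCRIPT>
    // Hypothetical check: an answer counts as correct when it falls within
    // the 21-point window around the panel-derived best answer, i.e.,
    // from 10 points below to 10 points above.
    function gafCorrect(answer, best) {
      return Math.abs(answer - best) <= 10;
    }
    // Example from the text: with a best answer of 42, any entry
    // from 32 through 52 is judged correct.
    document.write(gafCorrect(38, 42)); // true
    document.write(gafCorrect(53, 42)); // false
    </SCRIPT>
    </HEAD>
    <BODY></BODY>
    </HTML>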


In hindsight, it is no wonder that complete novices to the GAF (as were the first pilot study participants) would have a very difficult time successfully completing the GAF tutorial. Aside from the fact that it took pilot participants an average of nearly two hours to complete the tutorial, the main problem stemming from this pilot was the failure to establish a strong enough reliability coefficient to justify continuing with the main study. The original GAF exam data from 24 pilot participants yielded a Cronbach's alpha of .40. Several alternative methods of analysis were conducted in an effort to salvage the study, but to no avail. The Cronbach's alpha never rose above .322. It was eventually determined that the nature of the GAF was far too problematic to ever yield reliable results for the study's target population. Therefore, the decision was made to replace the GAF tutorial with one pertaining to a much more concrete subject matter. Eventually, a CD-ROM-based instructional program entitled Internet Programming (IP) was identified as a possible replacement for the GAF program (see Chapter Three for more information).

Once the first pilot was underway, it became apparent that the selection process for the post-session interview needed to be slightly modified. While the study computer lab could accommodate up to four participants at a time (the computer lab for the first pilot study was located in a suite of rooms that allowed for four Internet-connected computer workstations), the actual number of participants participating in a study session at any one time varied from one to three. Therefore, instead of focusing on the random


selection of every third participant for the interview, it was decided to randomly select one participant from each study session. This new random selection process is discussed in Chapter Three.


Appendix C: Frequency Table of Participant Ages

Table 15
Frequencies of Study Participant Ages (N = 129)

Age   Frequency   Percent   Cumulative Percent
18        8          6.2           6.2
19       24         18.6          24.8
20       30         23.3          48.1
21       20         15.5          63.6
22       19         14.7          78.3
23        8          6.2          84.5
24        6          4.7          89.1
25        3          2.3          91.5
26        1           .8          92.2
27        1           .8          93.0
28        1           .8          93.8
30        2          1.6          95.3
32        2          1.6          96.9
33        1           .8          97.7
44        1           .8          98.4
46        1           .8          99.2
52        1           .8         100.0


Appendix D: Samples of Recruitment Materials

Figure 2. Sample Recruitment Poster


Figure 3. Sample Recruitment Handbill

The following is a sample recruitment email (in Courier font) that was disseminated to various university contacts, such as undergraduate class instructors and student organizations:

Hi. I'm now recruiting research subjects for my main dissertation study. If you know of anyone who meets the criteria below and who would like to earn $20.00 cash for a single study session, please forward them this information. The study is running through May. At the moment, I have 96 slots available. I am opening sessions in phases. The current phase runs through May 6; however, if folks cannot come to any of these sessions, they can submit their email address to my waiting list, and I'll contact them as soon as more slots are available. These are single sessions, and while the length of study sessions will vary depending on how fast the individual works, the current average is around 65 minutes


(although subjects should be prepared to spend 2 hours). For most days, I will be running 3 sessions per day beginning at 9:00 AM, 12:00 PM, and 3:00 PM. I am also open to setting up special sessions on weekday evenings, Saturday and Sunday, but these would need to be set up by contacting me directly by office phone (xxx xxxx), cell (xxx xxx xxxx), or email (XXXXX@xxxx.xxx.xxx).

PURPOSE OF THE STUDY: The focus of this study is an inherent aspect of Web page design that could have important implications for the efficiency and effectiveness of Web-Based Training (WBT) programs. It is hoped that the results of this study will help inform current and future WBT designers in making fundamentally sound decisions about their instructional program designs. Subjects will take an online course about how to create basic Web pages using HTML (the basic programming language for the Web). In addition, one person in each session will be randomly selected for a brief audio-taped interview. The study is completely anonymous and innocuous. The only personal information asked is gender and age.

WHERE: The study is being conducted here at xxxx in xxx xxxx.

PARTICIPANT CRITERIA:
1) They must be an undergraduate.
2) They must know little or nothing at all about how to create Web pages using HTML (the base language for constructing Web pages) by itself. If they are fairly familiar with HTML, even if through the use of a design-view application such as Dreamweaver, I'm afraid I will NOT be able to use them. However, if they do not know how to create a Web page, or if they somehow create Web pages without ever seeing any of the HTML code, they would be a good candidate for my study.
3) They must possess "adequate web skills." By this I mean that they are not a complete novice to computers and the Internet/World Wide Web; that they know how to use a Web browser and are fairly familiar with how to get around on the Web.

COMPENSATION: Each subject will be paid $20.00 for completing a study session.


HOW TO SIGN UP: For more details and/or to sign up for the study, go to http://xxxxxx.xxxx.xxx.xxx/study.htm. Subjects select a study session slot and are asked only for their first name, phone number, and email address in case they must be contacted about changes in appointment times. When they sign up, they will be issued a confirmation document that will include directions for canceling or changing their appointment, directions to the study site, and parking information. Online registration is the preferred way of signing up for the study, as subjects receive a confirmation with directions and instructions. However, if necessary, students may also register by contacting me (phone: xxx xxxx; cell: xxx xxx xxxx; or email: xxxxx@xxxx.xxx.xxx) and leaving their first name, phone number, and email address. I will return their call to either schedule an appointment or to inform them that all slots for the pilot study have been filled.

Thanks.
My Best,
Phil

The following is a recruitment advertisement placed in the university student newspaper:

Undergrad subjects needed for USF study. $20, single session. Details and criteria at: xxxxxx.xxxx.xxx.xxx/study.htm


Appendix E: The Study Participant Scheduling Process

Virtually all of the participant sign-up and session scheduling was done automatically via the study Web site. The site's home page provided links to a synopsis of the study, the criteria for participating in the study, a map and written directions to the study site, and contact information for the study's principal investigator (see Figure 4).

Figure 4. The Study Web Site Home Page

Students were instructed to read the criteria for participation in the study. The home page also displayed a message, updated in real time, about the status of the


study; that is, whether or not participants were still being accepted, and, if so, how many participant slots were still available and over what time period. The message also included the current average session time, which was calculated directly from the start and stop times of those participants who had already completed their study sessions. If no study slots were available, students could click on a button to put their email address and phone number on a waiting list to be contacted in the event slots were to open in the future. If an appointment was cancelled, an email was sent to those on the waiting list indicating that a slot had opened and was available on a first come, first served basis. If sessions were not currently being scheduled for some reason, a message to that effect would be provided, along with a date for when more sessions might be opened. If slots were still available, students would click a button to continue on to the scheduling page. However, before arriving at the scheduling page, students were taken to a page that presented the participation criteria (see Figure 5). They would then click a button to continue on to the scheduling page, which was an interactive monthly calendar.


Figure 5. Participation Criteria Page

The calendar highlighted only the days on which study sessions were being held (see Figure 6). For time management purposes, study sessions were made available in roughly two-week blocks of time. This was an effort to fill each session with as many participants as possible and, thus, maximize the time available for collecting data for this study. The days on which sessions were being scheduled contained links for the three daily study sessions.


Figure 6. The Scheduling Calendar

Generally speaking, three session times were made available each weekday, with two hours and fifteen minutes allowed for a session. Session one ran between 9:00 AM and 11:15 AM, session two between 12:00 PM and 2:15 PM, and session three between 3:00 PM and 5:15 PM. Participants had until midnight the day before to schedule for the first session of the day, until 11:00 AM the day of for the second session, and until 2:00 PM the day of for the last session of the day.


Up to three participants could sign up for a particular session time, which meant that on a normal day up to nine subjects could participate in the study. Often, however, not all slots in a session would be filled, such that a session might consist of only one or two participants. There were, of course, some days when one or more session times were not available due to conflicts in the principal investigator's (PI's) schedule. The PI could deactivate any given session if he was going to be unavailable during that period, making sure no one could schedule themselves during that time. (It should be noted here that this PI was the sole proctor for every study session.) In addition, special sessions could also be arranged for participants whose own schedules conflicted with the routine session times. Eleven such sessions were conducted, each taking place at some alternate time on a weekday and consisting of a single participant. Except for the time frame, all other study session parameters were implemented as usual.

When a student selected a session date and time from the calendar (i.e., clicked on the session link), he or she was taken to the session sign-up form (see Figure 7), which asked for his or her first name only, plus a telephone number and email address where he or she could be reached if the PI needed to cancel that session for some reason.


Figure 7. The Session Sign-Up Form

After submitting the sign-up form for their selected day and time, students received a confirmation of their study session appointment containing a confirmation code they could use to cancel and/or reschedule their appointment online (see Figure 8). A link to the cancellation page was also located on the home page of the study site. A copy of the confirmation was also emailed to the address provided during sign-up.


Figure 8. Study Session Confirmation

To cancel a study session appointment online, a student would return to the study site home page and click the session cancellation button, which took him or her to the cancellation form (see Figure 9). Online cancellation required the confirmation code given to the student when he or she first signed up.


Figure 9. The Cancellation Form

Once the student submitted his or her confirmation code on the cancellation page, he or she received a confirmation-of-cancellation message, with a button to click to reschedule, in which case he or she would be taken to the scheduling calendar (see Figure 10). Students who cancelled their appointments were emailed an invitation to go back online and reschedule.

Figure 10. The Cancellation Confirmation


The study PI could also, of course, cancel a student's appointment, which was not an uncommon occurrence. During the course of the study, 46 students no-showed for their scheduled study sessions. Many others cancelled by telephone or email. Scheduling or canceling an appointment automatically updated the study database, decrementing or incrementing the overall number of slots available for the study (128), as well as the number of available slots for each study session. Since the study Web site was controlled by this database, the management of study sessions was largely automatic. For example, if all three slots for a session were filled, the Web site's scheduling calendar would gray out (i.e., deactivate) that session's link, effectively closing that session. However, if one of the participants in a full session cancelled his or her appointment, the link for that session would automatically be reactivated and that session slot made available again, thus re-opening that session. At the same time, the overall number of available slots for the study would be incremented by one. When all 128 slots were filled, the scheduling calendar would become inaccessible.
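The dissertation does not detail the database or server code behind this bookkeeping; the sketch below simply models the slot arithmetic described above in client-side terms, with hypothetical names throughout:

    <HTML>
    <HEAD>
    <TITLE>Slot Bookkeeping Sketch</TITLE>
    <SCRIPT>
    // Hypothetical model of the availability logic: the real site kept
    // these counts in a server-side database, not in page script.
    var studySlotsOpen = 128;  // overall slots available for the study
    var sessionSlotsOpen = 3;  // slots remaining in one session

    function schedule() {      // a sign-up decrements both counts
      studySlotsOpen--;
      sessionSlotsOpen--;
    }
    function cancel() {        // a cancellation increments both counts
      studySlotsOpen++;
      sessionSlotsOpen++;      // re-opens the session if it was full
    }
    // A session's calendar link stays active only while the session
    // has room and the study as a whole still has open slots.
    function sessionLinkActive() {
      return sessionSlotsOpen > 0 && studySlotsOpen > 0;
    }
    </SCRIPT>
    </HEAD>
    <BODY></BODY>
    </HTML>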


Appendix F: Subject Prep Checklist

Determine Their Appropriateness for the Study:
1. Are you an undergraduate student?
2. Are you familiar with the Internet and the World Wide Web?
3. Are you familiar with a Web browser such as Internet Explorer or Netscape?
4. Do you think you could get around adequately on the Web?
5. Do you know how to create a Web page using only HTML code?

General Instructions and Information:
1. Overview of the session.
2. In the WSA program, use your best judgment for each question and task.
3. If an error occurs, first follow any instructions that might be provided. If there are no instructions, or you follow the instructions and the error does not correct itself, notify me.
4. Don't share the purpose of the study with others who might wish to participate in the study.
5. Don't share answers with others who might wish to participate in the study.
6. Interviews are audio-taped, but identified with userid only.
7. Must complete and sign a receipt for payment; this information is kept confidential.
8. Orient to bathroom, water fountain, and vending machines.


Appendix G: Informed Consent Form

The following is the content of the consent form each participant was required to read and sign before being allowed to participate in the study.

Consent to Participate in this Study

Instructions
Please read the following information and indicate whether or not you consent to participate in this study at the bottom of this page.

Short Title of Study
Screen Designs in Web-Based Training

Institutional Review Board (IRB) Status
On February 5, 2004, the University of XXXXX XXXXXXX's Division of Research Compliance certified this study as having met the federal criteria as an exempt study (IRB Protocol No. 102185).

Purpose of the Study
The focus of this study is an inherent aspect of Web page design that could have important implications for the efficiency and effectiveness of Web-Based Training (WBT) programs. It is hoped that the results of this study will help inform current and future WBT designers in making fundamentally sound decisions about their instructional program designs.

Benefits for Study Participants
1. The experience of participating in a dissertation study, especially if they are interested in pursuing a Ph.D. themselves.
2. The Basic Web Page Programming program they will be taking during the course of this study can be considered an incentive in and of itself, especially to students who are interested in learning how to create and/or modify basic Web pages using HTML (the basic Web page programming language).
3. Each study participant will receive $20.00 in cash at the conclusion of their study session (see the section on Compensation for Participation below).

Compensation for Participation
Participants will each receive $20.00 in cash at the conclusion of their study session. Each subject receiving money will provide their full name, contact information (address,


phone number, and email address), and signature as acknowledgment that they received the payment. There will be no way to connect an individual payment record with any individual data record (see the Confidentiality and Use of Data section below for more information).

Confidentiality and Use of Data Collected for this Study
It is important that you understand that none of the data you generate during this study (including audio-taped interviews, if you are selected for such) will be identifiable with you in any way. The 6-digit study code with which you logged into the study site is the only unique identifier for the study records, and the study codes will have absolutely no connection to any individual identifying information. The data generated from this study will be accessed only by XXXXXXX XXXXX (and members of his doctoral committee as needed). The data will be used in his dissertation report and may be published in the future. All data will be retained by Phillip Grace on a CD-ROM indefinitely. However, as indicated above, all data will be anonymous.

Consequences for Choosing NOT to Participate in this Study
The only negative consequences of choosing not to participate in this study are that you would not receive the benefits delineated above under the section Benefits for Study Participants.

If you have any questions about any of the information above, please see the proctor. If not, please indicate your consent to participate in this study below.
________________________________________________________________________
YES: I have been provided an oral explanation of the study by the study's principal investigator, read the above information, and consent to participate in this study.
NO: I have been provided an oral explanation of the study by the study's principal investigator, read the above information, and DO NOT wish to participate in this study.


Appendix H: Post-Session Interview Guide

1. Overall, did you like the program interface of this instructional program? Why?
2. Did the design of the program interface influence whether or not you felt satisfied with (or liked) this instructional experience? Explain.
3. Do you think that how an instructional program's interface is constructed has an impact on how well people like the program? Explain.
4. Do you think that how an instructional program's interface is constructed has an impact on how well people learn the material? Explain.
5. Do you prefer to have an idea of how much text is on a Web page before you start reading it? Explain.
6. How do you prefer to have instructional text presented to you on a Web page: in relatively small chunks or in longer passages? Why?
7. Do you find it easier to read, understand, and remember new material on a Web page if there is a limited amount of text on the page? Explain.
8. Do you think the amount of scrolling involved in an online instructional program has any effect on your satisfaction level regarding the instructional experience? If so, in what way? Explain.
9. Do you think the amount of scrolling involved in an online instructional program has any effect on how well you learn the material? Explain.
10. If you wanted to find some information in the program you had read previously, would you prefer to have to scroll back up a page to find it, or to click back


through the previous pages, where scrolling is not required to see each page's content? Explain.
11. Do you think having to scroll down a page to view more content and/or to get to some features of an instructional program distracts you from focusing on the material? If so, how much of a distraction is it? Explain.
12. Given the choice in an online instructional program, do you have a preference between having to scroll down each page to view more instructional information or having to click a button to move between pages where you can see all of a page's information at once? If so, why?


185 Ap pendix I: Web Skills Assessment Program (WSA) The basic flow of the WSA is both described and graphically depicted here, beginning with the opening screen (see Figure 11). Some instruction screens and feedback screens that were displayed to study participa nts are not included here. Figure 11 The Opening Screen for the WSA Program Essentially, the intention of the WSA was to gauge how familiar the participant was with the types of tasks and situations he or she would be encountering during the BW PP tutorial. Its original purpose was to filter out as potential study participants those whose lack of Web (and, by extension computer) knowledge and skills might confound the study results. However, it was never used in this way. See Chapter Three for mo re information regarding the role of the WSA program in the study. It should be noted that the WSA interface was primarily a full page design, with the exception pf page eight, which was intentionally designed to be a scrolling page. The


program was presented to members of both treatment groups as such. Also, while the WSA window covered the entire screen, the actual program interface was only 600 pixels wide by 450 pixels in height (again, except for page eight, whose length was intentionally exaggerated).

The first task was for the participant to enter some demographic information (see Figure 12). The primary reason for this was to see if the participant understood how to use these types of form elements to enter data on a Web page.

Figure 12. Task 1: Using Form Elements on a Web Page
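The form's actual markup appears only as a figure in this report; as a hedged sketch, a demographic form built from the standard HTML form elements of the period might have looked something like the following (the field names are illustrative only):

    <HTML>
    <HEAD>
    <TITLE>WSA Task 1 Sketch: Form Elements</TITLE>
    </HEAD>
    <BODY>
    <FORM>
      <!-- Text box for free-form entry -->
      Age: <INPUT TYPE="text" NAME="age" SIZE="3">
      <BR>
      <!-- Radio buttons: only one option can be selected -->
      Gender:
      <INPUT TYPE="radio" NAME="gender" VALUE="female"> Female
      <INPUT TYPE="radio" NAME="gender" VALUE="male"> Male
      <BR>
      <!-- Drop-down (select) list -->
      Class standing:
      <SELECT NAME="standing">
        <OPTION>Freshman</OPTION>
        <OPTION>Sophomore</OPTION>
        <OPTION>Junior</OPTION>
        <OPTION>Senior</OPTION>
      </SELECT>
    </FORM>
    </BODY>
    </HTML>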


The next task, shown in Figure 13, was intended to see if the participant could differentiate between certain form elements by name (at least with regard to radio buttons). The program recorded the number of tries it took the participant to make the correct selection, as well as the order of selections.

Figure 13. Task 2: Differentiation Between Form Elements by Name


The purpose of the next task (see Figure 14) was to gauge the participant's understanding of the function of certain form elements. Taken together, the first three tasks of the WSA involved the specific types of form elements that the participant would encounter during the BWPP tutorial.

Figure 14. Task 4: Differentiation Between Form Elements by Function


Figure 15 shows the number sequencing task, where the participant was instructed to click on all the graphic numerals (i.e., white numbers on black, circular backgrounds) in sequence as quickly, but as accurately, as possible. Once clicked, each graphic numeral disappeared, while the number clicked appeared in the text box at the bottom of the page in the order it was clicked. The program counted the number of seconds it took for the participant to click all 10 numerals.

Figure 15. Task 5: Number Sequencing
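The WSA's scripts are likewise not reproduced in the report; a rough sketch of the timing logic it describes, with plain buttons standing in for the graphic numerals, might look like this:

    <HTML>
    <HEAD>
    <TITLE>Number Sequencing Sketch</TITLE>
    <SCRIPT>
    // Rough sketch only: the elapsed time runs from the first click to
    // the tenth, and each clicked numeral vanishes while its number is
    // appended to the text box in click order.
    var startTime = null;
    var clicked = "";

    function numeralClicked(n, button) {
      if (startTime == null) startTime = new Date();
      button.style.visibility = "hidden";         // numeral disappears
      clicked += n + " ";                         // record click order
      document.forms[0].order.value = clicked;
      if (clicked.split(" ").length - 1 == 10) {  // all ten clicked
        var seconds = (new Date() - startTime) / 1000;
        alert("Elapsed: " + seconds + " seconds");
      }
    }
    </SCRIPT>
    </HEAD>
    <BODY>
    <FORM>
      <INPUT TYPE="button" VALUE="1" onclick="numeralClicked(1, this)">
      <INPUT TYPE="button" VALUE="2" onclick="numeralClicked(2, this)">
      <!-- ... buttons 3 through 10 ... -->
      <INPUT TYPE="text" NAME="order" SIZE="30">
    </FORM>
    </BODY>
    </HTML>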


When all numbers had been clicked, the participant was automatically taken to the next page (Figure 16), which consisted of a single link to be clicked in order to see how well he or she did on the task.

Figure 16. Link for Displaying Results of Number Sequencing Task


The intention behind this task was to gauge the participant's skill in using the mouse. While a scoring rubric was never developed or tested for this exercise, test runs by six different individuals of varying levels of familiarity with computers suggested that a person with functional computer skills should be able to click on all the numbers within roughly 10 to 17 seconds. Of course, this was only the most cursory of tests and lacked any credible validity or reliability measures.

When the participant clicked on the link to see how well he or she did on the number sequencing task, the WSA window automatically advanced to the next page (page 7). However, before the participant had a chance to see page 7, a new window opened on top of the WSA window. The new window displayed the results of the participant's number sequencing task and instructed the participant to get back to the WSA window (see Figure 17).

Figure 17. Number Sequencing Task Results Page


Because windows obscuring other windows is a common experience when working on the Web, and because the phenomenon might easily occur during the BWPP tutorial, the idea behind this task was to see if the participant knew how to both recognize and successfully maneuver within such a situation. Since the new window completely covered the WSA window, the participant had at least three ways of getting back to the WSA window without closing the new window: (1) minimizing the new window, (2) using the Alt + Tab keyboard combination, or (3) clicking the WSA window button located in the taskbar. If the participant successfully navigated back to the WSA window without having to close the new window, he or she saw page 7 of the WSA program displayed as in Figure 18.

Figure 18. Page 7 Content If New Window Was Not Closed


On the other hand, if the participant was not familiar with any of the methods mentioned above or otherwise had no idea how to bring the WSA window to the top, he or she was instructed to click the link in the message. Clicking this link closed the message window, thus revealing page 7 of the WSA program. However, if the participant clicked the link, page 7 displayed a different message (Figure 19). The program recorded whether or not this link was clicked.

Figure 19. Page 7 Content If New Window Was Closed


If the participant was able to get back to the WSA page 7 window without closing the new window, he or she was given the task of going back and closing that new window (see Figure 18 again). This could be accomplished using any of the techniques mentioned above. If the participant did not know how to do this, he or she was instructed to click on the link indicated, which closed the new window automatically. The program also recorded whether or not this link was clicked.

The next page in the WSA was a long page that scrolled off the bottom of the screen (see Figure 20). At the top of this page was a pretense for the participant having to return to the previous page, along with instructions for the participant to scroll to the bottom and click on the Previous button located there.

Figure 20. Task 7: Scrolling to the Previous Button


The intent of this task was simply to see if the participant understood the concept of scrolling. The program recorded whether the Previous button was clicked, the assumption being that the participant had performed some type of scrolling in order to reach the button. Once back on page 7, the participant was told that the (imaginary) task had been completed after all, and to click the Next button to continue. When page 8 displayed again, it was no longer a long, scrollable page, but conformed to the normal interface dimensions (see Figure 21). On this page, the participant was told that, next, he or she would be asked a couple of questions pertaining to HTML.

Figure 21. The New Page 8


The next two pages in the WSA were questions related to the participant's level of familiarity with HTML. The first asked about the participant's prior awareness of HTML (Figure 22) and the second about his or her level of experience using HTML (Figure 23).

Figure 22. Question Regarding Prior Awareness of HTML


Figure 23. Question Regarding Experience Using HTML

The last page of the WSA simply thanked the participant for their cooperation and informed them that when they clicked the Next button, they would be taken to the BWPP tutorial.


Appendix J: Description of the Basic Web Page Programming Tutorial

This appendix provides a description of the structure, organization, and content of the tutorial. Please note that a full-page and a partial-page version of the BWPP tutorial were developed for this study, with both versions being identical in every way except for the amount of content contained on a page. All images of the tutorial in this appendix come from the full-page version, as its pages were more economical in terms of space.

The Dimensions and Layout of the BWPP Tutorial Interface

Figure 24 shows the tutorial's title screen. The dimensions of the program interface for the full-page version were 600 pixels wide by 450 pixels high, and neither vertical nor horizontal scrolling was required. While the partial-page version was the same width, its pages varied in length, although none of its pages were less than 450 pixels high. While horizontal scrolling was not present in this version either, vertical scrolling was required for the vast majority of tutorial pages.


Figure 24. Title Screen for the BWPP Tutorial

The layout of the program interface, keyed for identification of the interface elements, is depicted in Figure 25. Element 1 is simply the title bar of the program. Element 2 is the section header, which contained the number and title of the section a participant was currently in (in this case, Section 3: Logical and Physical Tags). The informational and instructional content of the tutorial was displayed in element 3, the content area. Element 4 is the navigation bar, which was the primary means for getting around in the tutorial. It consisted of two or three buttons, depending on the type and purpose of the page. Most pages provided three buttons (Restart, Previous, and Next), but some pages provided only the Restart and Previous buttons (e.g., the tutorial's Main


Menu). The copyright statement is located in element 5. Element 6 is the page counter, which informed the participant what page number he or she was on in relation to the total number of pages in the section. Element 7 is the menu bar, consisting of four buttons that each provided access to a feature of the tutorial (Main Menu, Help, Resources, Glossary), as well as a Quit button for exiting the tutorial. Finally, element 8 is the Send Email button, which could be used to email the study's principal investigator, but which also served a clandestine purpose during the tutorial. This will be discussed later.

Figure 25. Layout of the Tutorial Interface

The Structure and Instructional Content of the BWPP Tutorial

As discussed in Chapter Three, the Basic Web Page Programming (BWPP) tutorial was adapted from Dr. Tina Majchrzak's WBT, Internet Programming (IP). The


IP content was abridged in order to fit the design and time frame of this study, to the effect that only five of the IP's 15 content sections made it into the final instantiation of the BWPP. The tutorial was structured as a linear WBT, requiring participants to complete one section before moving on to the next. It was prefaced with a welcome and orientation segment, followed by five content sections, a review section, and a final exam. The tutorial's content was organized as follows:

1. Welcome
2. Orientation (optional)
3. Section 1: Introduction to HTML
4. Section 2: The HTML Document Structure
5. Section 3: Logical and Physical Tags
6. Section 4: Lists
7. Section 5: Images
8. Section 6: Review
9. Section 7: Final Exam

Welcome and Orientation

The Welcome segment welcomed the participant and served as the program introduction. It provided a few informational pages regarding the tutorial's origin, its purpose, and its organization and structure. It also segued into the optional Orientation


segment (see Figure 26), which overviewed several functional and operational features of the program, including the layout of the program interface, how to navigate within the tutorial, the primary and supplemental features of the tutorial, the final exam, and conventions used in the program (e.g., glossary words, static and interactive examples). While strongly encouraged to complete the Orientation segment, participants could choose to skip it. Not only could participants come back to it at any time, but all the information in the Orientation segment was also available in the program's Help feature.

Figure 26. Segue From Welcome to Optional Orientation Segment

Section 1: Introduction to HTML

In the first content section of the tutorial, the participant was given a brief overview of what HTML is and how it is used to create Web documents. Several HTML tags and tag attributes (e.g., the FONT tag and its Size attribute) were introduced and


demonstrated through static and/or dynamic examples as a way of orienting the participant to HTML as a tag language and to how elements of an HTML document are expressed. (Static, dynamic, and interactive examples are discussed later in this appendix.) Particular focus was given to the syntax by which these tags and their attributes are expressed. Figure 27 provides a sample page from Section 1.

Figure 27. Sample Page From Section 1: Introduction to HTML

Section 2: The HTML Document Structure

In this section, the participant was introduced to the structure of a basic HTML document. The structure was parsed out into its main elements (e.g., head, title, body, links, etc.), with each being discussed and demonstrated in examples. More tags and attributes were introduced for formatting text and links.
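The tutorial's example pages appear only as figures in this report; as a hedged sketch of the kind of basic document structure Sections 1 and 2 walked participants through, a minimal HTML document of the period might look like this (content is illustrative only):

    <HTML>
    <HEAD>
      <TITLE>My First Web Page</TITLE>
    </HEAD>
    <BODY>
      <!-- Body text, with a physical tag and a FONT Size attribute -->
      <B>Welcome!</B>
      <FONT SIZE="4">This text is slightly larger.</FONT>
      <!-- A link to another page -->
      <A HREF="page2.htm">Go to page 2</A>
    </BODY>
    </HTML>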


Whereas Section 1 dealt with tags in isolation from the HTML document, here tags were discussed in relation to the main elements of an HTML document. Participants were taken step by step through the creation of a basic HTML document. In addition to static and dynamic examples, interactive examples were employed so that participants could begin to actually manipulate the attributes of certain HTML elements.

Section 3: Logical and Physical Tags

Here, participants were introduced to the concepts of logical and physical tags. The ramifications of employing each category of tags were impressed upon them through examples.

Section 4: Lists

In this brief section, participants were introduced to both ordered and unordered lists. The tags and attributes for defining and customizing both types were demonstrated by examples.
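Again as an illustrative sketch rather than one of the tutorial's own examples, both list types rest on the UL, OL, and LI tags that also figure in the final exam items (see Appendix K):

    <!-- Unordered (bulleted) list -->
    <UL>
      <LI>First bullet</LI>
      <LI>Second bullet</LI>
    </UL>

    <!-- Ordered (numbered) list; TYPE customizes the numbering style -->
    <OL TYPE="A">
      <LI>Item A</LI>
      <LI>Item B</LI>
    </OL>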


Section 5: Images

This was the last and longest content section. It was here that participants were instructed in how to include graphic images in an HTML document. Participants were first shown how to place a simple, static image into the document and introduced to some of the image tag's attributes. Through interactive examples, they were also shown how to manipulate these attributes to alter how an image is displayed in a browser. Next, participants were shown how to make an image interactive by mapping clickable areas to the image file using the image coordinate system. Figure 28 shows a page from Section 5.

Figure 28. Sample Page from Section 5: Images
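The following markup is a hedged sketch of the techniques this section covered, not a reproduction of the tutorial's examples: an IMG tag with a few common attributes, followed by a client-side image map that defines clickable rectangular areas by their pixel coordinates (file names are illustrative):

    <!-- A static image; BORDER="0" removes the box around a linked image -->
    <IMG SRC="logo.gif" WIDTH="120" HEIGHT="60" BORDER="0" ALT="Logo">

    <!-- An interactive image: clickable areas mapped onto the image -->
    <IMG SRC="menu.gif" USEMAP="#menumap" BORDER="0" ALT="Menu">
    <MAP NAME="menumap">
      <!-- COORDS gives left, top, right, bottom in the image's pixel grid -->
      <AREA SHAPE="rect" COORDS="0,0,60,30" HREF="page1.htm">
      <AREA SHAPE="rect" COORDS="60,0,120,30" HREF="page2.htm">
    </MAP>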


Section 6: Review

This section was simply a condensed review of the previous five content sections. No images or examples were included.

Section 7: Final Exam

The final exam consisted of 18 multiple-choice questions, each with four possible answers. The questions derived directly from the tutorial's five content sections. The exam questions and answers can be found in Appendix K, but Figure 29 is a sample of an exam question page.

Figure 29. Sample Exam Question Page

A score of 78% (14 out of 18 answered correctly) was considered passing. However, before receiving their exam results, participants were required to complete the 10-item Learner Satisfaction Survey. The Satisfaction Survey was discussed in Chapter Three, and the survey items can be found in Appendix M. After completing the Satisfaction Survey, participants were given their exam results (see Figure 30). They were given their score and told which questions they answered correctly and which they answered incorrectly. A link was provided if they wished to see the correct answers to the questions they missed.


Figure 30. Exam Results Page

Participants were thanked for their participation and instructed to quit the program. Those participants who had been randomly selected for the post-session interview were also reminded of that.

Main Menu

The Main Menu was accessed through its button, located in the menu bar at the bottom right of the tutorial interface. It shared the same interface as the rest of the tutorial and listed all sections of the tutorial, including the Welcome and Orientation segments. Figure 31 shows the Main Menu of the BWPP tutorial. A checkbox preceded each section, but only those sections that had been completed were checked. Those sections already completed and the next in line for completion were accessible (a section did not become accessible until the previous section was completed). Participants


could review any section they had already completed as many times as they wished. Thus, the Main Menu served both an informational and a navigational purpose, providing a means for participants to keep track of their progress in the tutorial, as well as a means of navigating among the sections of the program they either had already completed or were next in line to complete.

Figure 31. The Main Menu

Additional Features of the Program

The BWPP also offered several other features: Help, Glossary, Resources, and Send Email. The Help, Glossary, and Resources buttons were all located in the menu bar at the bottom right of the tutorial interface, and the Send Email button was located below the menu bar. Clicking the button for any of these features displayed that feature in its own window.


The Help button provided explanations and/or tips on a number of topics, such as navigating within the program, instructions for completing the program, and program features. All the information in the optional Orientation could also be found there. Figure 32 shows the Help window.

Figure 32. The Help Window

Clicking on the Glossary button opened a glossary of terms found in the BWPP tutorial. All terms found in the glossary were also in the body of the tutorial, appearing in bold, blue, and underlined. Clicking on these hot words opened up the program's Glossary to that specific term.
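The report does not show how the hot words were implemented; one plausible sketch, assuming the glossary lived in its own page with named anchors, is a link that opens the Glossary window directly at the term (file and anchor names are hypothetical):

    <!-- In the tutorial body: a "hot word" styled bold, blue, underlined -->
    <A HREF="glossary.htm#attribute" TARGET="glossaryWindow">
      <B><FONT COLOR="blue"><U>attribute</U></FONT></B>
    </A>

    <!-- In glossary.htm: a named anchor marking the term's entry -->
    <A NAME="attribute"></A>
    <B>attribute</B>: a property that modifies how a tag behaves.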


Figure 33 shows the Glossary window.

Figure 33. The Glossary Window

The Resources button provided access to other resources related to the topics in this program. Specifically, it provided extended information on HTML tags and Web character entities. Figure 34 shows the Resources window.


Figure 34. The Resources Window

The Send Email button brought up a short form in a new window. The form allowed participants to send a question, comment, or suggestion to the study's principal investigator from any page in the program. However, as mentioned earlier, the Send Email operation was also used for a more clandestine purpose, which will be discussed later in this appendix. Figure 35 shows the Send Email window.


Figure 35. The Send Email Window

Static, Dynamic, and Interactive Examples

The BWPP tutorial provided frequent examples of three types throughout the content sections: static examples, dynamic examples, and interactive examples. The type of example employed for a particular concept or topic depended on the nature of that concept/topic and how much screen real estate the example needed. While all static and some dynamic examples displayed entirely within the tutorial's content area, some dynamic and all interactive examples opened a new window that displayed the results of the example code. Static examples, like the one depicted in Figure 36, were not dynamic or interactive in any way. They illustrated a point via simple text or graphics and required no action by the participant.


Figure 36. Sample of a Static Example

Dynamic examples illustrated a point in a two-part fashion. First, when the page loaded, it displayed the example's code view; that is, how the particular HTML element being discussed was written as source code. Participants would click the code view's "Let's See It" button to display the results view, which showed how the code would display in a browser. Sometimes the dynamic examples were constructed to display both the code view and the results view entirely within the tutorial's content area. Figures 37 and 38 depict this type of dynamic example, with Figure 37 showing the code view and Figure 38 showing the results view. Clicking the "Let's See It" button in the code view toggled to the results view, and clicking the "View Code" button in the results view toggled back to the code view.


Figure 37. Dynamic Example: Code View

Figure 38. Dynamic Example: Results View in Content Area


Some dynamic examples, however, did not display the result code in the tutorial's content area, but rather displayed the results view in its own window (see Figure 39).

Figure 39. Dynamic Example: Results View in New Window

Interactive examples employed text areas for the code view, allowing participants to change the code information (e.g., size attribute values). When the "Let's See It" button was clicked, the results view was displayed in a new window. If the participant had changed any of the information in the code view, the results view reflected the changes made by the participant. Figures 40 and 41 depict an interactive example, with Figure 40 showing the code view and Figure 41 showing the results view.


Figure 40. Interactive Example: Code View

Figure 41. Interactive Example: Results View
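The scripts behind these examples are not reproduced in the dissertation; the sketch below shows the general technique the description implies, rendering whatever the participant typed into the code-view text area in a fresh window (all names are illustrative):

    <HTML>
    <HEAD>
    <TITLE>Interactive Example Sketch</TITLE>
    <SCRIPT>
    // Sketch of the code-view / results-view pattern: the textarea holds
    // editable HTML, and clicking the button renders its current contents
    // in a new window via document.write.
    function letsSeeIt() {
      var code = document.forms[0].codeview.value;
      var results = window.open("", "resultsView", "width=400,height=300");
      results.document.open();
      results.document.write(code);
      results.document.close();
    }
    </SCRIPT>
    </HEAD>
    <BODY>
    <FORM>
      <TEXTAREA NAME="codeview" ROWS="4" COLS="50">
    <FONT SIZE="5">Try changing the size attribute!</FONT>
      </TEXTAREA>
      <INPUT TYPE="button" VALUE="Let's See It" onclick="letsSeeIt()">
    </FORM>
    </BODY>
    </HTML>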


Most of the interactive examples provided some level of help. Participants could access this help by clicking the "Click here for help" link that was located somewhere on the page (see Figure 42). When this link was clicked, the contents of the example box essentially did what the participant was being instructed to do (see Figure 43). Clicking on the "Hide Help" link toggled the code view back to its initial state.

Figure 42. Interactive Example: Help Link

Figure 43. Interactive Example: Help View

Intentional Program Errors

During the design process of the BWPP tutorial, there was a concern that participants might not take the opportunity to engage in any of these activities on their

PAGE 233

218 Appendix J: (Continued) own, resulting in a very limited experience of the tutorial interface. The decision was made, therefore, to induce participants to engage in s ome of these tasks by introducing a limited number of artificial program errors into the tutorial. Of course there was also the concern that such errors would artificially skew participants satisfaction level. It was decided the risk of negatively impacti ng participants satisfaction level was outweighed by the potential benefits of participants having a fuller experience with the program interface. In the end, four such errors were embedded in the tutorial, one each in content se ctions 1, 3, and 5, as well as in the Final Exam instructions. When a participant landed on a page containing one of these errors, an error message and instructions for correcting the error were displayed (see Figure 44 for one of these error messages). Af ter following the instructions, the error would be corrected and the participant would be able to c ontinue with the tutorial. O nce corrected, the error would never reappear again no matter how many times the participant viewed that page (e.g., during a r eview of a section). The error introduced in Section 1 instructed the participant to click the Previous button at the bottom of the screen, then when he or she was on the previous page, to click the Next button again. The section 3 error had the parti cipant click the Restart button at the bottom of the screen, then when the restart options appeared, the participant was to click the Restart Section 3: Logical and Physical Tags option. The error in Section 5 told the participant that an image on the page could not be found and instructed him or her to notify system administrator by clicking on the Send Email button at the bottom

PAGE 234

219 Appendix J: (Continued) of the page. The last error message, displayed during the instructions for the final exam, told the participant to click the Main Menu button at the bottom of the screen, then when the Main Menu appeared, to click the Section 7: Final Exam option. These fou r errors were designed to force the participant to make use of the Previous, Restart, Send Email, and Main Menu buttons, respectively, at least once during the tutorial. Figure 44 Example of an Artif icially Introduced Program Error
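The dissertation does not describe the mechanism that kept a corrected error from reappearing; a minimal client-side sketch, assuming a cookie is used as the "already shown" flag (all names here are hypothetical), might look like this:

  <script>
    // Sketch: display the Section 1 error only on the participant's first visit.
    var errorId = "section1Error";
    if (document.cookie.indexOf(errorId + "=shown") === -1) {
      // Record the flag so the error never reappears on later visits to this page.
      document.cookie = errorId + "=shown; path=/";
      alert("Error: please click the Previous button at the bottom of the screen, " +
            "then click the Next button again.");
    }
  </script>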


Appendix K: The Basic Web Page Programming Exam Items

Note: asterisks indicate the correct response.

1. Which tag is used to create a bulleted list?
A. UL
B. OL
C. LI
D. BI

2. Which tag causes the browser to display a bullet or number (depending on the kind of list in which it is used)?
A. OL
B. UL
C. LI
D. TYPE

3. Which tag allows you to specify either an exact or a relative size for text?
A. SMALL
B. FONT
C. BIG
D. REL

4. What is the minimum number of opening LI tags required for a list with 3 bullets?
A. 1
B. 2
C. 3
D. 4


5. Which tag is used to mark text as bold?
A. B
B. D
C. BOLD
D. DARK

6. Which tag must all browsers render the same?
A. STRONG
B. EM
C. I
D. KBD

7. Different browsers may render which of the following tags as they see fit?
A. I
B. IT
C. EM
D. U

8. In general, what will a browser do with a tag it does not recognize?
A. report an error
B. ignore it
C. replace it with a close match
D. fix it


9. Which attribute of the image tag must be set to 0 (zero) to disable the box that appears around a clickable image?
A. BORDER
B. BOX
C. SUBJECT
D. ALT

10. Given the tag specification < /I>, which of the following would be valid ways to use this tag?
A. more than one of the following
B. text
C. text
D. text

11. Which heading level tag will be displayed most prominently?
A. HR
B. H0
C. H2
D. H6

12. Which attribute is used to change the look of a bullet?
A. VALUE
B. TYPE
C. LOOK
D. NAME


13. Which attribute of the image tag is used to specify what nongraphical browsers will see and what graphical browsers see while waiting for the image to download?
A. BORDER
B. BOX
C. SUBJECT
D. ALT

14. Given the start tag , what should the end tag look like?
A. more than one of the following
B.
C.
D.

15. In the image coordinate system, where is the origin (0,0) for the image?
A. center
B. top, left
C. top, right
D. bottom, left

16. What is the minimum number of opening UL tags required for a list with 3 bullets?
A. 1
B. 2
C. 3
D. 4


17. On a page that includes an image with text following it, the text that follows may or may not appear to download at a different rate of speed when the width and height of the image are specified. Will that rate be faster, slower, the same, or will it depend on the size of the image?
A. faster
B. slower
C. same
D. depends on image size

18. Which tag is used to create a numbered list?
A. LI
B. LN
C. NL
D. OL


Appendix L: Dr. Tina Majchrzak's Approval of the BWPP Tutorial

[Note: Dr. Majchrzak's October 10, 2004 email began with a final list of editorial and design comments not germane to her final assessment of the Basic Web Page Programming courseware. She closed her email with the following assessment of the BWPP tutorial.]

My Opinion on the Courseware and Exam

Dear Philip,

I did not compare your adaptation with my content, side by side. However, I carefully read through all of your material and found it to reflect well the information I covered in my courseware, with the exception of the sections on the Internet, Development/Design, Frames, and some information that would have been gleaned by the students when completing the assignments. I agree with the items you chose to eliminate from the posttest. I would add that questions 2 (refers to frames) and 11 (refers to information learned when completing the table assignment) should also be eliminated.

I feel that the course is reasonable as you have rendered it. The sections and questions eliminated are reasonable ways to reduce the length of the courseware for the purposes of your study. The content is viable and of interest in its reduced state. I would recommend that you check the Cronbach's alpha based on my data for the reduced question set represented in your exam in order to estimate the possible reliability of your instrument and to make sure it is high enough for your purposes.

I find your adaptation to be of the highest quality.

Happy Data Collecting,

Tina L. Majchrzak, Ph.D.
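For reference, the statistic Dr. Majchrzak recommends is the standard internal-consistency estimate; for a k-item exam it is

  \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_{Y_i}^{2}}{\sigma_{X}^{2}}\right)

where \sigma_{Y_i}^{2} is the variance of scores on item i and \sigma_{X}^{2} is the variance of total exam scores.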


Appendix M: Learner Satisfaction Survey Questions

The following are the satisfaction survey questions that were presented to all study participants immediately following the submission of their individual final exam answers for scoring, but before they received their score. The response to each survey item was on a five-point Likert scale, where 1 was strongly disagree and 5 was strongly agree:

1 = Strongly Disagree; 2 = Disagree; 3 = Neutral; 4 = Agree; 5 = Strongly Agree

1. I liked the way the program was designed.
2. The program was easy to navigate.
3. Working with the program was satisfying.
4. All features of the program were easily accessible.
5. The program design was efficient.
6. The program design was pleasing.
7. The program design was user friendly.
8. The program design was effective.
9. The program design was intuitive.
10. The program design was easy to work with.


Appendix N: Web Form for Entering Post-Session Interview Data Into Database

Figure 45 shows part of the Web form for entering the post-session interview data into the database. For each question, the radio button for the participant's discrete response to that question is selected. The transcription of the interview interaction between the interviewer and study participant is entered in the pop-up Transcription Window, which is opened by clicking the Transcribe link for that particular question.

Figure 45. Web Form for Entering Post-Session Interview Data in Database
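The data-entry application itself is not reproduced in the dissertation; a minimal sketch of the kind of form described, with hypothetical field names, URLs, and pop-up page, is:

  <!-- Sketch of one question's entry controls; all names and URLs are hypothetical. -->
  <form method="post" action="saveInterview.cfm">
    <p>Question 1: Overall, did you like the program interface?</p>
    <label><input type="radio" name="q1" value="yes"> Yes</label>
    <label><input type="radio" name="q1" value="no"> No</label>
    <a href="#" onclick="window.open('transcribe.cfm?q=1', 'transcription',
        'width=500,height=400'); return false;">Transcribe</a>
  </form>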


Appendix O: Sample Transcript of a Post-Session Interview

The following is a transcript from a post-session interview conducted on May 5, 2005 with participant number 311425. Please note that I refers to the questions and comments of the interviewer (also italicized), and P refers to the responses of the study participant being interviewed. Also, ellipses within the text indicate unfinished statements.

I: [Question 1] Overall, did you like the program interface of this instructional program?

P: I did. It was really easy to navigate. You know, it helped me out.

I: So, it was easy to function within?

P: Right.

I: [Question 2] Did the design of the program interface influence whether or not you felt satisfied with (or liked) this instructional experience?

P: Yeah, I think it did in a way because if it was hard for me to navigate thru it, it would have taken me more time to kind of figure out what exactly I needed to do to get to the next page or what I needed to do to, you know, finish the section or things like that. So I think it did, you know, help me a little bit.

I: If any aspect would have been aversive, would it have had an effect?

P: Yeah. I think so.

I: [Question 3] Do you think that how an instructional program's interface is constructed has an impact on how well people like the program?


P: I think it does because that's partly also what grabs their attention and keeps them, you know, motivated, interested in the program itself. So I think it does affect how they feel about it.

I: [Question 4] Do you think that how an instructional program's interface is constructed has an impact on how well people learn the material?

P: No, I don't think so. How well they learn it? No. Because the navigation has nothing to do with the actual topic or whatever you're reading. The information is still going to be there. Whether you get to it or not, you're still going to have the opportunity to learn. Navigating thru it is just kind of keeping yourself there and being able to get there.

I: So you see the two as being distinct?

P: Right.

I: So learning can exist outside how that learning is facilitated?

P: Right.

I: [Question 5] Do you prefer to have an idea of how much text is on a Web page at the start before you start reading it?

P: Yeah. I think too much text will kind of lead the reader off in a way that you kinda get, you know, it's too much text, you're reading your eyes. It's a computer, so you're looking at a screen already as it is. I wouldn't put that much text on a page.

I: Why? Do you think it's harder to read on a computer screen?

P: It's not harder to read, but it just gets kinda... you're looking at words on a computer screen, it gets kinda tiring after a while just looking at the words.


I: Do you get as tired reading as much text in a book?

P: I think a book is more tiring than reading it on a computer.

I: So if you see a page that you realize scrolls off the page, you prefer to gauge how much you're going to have to put into this?

P: Yeah. To see how much...

I: [Question 6] How do you prefer to have instructional text presented to you on a Web page: in relatively small chunks or in longer passages?

P: Small chunks. For the same reason as on the previous question; too much text on a page will kinda just bore me or I wouldn't really be interested, you know. Getting small chunks, I also learn it a lot easier than taking it all at once. Little by little, I'll learn it a lot better.

I: [Question 7] Do you find it easier to read, understand, and remember new material on a Web page if there is a limited amount of text on the page?

P: Yeah, you know, the same thing. If I take it smaller, taking than more at a time, then I know that I'll actually comprehend it, learn it, than actually just reading it, not knowing what I'm reading.

I: Do you get a better sense of progress with smaller chunks? Do you get a sense of accomplishment by having finished three smaller paragraphs as opposed to one longer paragraph?

P: Oh yeah. I think that definitely that way because you've finished one and in your mind you understand that there's two more to accomplish, so you've already accomplished one thing. But by knowing that you have one whole section to complete, then you don't have anything... there's no progression there. You've just completed one section.

I: Do you think it may be easier to find primary points in smaller paragraphs?

P: No, I think it's easier knowing the primary point of the paragraph. Like I said, that way you can comprehend the information and actually learn it than just trying to find the topics or trying to find the points.

I: And it's easier to do that in smaller chunks?

P: Yeah.

I: [Question 8] Do you think the amount of scrolling involved in an online instructional program has any effect on your satisfaction level regarding the instructional experience?

P: No, I don't think the scrolling had anything... no effect.

I: So, is that because you're used to scrolling?

P: Yeah.

I: [Question 9] Do you think the amount of scrolling involved in an online instructional program has any effect on how well you learn the material?

P: No, I don't think so at all.

I: [Question 10] If you wanted to find some information in the program you had read previously, would you prefer to have to scroll back up a page to find it, or to click back through the previous pages where scrolling is not required to see the pages' content?


P: No, I think it's easier to just scroll up and see the information than having to go back and having to, you know, regress (is that what it is?) to go back, and so I think it's easier scrolling up.

I: So you have no problem orienting yourself to where that previous information was, scrolling up?

P: No, not at all.

I: [Question 11] Do you think having to scroll down a page to view more content and/or to get to some features of an instructional program distracts you from focusing on the material?

P: No, I don't think it did at all. No, it didn't distract me at all.

I: [Question 12] Given the choice in an online instructional program, do you have a preference between having to scroll down each page to view more instructional information or having to click a button to move between pages where you can see all of the page's information at once?

P: I think it'd be better to actually click, that way you could see the whole information on the page, rather than actually scrolling down and seeing that information because it puts less information on one page. That way, like I said, too much information pushes the reader away. I think that having them on separate pages gives the reader the option to look at it or not if he or she wants to. If not, it's on the page; they have to look at it, you know.

I: So do you think it's easier to digest if you've got that little amount of information?


P: Right. Yeah.

I: In a scrolling version, do you think that it's a temptation just to scroll down... you see a lot of text on a page, do you think it's a temptation just to scroll past some of this stuff? Are you more likely to read all the information if it's in smaller chunks, like where it's kind of like a book (you see the entire thing), or if the text is scrolling off the page?

P: Yeah, I think with the scroll, you get tempted to just, like, scroll and you skip thru it and not really read it. But if it's there and it's set and you cannot scroll, then I think that you would actually read all of it and not miss anything.

I: Do you agree with this statement: that if it's in smaller chunks and you see all of the page's content, it's more acceptable, in terms of "I can accept having to expend effort to read this," as opposed to "Good God, look at what's all down here. I don't want to read all this stuff"?

P: Yeah, I do agree with that. I did, it is.

I: Alright. Do you have any general comments about the interface or any part of this study? [The participant asked what was going to be done with the study once it was complete, but I redirected him back to the last question.]

P: No.


About the Author

Phillip E. Grace has 23 years of experience in instructional design, training, and training event coordination at the University of South Florida's (USF) Louis de la Parte Florida Mental Health Institute in Tampa. His last job title at the Institute was Coordinator of Education and Training Programs. Phillip has developed live, Web-based, and CD-ROM instruction. From 1999 to 2005, he developed a ColdFusion-based Web learning site for mental health care providers that served 6,000 individual and group contract customers. For this Web learning initiative he designed, programmed, evaluated, managed, and provided technical support for a number of fee-based and free instructional programs covering a variety of mental health related topics. Phillip is scheduled to receive his doctorate in Instructional Technology at USF in spring 2006. He also holds a B.A. in anthropology and an M.A. in applied medical anthropology, both from USF. Phillip now resides in Chapel Hill, North Carolina.

