USF Libraries
USF Digital Collections

Measuring transactional distance of online courses


Material Information

Title:
Measuring transactional distance of online courses: the structure component
Physical Description:
Book
Language:
English
Creator:
Sandoe, Cheryl
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla.
Publication Date:

Subjects

Subjects / Keywords:
Online learning
Distance education
Course design
Web learning
Course structure
Transactional distance
Dissertations, Academic -- Secondary Education -- Doctoral -- USF   ( lcsh )
Genre:
government publication (state, provincial, territorial, dependent)   ( marcgt )
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Abstract:
ABSTRACT: Online or web-based courses have become prolific in our educational environment over the past several years. The development of these courses can be guided by systematic design models to ensure quality instructional design. Transactional distance, the theory that claims the distance an online student feels is more of a pedagogical distance than a geographic one, consists of three factors: structure, dialogue, and learner autonomy. Accurate measurement of these three factors is needed in order to substantiate the theory's claims and to best determine the delivery implications. This study produced an instrument that measures the structure component of the transactional distance theory as it pertains to the online environment. A total of 20 online courses were evaluated using the Structure Component Evaluation Tool (SCET). Experts in the field validated the instrument, and reliability was determined by calculating Cronbach's alpha as well as examining inter-rater reliability.
Thesis:
Thesis (Ph.D.)--University of South Florida, 2005.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Cheryl Sandoe.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 140 pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001670342
oclc - 62285624
usfldc doi - E14-SFE0001204
usfldc handle - e14.1204
System ID:
SFS0025525:00001


This item is only available as the following downloads:


Full Text

PAGE 1

Measuring Transactional Distance in Online Courses: The Structure Component

by

Cheryl Sandoe

A dissertation submitted in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
Department of Secondary Education
College of Education
University of South Florida

Co-Major Professor: James White, Ph.D.
Co-Major Professor: William Kealy, Ph.D.
Sherman Dorn, Ph.D.
Jeffrey Kromrey, Ph.D.

Date of Approval: May 16, 2005

Keywords: Online Learning, Distance Education, Course Design, Web Learning, Course Structure, Transactional Distance

Copyright 2005, Cheryl N. Sandoe

PAGE 2

Table of Contents

List of Tables iv
Abstract v
Chapter One 1
    Statement of the Problem 2
    Purpose of the Study 3
    Research Questions 4
    Definitions 6
    Delimitations 7
    Limitations 8
Chapter Two 9
    Introduction 9
    Transactional Distance Theory 9
    The Course Design Process 13
    The Course Design Process and Structure 16
    Importance of Structure in Course Design 19
    Instructional Elements of an Online Course 22
    Instrument Need 27
    Course Structure Categories and Sub-categories 32
        Content Organization 33
            Overall 34
            Syllabus 34
            Sequencing 35
            Course Schedule 35
        Delivery Organization 36
            Overall 36
            Consistency 36
            Flexibility 36
        Course Interactions Organization 37
            Student to Instructor 37
            Student to Student 38
            Student to Interface 38
    Chapter Summary 39
Chapter Three 42
    Pilot Study 44

PAGE 3

    Field Study 45
    Subject Matter Experts 45
    Qualifications 46
    Courses 47
    Procedure 49
    Survey and Instrument Development 49
    Instrument Review 49
        Bischoff 49
        Chen 50
        Huang 50
        Ingram 51
    Literature Review and Survey Development 52
    Item Development 52
    Expert Evaluation 53
    Pilot Testing 54
    Field Testing 55
    Statistical Analysis 56
    Validity of the Instrument 56
        Construct Validity 57
        Translation Validity 57
        Face Validity 57
        Content Validity 57
        Criterion-related Validation 58
        Convergent Validity 58
        Discriminant Validity 59
    Reliability of the Instrument 59
        Internal Consistency 60
        Inter-rater Reliability 60
    Chapter Summary 62
Chapter Four 63
    Item Development 63
    Item and Category Sort 64
    Sorting Results 64
    Item Rating: Clarity and Quality 65
    Pilot Study 66
    Statistical Analysis 68
    Content-related validation 68
    Estimates of Reliability 69
    Field Study 72
    Item Analysis of Field Test Results 75

PAGE 4

    Estimates of Reliability 75
    Overall Scores 79
    Discriminant Validity 83
    Chapter Summary 84
Chapter Five 85
    Discussion 85
    Instrument Development and Evidence of Validity 86
    Establishing Reliability 93
    Internal and Inter-rater Reliability 93
    Issues and Recommendations 94
    Course Placement Issue 94
    Usability Issues 94
    Recommendations 95
    Instrument Refinement 95
    Potential Uses 96
    Future Research 97
    Conclusion 97
References 98
Appendices 104
    Appendix A: Sample Email for Course Access 105
    Appendix B: Sample Email for Recruitment of Experts 106
    Appendix C: Sample Email to Subject Matter Expert for Sorting Exercise 107
    Appendix D: Dimensions of Structure Measurement Tool 108
    Appendix E: Process Overview 111
    Appendix F: Copy of Email from USF's IRB 113
    Appendix G: Sorting Descriptors 114
    Appendix H: Questions from Ingram's Instrument 120
    Appendix I: Questions from Huang's Instrument 121
    Appendix J: Questions from Chen's Instrument 122
    Appendix K: Questions from Bischoff's Instrument 123
    Appendix L: Quality and Clarity Sorting Document 124
    Appendix M: Item Rating Results 128
    Appendix N: Final Working Instrument 130
About the Author End Page

PAGE 5

LIST OF TABLES

Table 1. Pilot Study Statistics: Correlation Coefficients and Coefficients Alpha and Kappa by Category. 70
Table 2. Course Listing by Rater. 74
Table 3. Correlation Coefficients & Coefficients Alpha & Kappa by Category: R1 X R2. 77
Table 4. Correlation Coefficients & Coefficients Alpha & Kappa by Category: R1 X R2. 78
Table 5. Raters' Total Score for Each Course. 80
Table 6. Percent Scores for Each Course. 82
Table 7. Correlation of Sub-Categories with Ingram's Instrument. 86
Table 8. Effect Sizes of All Categories and with Ingram's Instrument. 87

PAGE 6

Measuring Transactional Distance in Online Courses: The Structure Component

Cheryl N. Sandoe

ABSTRACT

Online or web-based courses have become prolific in our educational environment over the past several years. The development of these courses can be guided by systematic design models to ensure quality instructional design. Transactional distance, the theory that claims the distance an online student feels is more of a pedagogical distance than a geographic one, consists of three factors: structure, dialogue, and learner autonomy. Accurate measurement of these three factors is needed in order to substantiate the theory's claims and to best determine the delivery implications. This study produced an instrument that measures the structure component of the transactional distance theory as it pertains to the online environment. A total of 20 online courses were evaluated using the Structure Component Evaluation Tool (SCET). Experts in the field validated the instrument, and reliability was determined by calculating Cronbach's alpha as well as examining inter-rater reliability. The SCET also excelled in a comparison to other instruments in the field in terms of its ability to produce rich, valid information about the structure of online courses.
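[Editor's illustration.] The two reliability statistics named above, Cronbach's alpha for internal consistency and coefficient kappa for inter-rater agreement (see the List of Tables), are standard computations. The Python sketch below is a rough illustration only, not the author's analysis code; the course scores, the 1-4 rating scale, and all names in it are invented for the example.

# Illustrative sketch: estimating the two reliability statistics
# from a small matrix of hypothetical SCET ratings.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Internal consistency for a matrix of shape (courses, items)."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def cohen_kappa(rater1, rater2) -> float:
    """Chance-corrected agreement between two raters' categorical scores."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(r1, r2)
    observed = np.mean(r1 == r2)                     # proportion of exact agreement
    # expected agreement if the raters scored independently at their own base rates
    expected = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (observed - expected) / (1.0 - expected)

# Hypothetical data: 5 courses scored on 4 SCET items (1-4 scale, invented)
scores = np.array([[3, 4, 3, 4],
                   [2, 2, 3, 2],
                   [4, 4, 4, 3],
                   [1, 2, 1, 2],
                   [3, 3, 4, 4]])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")

# Hypothetical ratings of the same five courses by two raters
rater_a = [3, 2, 4, 1, 3]
rater_b = [3, 2, 4, 2, 3]
print(f"Cohen's kappa:    {cohen_kappa(rater_a, rater_b):.2f}")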

PAGE 7

CHAPTER ONE
INTRODUCTION

The demand for online learning has become inescapable (Wagner, 2001). This demand has been spurred by teacher shortages, the need to attract new students to higher education, and an increasing demand placed on employees by their employers to remain competitive by continuously updating required workplace skills. This alone has spawned a great demand for an increase in online course offerings in many colleges and universities. The potential to create, develop, and offer opportunities to meet these demands and to establish lifelong learners is greater now than it has ever been before, given the advent of and continual advancements in technology. As a result, to expedite the development of online courses, many designers and professors are putting their traditional classes online by uploading all of their class notes, creating an enormous amount of shovel-ware (i.e., simply uploading all lecture notes void of instructional design principles). Very real learning issues that exist in a traditional classroom are consequently being transferred to the virtual classroom: issues such as how the course is structured (i.e., the structure variable), the communication that occurs throughout the duration of a course between instructors and students as well as communication amongst students themselves (i.e., the dialogue variable), and the individual characteristics that each learner brings to the classroom (i.e., the learner autonomy variable). The extent to which these inclusive variables are in opposition or not balanced, regardless of the delivery medium, theoretically can affect learners in many ways, possibly leading to

PAGE 8

lifelong impairment of learning potentials or desires, thus jeopardizing the very goal that was initially sought.

Imbalances can occur when the structure and dialogue variables of a course are low, when the dialogue and autonomy variables are low, when the structure and autonomy variables are low, and when the autonomy and structure variables are low. (Notice that three out of four of the above dichotomies include the structure variable.) Transactional distance is a construct that addresses all of these variables; thus it permeates every educational program and speaks to each one of these issues. Hence, distance is not determined by geography but by the way in which instructors, learners, and the composition of the learning environment interact with one another. Being able to individually measure each facet of the transactional distance construct is paramount to research efforts, so as to provide practitioners with the ability to assess their designed instruction for organization and learning delivery.

Statement of the Problem

Lifelong learning is becoming the norm, not to mention the expectation, in industry. In order to provide this continual learning, web delivery mediums are being heavily tapped. To develop a population of competent learners within the online environment, educational researchers need to examine issues that affect the learner's ability to adapt to the online learning environment. By so doing, learning barriers can be broken within this medium of learning, thus benefiting the student and decreasing the frustration level of instructors.

PAGE 9

Knowing the best practices to use when structuring an online learning environment is imperative to the flow and understanding of the course, as well as to fostering the success of the learners in this environment. Instructor and student frustration can be greatly minimized if the course structure communicates efficiently to the learners. How an instructor designs or lays out a course to present the content is critical to the student, since the design or layout of information can influence how students learn the material (NC State University, 1998). It is good practice to clearly tell the student why an activity is included, how much time they should spend on the activity, and in what format to submit a response (Bernard, 2003). To date, there has been no means of quantitatively measuring the structure of an online course. Moore, in his theory of transactional distance, has identified course structure as a variable that can influence the student's perception of distance when participating in an online course (Moore & Kearsley, 1996).

Purpose of the Study

The purpose of this study is to design and develop an instrument that can be utilized to measure the structure of an online course. This instrument is intended to be used for assessing the structure component of an online course by instructional designers and researchers both in and out of the field of instructional design. The proposed study contains objectives for developing a creative approach to measuring the structure component of transactional distance found in online courses. By so doing, I anticipate that further

PAGE 10

investigation into the nature of transactional distance, and examination of the possible factors that contribute to high and low transactional distance, will assist in guiding future research and development efforts for all modes of courses as well as illuminate the construct itself.

Research Questions

Three research questions have been developed in conjunction with this study. These research questions are:

1. What specific components of an online course define the structure variable of the online course?
2. What is the content-related evidence that the designed measurement is a valid measure of the structure variable?
3. What is the estimated reliability of the designed measurement?

The process by which these three research questions will be answered involves examining the course design process to extract the parts of the process that directly affect the structure of a course. Content-related evidence of the Structure Component Evaluation Tool (known as the SCET throughout the remainder of the dissertation) will be considered throughout the development process. Categories and sub-categories (listed below) were created, based on experience and a search of the literature, to guide the development of the item specifications. The specific areas and sub-areas that are being examined and included in the instrument to define course structure are:

PAGE 11

1. Content
   a. Overall
   b. Syllabus
   c. Sequencing
   d. Course Schedule
2. Context
   a. Overall
   b. Consistency
   c. Flexibility
3. Interactions
   a. Student to Instructor
   b. Student to Student
   c. Student to Interface

Item review and revision will be conducted as needed following the item writing. Four subject matter experts, the researcher and three others, will review the written items. Recommendations will be taken for the development of new items, modifications of current items, and modification of current categories. The instrument was pilot tested by the researcher and a colleague or doctoral student. I used it to evaluate the structure of two online courses. Necessary changes were made, and the final draft will be sent to the subject matter experts for their review.

PAGE 12

Definitions

Transactional distance: the universe of teacher-learner relationships that exist when learners and instructors are separated by space and/or time (Moore, 1993).

Theory of Transactional Distance: hypothesizes that distance is a pedagogical, not geographical, distance. It is a distance of understandings and perceptions that can lead to a communication gap or a psychological space of potential misunderstandings between people (Moore, 1996).

Structure component: a variable of the transactional distance theory that refers to how the instructional program is designed.

Dialogue component: a variable of the transactional distance theory that refers to the communicative transaction between and among students and teachers.

Learner autonomy: a variable of the transactional distance theory that refers to the characteristic of self-direction.

Discriminant Validity: showing that two or more measures are not related, or that relationships between measures from different constructs are low.

Construct Validity: an assessment of how well theories or ideas translate into actual programs or measures.

Content Validity: the extent to which a measure assesses all the important aspects of a phenomenon that it claims to measure.

Learning Management System: a means of managing learners and course content that provides the ability to keep track of a learner's progress, as well as managing content or learning objects that are served up to the right learner at the right time.

PAGE 13

Learner-Learner Transactional Distance: refers to the psychological distance that learners perceive while interacting with other learners.

Learner-Interface Transactional Distance: refers to the degree of user friendliness/difficulty that learners perceive when they use the delivery systems.

Learner-Instructor Transactional Distance: refers to the psychological distance of understandings and communication that learners perceive as they interact with their teacher.

Learner-Content Transactional Distance: refers to the distance of understandings that learners perceive as they study the course materials and the degree to which the materials meet their learning needs and expectations for the course.

ADDIE: an acronym that refers to a generic model of the five phases of instructional systems design: Analysis, Design, Development, Implementation, and Evaluation.

Structurally Sound Course: a course that is developed in conjunction with instructional designers or an instructional design team and has run live for at least one semester so that first-time errors/bugs have been found and fixed.

Delimitations

The intended use of the instrument is by researchers in the field of educational/instructional technology and instructional designers. The validity of the instrument that will be developed should not be generalized for use with a population that does not fit within these parameters.

PAGE 14

Limitations

The sampling of courses to be rated as to their structure is purposeful, which may limit the generalizability of results. The integrity of the instrument is dependent upon the experts who are evaluating it, so consistency in experts, from course to course, is preferred. The sample of courses used during instrument development is localized within a single geographic region. This may raise issues in the sampling of culturally diverse course content facilitated by instructors of diverse cultures. Therefore, a possible limitation to the analysis of structural components is whether or not there exists a difference in the structural elements of transactional distance due to the culture of the instructor (subject matter expert) or designer.

PAGE 15

CHAPTER TWO
REVIEW OF THE LITERATURE

Introduction

A review of the literature was conducted to investigate previous research concerning transactional distance. The search was expanded to include any research regarding the structure component of the transactional distance theory. This chapter is divided into several sections: a brief history of the theory, the course design process, the course design process and structure, the importance of structure in course design, the instructional elements of an online course, the need for an instrument to measure the structure of online courses, and the categories/sub-categories used in the development of the instrument for measuring course structure.

Transactional Distance Theory

The theory of transactional distance was developed by Michael Grahame Moore from the concept of transaction derived from John Dewey and developed by Boyd and Apps. Boyd and Apps described the construct as the interplay among the environment, the individuals, and the patterns of behaviors in a situation between people (Boyd & Apps, 1980). Moore expanded the theory by proposing that distance education is the transaction. He further states that distance education is the interplay between people who are teachers and learners, in environments that have the special characteristic of being separate from one

PAGE 16

another, and that a consequent set of special teaching and learning behaviors exists (Moore & Kearsley, 1996).

The theory of transactional distance seeks to isolate the elements of educational transactions that can critically influence learners in a distance education environment. Transactional distance exists in all educational events, even those in which learners and teachers meet face-to-face in the same learning environment (Rumble, 1986). Therefore, distance is not defined by geography but by the methods of interaction between instructors, learners, and the learning environment, and the extent to which they interact with one another. The degree of distance felt by a student is dependent on the level of autonomy present within the learner. For example, those learners with a high level of autonomy are emotionally independent of an instructor and have a self-concept of being self-directed, whereas learners with low levels of autonomy tend to depend on the instructor for guidance through course structure and communication, and tend to exhibit more dependency throughout the learning process (Muller, 2003).

Moore has agreed that much of what we already know regarding learning and teaching can be applied to an online environment. One issue he noted is that if the distance between instructor and student, and between student and student, is great, then traditional expository teaching can be transformed significantly and alternative methods of teaching are needed (Kanuka, Collett, & Caswell, 2002). Moore's theory hypothesizes that distance is a pedagogical, not geographic, phenomenon: it is a distance of understandings and perceptions that may

PAGE 17

possibly lead to a communication gap or a psychological space of potential misunderstandings between people (Chen, 2001). Additionally, Moore suggested that this distance had to be overcome in order for effective, deliberate, planned learning to occur.

The variables that Moore uses to define his theory are dialogue and structure (as two critically underlying variables) and learner autonomy (the previous two variables are in relationship to this one) (Moore & Kearsley, 1996). The dialogue and structure variables encompass the instructional dimension. Dialogue, for purposes of this study, is the interaction between instructor and student as well as interaction amongst the students themselves. The structure dimension represents the manner in which the course is designed and the way in which the content and constructs of the course are taught. It can and does include how and when communication (dialogue) takes place. For example, in a course syllabus the instructor might outline the manner and the timeframe in which he will respond to email, discussion postings, etc. Structure expresses the rigidity or flexibility of the program or educational objectives, teaching strategies, and evaluation methods (Moore, 1996). Structure also refers to the organization and delivery of learning events and activities in a distance education environment (Kearsley & Lynch, 1996). Learner autonomy is the extent to which, in the teaching/learning process, the learner, not the instructor, determines the goals, the learning experiences, and evaluative decisions. Relationships exist between structure and dialogue and between structure and learner autonomy. None of the variables surrounding the theory are mutually exclusive. This does not mean that each variable cannot be measured independently. On

PAGE 18

the contrary, to gain a complete understanding of the relationships among the variables of transactional distance, each one must be defined independently. Valid and reliable measurement techniques must be established for each variable in order to communicate the magnitude of the variable, thus allowing their effects and inferences about their relationships to be studied.

According to Moore's theory, learning environments that are rich with directions and guidance, combined with both course design and dialogue, are said to have a low level of transactional distance. In contrast, when learners are left to their own devices, making their own decisions about strategy, and have minimal dialogue, the level of transactional distance is said to be high. However, the above scenarios are dependent on the level of autonomy of the learner. For example, students with advanced competence as autonomous learners tend to be quite comfortable with less dialogic programs with little structure, whereas more dependent learners prefer programs with more dialogue and varying degrees of structure that depend on the closeness of the relationship that the student has with the instructor. The closer the relationship with the instructor, the less structure a student desires (Muller, 2003). Many online distance education courses contain a high level of transactional distance, and alternative teaching strategies are needed to lessen the level of transactional distance. Properly utilizing tools available in the particular educational environment can potentially enhance the learning experiences.

A factor analysis study conducted at the Helsinki Virtual Open University (HEVI) and the Apaja Internet Service from 1995-99 reported disadvantages of

PAGE 19

learning in a virtual environment. The factor solution for disadvantages of learning in a virtual university reported a lack of interaction with other students as the highest loading factor. Other detrimental factors were difficulties in communication, lack of personal guidance (possibly speaking to the structure component of the transactional distance theory), and difficulties with the environment as a whole. Horn (1994) and Hirumi and Bermudez (1996) found that, given proper instructional design, distance courses can be more interactive than traditional courses and can provide more personal and timely feedback to meet students' needs than is possible in large, face-to-face courses. Additionally, research has shown that both students and faculty have added responsibilities in a distance environment. Faculty have the task of altering course design and teaching strategies to realize the benefits of technology and assure maximum interaction. However, students must assume more responsibility for their learning by taking the initiative to request clarification and feedback when it is needed (Malone et al., 1997).

The Course Design Process

In order to determine the components of the structure variable needed in a course, attention turns to the process of instructional course design. There are many ISD (instructional systems design) models, but almost all are based on the generic ADDIE model, which stands for Analysis, Design, Development, Implementation, and Evaluation. Each step has an outcome that feeds the subsequent step. When discussing instructional design one must refer to the

PAGE 20

wisdom of Dick and Carey. They state that instruction is a systematic process in which every component is critical to the learner's success (Dick & Carey, 1996). Just as the variables of the transactional distance theory are interrelated, so is the design process. This approach consists of a set of interrelated parts that are all working together towards a goal. The purpose of the instructional system is for learning to occur. The components of the system are the student, the instructor, the content (or course materials), and the environment (Dick & Carey, 1996). These components are present in some form or capacity in any learning environment. In the online learning environment, the instructor and student are often separated geographically, but due to technology the separation need not creep towards the pedagogical elements of the environment. Not only are there asynchronous methods of instructing online, but synchronous opportunities, which allow students to view their instructor and their instructor to view them via a web cam, as well as providing the ability to hear voice tones through voice over IP, are becoming a realistic and prevalent means of instruction online as high-speed broadband connections become a reality and the norm for many students.

When beginning the design process, Dick and Carey (1996) suggest that an analysis of the learning environment take place to determine "what is" and "what should be." For the online environment, the "what is" encompasses a review of what tools are available to the facilitator for instruction. The "what should be" is equipment (hardware), software, and resources (both for the student and for the instructor) that adequately support the online environment. Designing a course that is friendly and usable by the target audience is part of

PAGE 21

the process, but another equally important aspect of course design is for the course to be implemented as planned. For this to occur, the facilitator (instructor, professor, or teacher) must be a part of the design process. If facilitator support of the course is not present, then the student(s) has an added barrier associated with the potential for learning to occur. Hence the reason facilitators must be included in the design process. Their buy-in to the structure of the course is imperative to how efficiently and effectively the course functions, and it directly affects the success of the course and the subsequent success of the learners.

Another process that obviously bears mentioning is Gagne's nine events of instruction; according to Robert Gagne, there are nine events that activate the processes needed for effective learning (Gagne, 1985). Gagne believes all lessons should include this sequence of events: gain the learner's attention, inform the learner of the objectives of the lesson, stimulate recall of prior learning, present the stimulus material, provide guidance to the learner, elicit the learner's performance, provide feedback to the learner, assess the learner's performance, and enhance retention and transfer. Every one of these events plays an important role in the design of online courses. In order to present the material and provide guidance to the learner in a productive manner, aspects of the course structure need to be considered. If a student does not comprehend the layout (structure) of the course, they will not know how to access the stimulus material. If the facilitator or designer does not provide necessary guidance to each learner through appropriate dialogue, the learner's performance, retention, and transfer will potentially be less.

PAGE 22

The Course Design Process and Structure

Within the context of course design, structure can refer to two distinct but related aspects (Scott, 2003). This is a distinction that has been familiar to educators from the time of Aristotle onwards: the distinction between knowing why (theoretical, conceptual knowledge) and knowing how (practical, performance knowledge). One definition of structure can apply to the layout of a course: how material is divided into segments such as units or modules, how course tools are made accessible (i.e., in a course menu bar or on an organizer page), basically how the layout of all content, resources, and tools is organized. Many of these decisions can be and are dictated by a computer-based authoring system that provides shells in which an instructor can lay out their course. These shells serve somewhat as a template. Initially, these management systems did not allow for much flexibility, so course design and structure were somewhat prescribed. However, learning management systems are becoming much more sophisticated and are providing greater flexibility for course design by allowing for individual customization for students via parameters such as selective release. This particular tool allows the instructor to set boundaries for individual students that grant access to course materials upon successful completion of previous assignments, assessments, or readings.

Another definition of structure can refer to the conceptual framework that ensures that the course is a coherent whole. This structure determines how the content may be ordered and organized for instructional purposes. Included in this organization are factors such as the following: does the student easily navigate

PAGE 23

the course within a particular concept, and are there logical relationships between key concepts and activities?

Another dimension of course structure refers to the extent of rigidity or flexibility in the course organization and delivery. This dimension is present in both the layout of the course and the conceptual framework, and it addresses issues such as: Can students move ahead in a course? Is selective release of materials used in the design of the course so that a student must perform a particular function or assignment successfully before being able to proceed? Do students have the ability to organize chats amongst their own group members or classmates without soliciting the assistance of the instructor? How are course tools accessed, and only in one way? Huang (2002) concluded in his study that online courses can provide good organization with regard to objectives, assignments, and grades, but can also deliver course content in a flexible manner for learners to access and learn at their own pace. To provide for future studies, it is important that the instrument designed as part of this study be able to measure the rigidity/flexibility of the online course as well.

When thinking about the course design process and structure, one can refer to numerous cognitive theories; however, since the Structural Learning Theory's greatest strength is its ability to guide designers/instructors in the selection of content and sequencing requirements so as to provide only the particular instruction needed by the learner (course structure), I have chosen to highlight this theory in the discussion of structure (Scandura & Stevens, 1987).

PAGE 24

The Structural Learning Theory (SLT), derived by Joseph M. Scandura (Scandura & Stevens, 1987) from one of Scandura's earlier cognitive theories of learning, focuses on deciding what to teach. In this theory, all knowledge is represented by rules. These structural learning rules include both declarative and procedural forms of representation (Scandura & Stevens, 1987). Each rule contains three components: domain, range, and operation. According to Scandura, the domain component is made up of internal cognitive structures that correspond to the total of all relevant environmental learning elements of a learning situation. In other words, the domain is the content upon which a learner operates to produce the results that are specified in the objectives (Scandura & Stevens, 1987). If there has been an error when developing the domain component (structuring the content) and it fails to function as intended (usability issues) due to this conceptual error, then learner operations will be deficient. If a student cannot follow a particular layout of a course (usability) and cannot determine which action to take next when participating in an online course (a procedural form), there has been a breakdown somewhere within the domain element. Hence, the structure component found in a distance learning environment has not been cultivated, and learner success can be in jeopardy. When participating in an online environment, students must travel through various navigational paths defined within the course structure. If a learner cannot follow a particular path because the structure of the course is poor in either content or layout, again, the student's success is at risk.

PAGE 25

With the advent of the newer learning management systems, courses with customized lessons are a reality.

Importance of Structure in Course Design

In 1990, the American Library Association Presidential Committee on Information Literacy endorsed the value of information literacy as a means of correcting social and economic inequities (Goetsch & Kaufman, 1998). The report stated that people who are information literate are those who have learned how to learn, and they are prepared for lifelong learning because they can always gather information for any task or decision. The report continued to emphasize that informationally literate people master the learning construct because they know how knowledge is organized, how to find information, and how to use that information. As previously stated, the structure dimension in a course represents the manner in which the course is designed and the way in which the content, constructs, and information flow are communicated to the learner. The course structure should identify what information is needed and how the learner is to go about finding, using, and managing the information. By failing to structure an online course effectively, the course can fail the learner to the extent to which it promotes attainment of information literacy skills, in addition to distancing the student from the entire online experience. In contrast, by structuring a course effectively, information competency can be encouraged and pedagogical distance minimized.

PAGE 26

John Biggs (1999) wrote that "Learning is the result of the constructive activity of the student. Teaching is effective when it supports those activities appropriate to understanding the curriculum objectives." According to this view, for the learner to achieve the stated outcomes, two factors come into play. First, the assessments or activities must allow the learner to demonstrate understanding; second, the learning process around which the course is built (course structure) must support the student's approach to satisfying the course outcomes, which also means that the student grasped the course objectives. To prevent a student from becoming a passive learner, it is important to make clear to the student the purpose of the activities included in the course. The student should be told why an activity is included, how much time should be spent on the activity, and what form of response is required. Aligning learning, teaching, and assessment demands consistency (course structure) in producing course objectives, learning activities, and outcomes, and in providing a teaching process to support the student(s) (Hall, 2002). Whitston (1998) stated that effective use of educational media depends upon curriculum design, and the Chic (Courseware for History Implementation Consortium) project's findings suggest that in order for the use of new media to be meaningful, it must be driven by curriculum design (Hall, 2002). Hall (2002) further states that the learner's ability to make sense out of a learning experience depends upon the course structure, mediated through the instructor as facilitator.

A high degree of structure must be present in a distance education program (Kearsley & Lynch, 1996). Moore and Kearsley (1996) agree that many

PAGE 27

important issues exist in distance education, but those having to do with curriculum structure are the most fundamental. Curriculum structure is the component that distinguishes formal from informal learning experiences. Students can acquire information from various sources independent of an instructor by browsing the Internet or searching through books and journals in a library. However, by including a structure component to learning and organizing the information and the activities into a course offering, a valuable educational experience is created. In a traditional classroom, structure is at least implicitly understood, whereas in the online environment it is much less clear, due to the newness of the medium and the multiple ways in which it can be accomplished. In the online environment, one of the more important design aspects is to set and communicate clear expectations to help students keep track of their learning. These expectations can be communicated by having the course and each unit's objectives stated clearly for the students, specifying criteria that will allow students to assess their own proficiency, and clearly communicating assignments and schedules. Statement of the expectations will lay the groundwork for construction of a learning experience that explicitly links performance with the objectives and the criteria.

The advantage of online learning fails to exist when the structure of the course is inadequate. Speaking to the structure of the dialogue component in a course, the student must understand when, where, or how to communicate with their instructor or classmates to maintain a sense of belonging or community in the course. Should this communication mechanism become impaired because

PAGE 28

confusion exists on how or when to communicate, the student must rely solely on the layout of the course to find answers to any questions. Without the structural boundaries of the communication tool, misconceptions cannot be shared in dialogues amongst learners and teachers. Formative feedback regarding their performance on learning activities and summative feedback on how well they are meeting the learning outcomes of the course are lost as well. In a nutshell, instructional design provides structure to the student's process of working through course material and directs students on how and where to access and receive assistance when needed.

Instructional Elements of an Online Course

To effectively design a course, the logical and conceptual structure of the course must be exposed (Scott, 2003). Organizing a course is a necessary task when developing online, but the most vital components in the course are the content and how the content is accessed, or usability. How the course is designed or laid out to present this information (NC State University, 1998), as well as the content itself, can both influence how and if the student learns the material. Ingram (2002) suggests that the structure of a course web site will affect the site's usability. He further states that no information or activity can teach anything if students cannot find it or respond to it correctly. By examining the research on web site usability, we can begin to determine the instructional elements needed to structure online courses. Jakob Nielsen (1993) defines the usability of any technological system as consisting of five major characteristics:

PAGE 29

learnability, efficiency, memorability, error rates, and satisfaction. Learnability refers to the ease and speed with which beginners can learn the system. Efficiency refers to the ease and speed with which one can use the system after it has been learned. The memorability of a system is the ease with which one can return to the system after a period of time and still remember how to use it, and error rates refer to how often the learner makes mistakes with the system and how easily they recover from those mistakes. Lastly, satisfaction is a subjective measure that quantifies whether users like using the system and whether they believe that they were able to benefit from the system.

When designing an online course, all five characteristics need to be considered. However, it is not likely that all can be met equally in all areas. Any design will involve compromises among the five goals. To design web usability for a course, one should observe students performing representative tasks (Rubin, 1994). Overall, a good instructional site should be easy to learn: a new student should be able to find their way around the site and figure out the structure of the site and the location of various types of information. The course should also be efficient for the experienced online learner. Memorability is not much of an issue in an online course site, since students access the course regularly; the need for them to remember is reduced. However, should an institution develop online courses using a particular management system, it would help their students if certain tools (such as the discussion tool, the email tool, etc.) were consistent both in use and in location amongst the courses the college offers, thereby increasing course efficiency and making it easier for a student

PAGE 30

to learn to navigate the course. By doing so, it may help to increase the satisfaction of the learner with the overall online experience at the institution, especially if the learner enrolls in more than one online course. As far as errors go, maintaining a working site is of utmost importance when facilitating an online course. Frequently checking to make sure your links, programs, and scripts are in working order will help with student usability and will cut down on unnecessary frustrations. On the other side of error rates, it is important that the designer do their best to prevent a student from having to look or search blindly for any element of the course. For a student to have to do so speaks volumes about the structure component, or lack thereof, in the course. Navigation in a course must be explained or obvious to the learner.

Specific components of usability should be present in a good instructional website. Simple step-by-step instructions provided with the course can aid in alleviating student anxiety related to the technology; Ingram (2002) states that information or activities cannot teach anything if students cannot find them or respond to them correctly. The site should be easy to learn: a new student in the course should be able to find their way around the site and figure out the structure of the site, as well as the locations of various types of information, without difficulty. Next, the site should be efficient for experienced students: they should be able to quickly and easily locate the information, activities, and tasks they require to be successful in the course. Finally, students should not be unnecessarily distracted by information they needed at the beginning of the course but no longer require (Ingram, 2002).

PAGE 31

Michigan Virtual University states that usability standards deal with function as it supports an optimal learning environment. These standards are: interface consistency, learner support, navigational effectiveness and efficiency, functionality of graphics and multimedia, and integration of communication (Distance Education Report, 2002). Suggested elements found in the literature include, but are not limited to: a homepage, intro page, or overview page; a syllabus; an area that identifies assignments; a quizzing or assessment page; a course content/materials or notes page; resource pages; and study guides. One research article defines the study guide as the student's main reference to the course content, structure, and activities (Carr-Chellman & Duchastel, 2000). No matter the name of the file (many instructors would call this the syllabus), according to Carr-Chellman and Duchastel (2002), the document must include the traditional elements of good instructional design, particularly a clear description of the instructional aims and learning objectives of the course. Additionally, the document should include a list of learning resources (i.e., textbook chapters to read, associated articles to consult, supplementary readings, and a guide containing websites of interest) and a list of assignments or projects along with due dates and assessment criteria, preferably linked to the learning objectives or outcomes. Also, pages that address frequently asked questions, that specify where a student can find out how to get help with a problem they have encountered, and an area specifically designed to assist students in navigating through the course can be invaluable. These online documents must provide a level of detail that is sufficient to allow the learner to proceed in the

PAGE 32

course without substantial personal interaction or clarification from the instructor. Clear descriptions and directions are a must within this document. Jeris and Ann (2002) state that online syllabi serve as an advance organizer of the content and processes that unfold during an online course.

Key elements included in the Illinois Online Network Program (IONP) outline (2003) consist of content that has been converted to fit the online environment by organizing course content into modules with clear deadlines for all assigned work within the unit. This could take the form of an online calendar or a course schedule. The outline further states that clear, achievable goals with learning objectives relevant to the learning needs of the students are sought, while promoting maximum dialogue among the participants. The program suggests that instructors give clear and simple assignments; reduce lectures and compensate with open-ended remarks that elicit comments and encourage varying viewpoints; and provide a focus on application of knowledge to the real world while fostering critical thinking skills, so as to promote an interchange of ideas among students and the facilitator. The final component reported by the IONP to produce a successful online program is technical support. They state that the technology used to deliver instruction must accommodate the lowest common denominator in the class. Minimum requirements are necessary to participate, but not the latest and greatest system on the market at the time. Experiential findings using web technology in another study showed that web support personnel should be consulted regarding any material distributed to

PAGE 33

students. Students should also be given information on how and where to contact web support personnel (McAlpine, Lockerbie, Ramsay, & Beaman, 2002).

Instrument Need

The availability of current literature investigating the measurement of transactional distance is minimal, and access to an instrument that focuses on measuring only the structure component is non-existent. The studies that are available have measured only pieces of the construct, such as the interactive component, or have limited the measurement to a particular form of a course (i.e., interactive television), thus hampering the external validity of the study. Most of the instrumentation used in these studies has identified limitations.

Bischoff (1996) conducted an exploratory study that examined the effect of transactional distance on the education of health professionals in an interactive television learning environment. Student volunteers (n = 221) in thirteen public health and nursing graduate courses at the University of Hawaii at Manoa responded to a 68-item investigator-developed questionnaire (on a 5-point Likert scale) regarding elements of dialogue, structure, and transactional distance in their courses. Principal components and internal consistency reliability analyses verified the presence of three factors: structure, dialogue, and transactional distance. Internal consistency reliability was assessed using Cronbach's alpha to test instrument reliability. Content validity was obtained through consultation with experts in the field of education and with those familiar with interactive television as an instructional delivery medium. The purpose of this research study was to fill

PAGE 34

gaps between theory and practice by gathering empirical data about the variables of the transactional distance theory, comparing these elements in two learning environments: a distance format (two-way interactive television) and a traditional format (face-to-face). This study included the dialogue and structure components of transactional distance and stated that no one instrument or methodology has been established for measuring transactional distance and its individual components. The omission of the student autonomy component (a known variable) in the measurement of transactional distance prompted many unanswered questions as to the effect of the components that were studied and their relative effect(s) on the transactional distance of the courses, since that distance is very much a function of the expectations that a student has upon entering into the learning process, and those expectations emanate from the internal skills learners have developed from previous learning and life experiences.

Saba and Shearer (1994) conducted a study that explored the idea of transactional distance using a system dynamics model. Their instrument was adapted from a classroom interaction analysis and was limited to the desktop videoconferencing context, where single individuals interacted with the instructor. Excluded measures of transactional distance were the structure component, the learner autonomy component, and other forms of interactivity that would make up aspects of the dialogue component.

Chen's (2001) study focuses on the interactivity component as well. This researcher proposed to measure the components of transactional distance using

PAGE 35

an instrument with a five-point Likert scale that attempts to describe and analyze all situations facing a learner. It contained 23 items describing all the situations facing learners, including all aspects of communication in the online environment as well as interaction with the learning materials and the delivery medium used. Using seventy-one learners' experiences with the World Wide Web, Chen examined the postulate of Moore's theory and identified the factors constituting transactional distance. Four types of interactions were evaluated: learner-learner, learner-interface, learner-instructor, and learner-content. Exploratory factor analysis using a principal axis factor method was conducted, and it was concluded that this concept represented multifaceted ideas. Transactional distance, as perceived by learners, consisted of four factors (dimensions): learner-learner transactional distance referred to the psychological distance that learners perceive while interacting with other learners; learner-interface transactional distance referred to the degree of user friendliness/difficulty that learners perceive when they use the delivery systems; learner-instructor transactional distance involved the psychological distance of understandings and communication that learners perceive as they interact with their teacher; and learner-content transactional distance referred to the distance of understandings that learners perceive as they study the course materials and the degree to which the materials meet their learning needs and expectations for the course (Chen, 2001). This study focused on all components of transactional distance perceived by the learner in the World Wide Web environment. A suggestion that was made

PAGE 36

in the conclusion of the study was that, to fully address transactional distance, additional items that lie within the factors must be identified.

Ingram's (2002) study focused on the usability of two different course organizations (content organization vs. assignment organization). Ten subjects were tested with each course organization. All information was available on each site; only the organization varied. The test subjects first responded to a short questionnaire on their prior knowledge and experience using the Internet and the Web and their knowledge of the subject matter of the course. There were 11 tasks, all of which were things that a student would likely be required to be able to do in the course itself. The subjects then were asked to complete a second questionnaire to assess their satisfaction with using the system. The results suggested that instructional websites should be designed from a student-centered and assignment-oriented point of view. Many times a design is approached with a bias towards the structure of the content itself, whereas students attend a course wanting to find out what they have to do and how to do it. This study was task driven and did not account for any methods of instruction or address the learning process within the structure of the course. However, the study did highlight some useful information that could be incorporated into an instrument measuring the structure component.

Dr. Hsiu-Mei Huang (2002) from the National Institute of Technology in Taiwan conducted a study on student perceptions in an online mediated environment and found that interaction, course structure, and learner autonomy were correlated to each other because they had the same causal variable, the

PAGE 37

interface or delivery system. He also found that learners must possess the necessary skills to peruse the learning environment before they can be successful. Because course structure has a causal variable of delivery method, my study will focus on measuring structure in the online environment only. The primary elements that frame the structure of an online course (i.e., syllabi, study guides, course format (any mandatory face-to-face meetings), etc.) are included in the developed instrumentation. Huang's study attempted to develop an attitude scale to measure student perceptions of online courses; explore any relationships between student perceptions and demographic or general variables (e.g., age, gender, online course experience, computer skills, etc.); and investigate the relationships between interface and interaction (Huang, 2002). This study mainly employed a correlational research design and conducted descriptive, correlational, and multiple regression statistics. His study had a small sample size (n = 31), collecting data over two quarters. Huang operationally defined structure as the extent of rigidity or flexibility in the course organization and delivery. Whereas his study attempted to explore any possible relationships among the interface and interaction, course structure, and learner autonomy dimensions, this study will attempt to narrow that focus to include only those dimensions that measure the structure component of an online course. Additionally, the section in his instrument that addresses course structure is geared for students' responses and contains only two categories, course organization and course delivery. Huang calls for future research to explore more variables (descriptors) for each dimension of transactional distance. The SCET

PAGE 38

developed in this study will attempt to meet that request by providing an instrument that measures the structure component of transactional distance and its various dimensions, and that can also be used as a guide for designers and instructors in the creation of their online courses. It will contain three categories of organization: content organization, delivery organization, and course interactions organization.
The proposed study contains objectives for developing a creative approach to measuring the structure component of transactional distance found in online courses. A measurement of this type is needed to enable future researchers to determine effects such as increasing structure in a course with low dialogue, increasing structure in a course that contains students with characteristically low autonomy, decreasing structure and providing greater flexibility for students who have a profile of high autonomy, or analyzing whether an increase in dialogue would compensate for a minimal amount of structure in a course. I anticipate that further investigating the nature of transactional distance and examining the possible factors that contribute to high and low transactional distance will help guide future research and development efforts for all modes of courses as well as illuminate the construct itself.

Course Structure Categories and Sub-categories
The following categories and sub-categories form the criteria that were used to guide the development of items included in the instrument. The categories were determined from examination of the ADDIE course design
process and Gagne's nine events of instruction. The content organization category was developed considering both the analysis and the design phases of the ADDIE model as well as Gagne's first five events of instruction. The delivery organization category was developed considering the design, development, and implementation phases of the ADDIE course design model and Gagne's events of gaining the learner's attention, informing the learner, presenting the stimulus material, providing feedback, and enhancing retention and transfer (through the flexibility sub-category). The course interactions organization category addresses the development, implementation, and evaluation phases of the ADDIE model and several components of Gagne's events: providing learner guidance, eliciting performance, providing feedback, and assessing performance.
An explanation of what is contained within each category follows. From these explanations, items were written to address each area of structure for the purpose of measuring the structure component of online courses. The overall sub-category found in the content and delivery organization categories was used to encompass the main, general features of the particular category; it does not eliminate the need for, or the importance of, the other sub-categories contained therein.

Content Organization
This category's components are created from the analysis and design phases of the course design process, where the target audience's characteristics
are examined along with how the course can meet the audience's needs.

Overall.
When examining the overall content of an online course, the instructor should be cognizant that the content is written for the intended target audience. Additionally, all goals or objectives should focus on what students should learn from the course. Any necessary supplemental references or materials need to be clearly stated and easily retrieved from anywhere within the course. The course also needs to provide a general FAQ section that gives students general directions for operating within the framework of the course structure. Included in this section are items such as how to submit an assignment, how to post and reply to a discussion posting, and how to send emails to classmates and to the instructor; course-specific questions can also be answered in this area, such as "How do I install the supplemental software needed for this course?" or "How do I view and interact with the multimedia contained within the course?"

Syllabus.
The syllabus found in an online course should focus on what your students will learn rather than what materials you will cover. In so doing, your focus when designing the syllabus will be directed appropriately so as to maximize your students' learning in the course (Bragg, 1998). By focusing on student learning, issues such as consistency among your course rationale, course content, objectives, student activities, and assessment will be taken into consideration. The syllabus page should detail subject areas to be covered,
required projects or assignments, tests, readings, college policies, accommodations, grading policies, and so on.

Sequencing.
Sequencing refers to the manner in which the material is presented. Within each unit or module, the material should be presented in a logical order that carries through the module. Each unit or module should be all-inclusive in that the student will be able to access content information and will know what the expectations for the particular unit are and how to meet them. The sequencing should be consistent across each unit of instruction within the course.

Course Schedule.
The purpose of the course schedule is to provide the student with one printable page that outlines the deliverables for the entire semester in which the course runs. Additionally, the course schedule should contain due dates along with directions on where to submit assignments and the readings required to prepare the student to complete each assignment. By providing such specifics in one general area, the instructor need only update this one page from semester to semester as far as dates and possibly page numbers are concerned. Its function is similar to the course calendar provided in many learning management systems; however, by providing a single page with all of this information, the student can print it and have it handy for quick review at any time instead of scrolling through an online calendar.
Delivery Organization

Overall.
The overall context of an online course speaks to the course's ease of use. The homepage of the course needs to be clear, simple, and uncluttered. The initial layout page should provide the student with access to all tools needed while participating in the course, and it should be intuitive as to where the student might go to find anything that is needed. The navigation layout of the course should communicate to the student where they are within the course and where they need to go next.

Consistency.
The usability of the course's navigation must remain consistent throughout the course. As the student moves from page to page inside a course, a navigation scheme such as a course menu bar should be visible at all times, allowing the student to travel back or forward to certain areas in a non-linear fashion. Providing this consistency within the course prevents the student from becoming lost and allows them to return to areas for additional directions or just a refresher on how to do something.

Flexibility.
Flexibility refers to the extent to which the learner has control over their learning environment and experience. If the course provides a good amount of flexibility, the student can skip pages, return to a previously viewed page, or move ahead if the material is easy for that student. The extent to which a course is
flexible speaks to how well the course can adapt to the learner and whether a student can proceed at their own pace. Additionally, any multimedia components included within a course should allow the student to control the play, rewind, stop, and pause functions so that they can maximize their viewing and listening experience.

Course Interactions Organization

Student to Instructor.
By addressing the structure of student-to-instructor interactions, we can determine how well these interactions have been structured within a course, if at all. This communication is necessary in that it directly affects the student's expectations. Many times these types of interactions will not be structured or will be unplanned. However, there are aspects of these unplanned interactions that speak to the structure component and need to be addressed. For example, does the instructor state in the course how they will respond to an email or a discussion posting? The instructor should communicate to the student how long to expect to wait for a response to an email or to a class discussion post. Some instructors may prefer to simply monitor the discussion areas and leave the postings mainly to the students, with an occasional comment to keep the discussion in line or on track. If this is the case, the instructor needs to communicate this policy up front to their students. Also, can students phone the instructor, and if so, are there any time constraints? Is the instructor available for a face-to-face meeting if the student is in the local area?
Student to Student.
This area should address how students are expected to communicate with each other. Does the course have group assignments? If so, how will the students communicate with group members? Also included in this area should be guidelines for students to follow that address appropriate online communication behaviors. Any other guidelines regarding student-to-student interactions should also be identified; for example, if there is a meeting offline or by phone, a transcript should be provided to the group so that those who could not attend are informed. Additionally, an instructor might want to request copies of all such transcripts as well.

Student to Interface.
This area addresses the usability aspects of the course. Can the student find the information or activities needed to participate effectively in the course? When the student first enters the virtual classroom, do they know where to begin without having to send an email to the instructor for clarification? Once a student understands the tasks that need to be done, can they interact with the environment to accomplish those tasks (i.e., submit assignments, download materials, take online quizzes, etc.)?
Chapter Summary
Communication possibilities and opportunities to offer online programs are now easy realities, globally as well as locally. Asynchronous text-based Internet communication tools and course content management systems are rapidly becoming the technologies of choice. They can better support interpersonal interaction and sustain two-way communication, and they provide a means to organize and present course materials. They have other advantages as well: they are not time or place bound, and they are more cost-effective than other available communication tools (Kanuka, Collett, & Caswell, 2002). As with any new tool available to educators, unique instructional issues invariably follow their introduction. Research is needed in this area to better understand the existing issues, how they impact the learning environment, and what can be done to improve educational practices. Further investigation into the nature of transactional distance and the factors that contribute to close and remote transactional distance is needed to further illuminate the construct and to suggest more effective teaching and learning strategies, as is refinement of existing instruments used to measure the components of transactional distance (Bischoff, 1996). Jung (2001) requests studies that discuss what is already known about learning and teaching with different communication technologies and that examine pedagogical features of web-based instruction in various teaching and learning contexts in order to make firmer conclusions about those features. Also needed are more rigorous data on pedagogical features of web-based instruction in various
teaching and learning contexts in order to draw strong conclusions about them (Jung, 2001). Specific questions that have been suggested include: Does the extent of rigidity or flexibility in the structure of a web-based instruction course affect dialogue and transactional distance? Other suggested questions for research encompass the effects of different types of interaction on learning and satisfaction in web-based instruction and how it can be designed to provide meaningful dialogue among participants through various types of interactions (Jung, 2001). Additionally, it has been suggested that successful distance education programs tend to have a high level of structure to produce effective learning, and that programs that fail do so because they lack sufficient structure (Kearsley & Lynch, 1996). All of the above questions require an instrument that tenably measures the structure component as it relates to transactional distance in order for the research to be conducted effectively. Measuring the structure component of transactional distance in a pure form (i.e., measuring the primary elements that structure comprises) has yet to be attempted. In my proposed study, the primary elements of structure that contribute to one of the variables that define transactional distance as described by Michael Moore (i.e., structure, dialogue, and learner autonomy) will be developed and studied for their ability to predict the structure of a course as it relates to transactional distance. To the extent that prediction of the structure component as it relates to transactional distance, using the designed instrumentation, is shown to be tenable, this study will aid designers and instructors in creating courses that possess
as low a transactional distance as possible. By being able to measure the structure component of a course, possible design and teaching strategies can be realized in an effort to increase student achievement, thereby possibly increasing student satisfaction and minimizing drop-out rates in the online environment.
CHAPTER THREE
DEFINING THE STRUCTURE VARIABLE

This study sought to develop an instrument that measured the structure component as related to the transactional distance theory. The content of the instrument is defined by a survey framework designed to serve three main functions:
1. To measure the structure component of the transactional distance theory.
2. To support future research in the area of transactional distance.
3. To assist designers and instructors in developing online courses.
To date, there is no instrument that solely measures this variable of the theory in the online environment.
The development of the SCET followed a criterion-referenced design procedure. The defensibility of this instrument depends upon the content validity of the items included. Content validation involved establishing a relationship between each descriptor and instructional design practices as well as content experts' review of each item and its placement within the instrument. Additionally, the items within a specific category should correlate highly with one another. In an attempt to ensure that the SCET measured a construct different from those measured by other instruments available in this area, discriminant validity analyses were conducted.
Specifically, the study addresses the following research questions:
1. What specific components of an online course define the structure variable of the online course?
2. What is the content-related evidence that the designed measurement is a valid measure of the structure variable?
3. What is the estimated reliability of the designed measurement?
The flowchart shown in Appendix E provides an overview of the procedure that was followed in order to answer these three research questions. Initially, I created items, based on a review of the research in instructional design of online courses and on experience (four years working with faculty designing online courses), to be included as part of a survey to measure the structure of online courses. Once the items were developed, I studied and reviewed them for possible grouping into categories. At this point, categories were formed that described each subset of items. Next, I enlisted the assistance of three subject matter experts in the field of instructional design for the online environment to sort the descriptors into the provided categories (Appendix G) in order to ensure the accuracy of items and their grouping, and I had them perform an item analysis on each descriptor. Each item within each category was analyzed for its clarity and its quality. (Quality addressed the descriptor's ability to contribute to the definition of the proposed category.) This was done using a Semantic Differential scale where 0 is no clarity or no quality, 1 is minimal clarity or minimal quality, 2 is moderate clarity or moderate quality, and 3 is maximum clarity or maximum quality.
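To illustrate how the placement sort and the clarity/quality ratings can be analyzed, a minimal sketch in Python follows. The descriptors, placements, and ratings in it are hypothetical stand-ins invented for the example, not the study's data; the logic simply counts expert agreement with the researcher's pre-defined placements and flags descriptors whose mean clarity or quality falls below the moderate level of 2 described above.

```python
# Illustrative sketch only: the descriptors, placements, and ratings below are
# hypothetical examples, not the study's data.

researcher_placement = {  # descriptor -> (main category, sub-category) pre-defined by the researcher
    "Course objectives are present": ("Content Organization", "Overall"),
    "A constant course menu is provided": ("Delivery Organization", "Consistency"),
}

expert_placements = [     # one dictionary per expert, same structure
    {"Course objectives are present": ("Content Organization", "Syllabus"),
     "A constant course menu is provided": ("Delivery Organization", "Consistency")},
    {"Course objectives are present": ("Content Organization", "Overall"),
     "A constant course menu is provided": ("Delivery Organization", "Consistency")},
    {"Course objectives are present": ("Content Organization", "Overall"),
     "A constant course menu is provided": ("Content Organization", "Overall")},
]

ratings = {               # descriptor -> (clarity, quality) pairs, one per expert, on the 0-3 scale
    "Course objectives are present": [(3, 2), (2, 2), (3, 1)],
    "A constant course menu is provided": [(3, 3), (2, 3), (3, 3)],
}

agreeing = 0
for item, (main, sub) in researcher_placement.items():
    votes = [expert[item] for expert in expert_placements]
    # "Some form of agreement": at least 2 of the 3 experts match the
    # researcher's main-category placement.
    if sum(1 for m, _ in votes if m == main) >= 2:
        agreeing += 1
print(f"Agreement with pre-defined placements: {100 * agreeing / len(researcher_placement):.0f}%")

for item, pairs in ratings.items():
    clarity = sum(c for c, _ in pairs) / len(pairs)
    quality = sum(q for _, q in pairs) / len(pairs)
    if clarity < 2 or quality < 2:
        print(f"Review: '{item}' (mean clarity {clarity:.2f}, mean quality {quality:.2f})")
```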
The sort produced three separate draft instruments, one from each expert. I analyzed each category of descriptors from the experts' drafts for percent agreement with my pre-defined instrument, examining both the main and the sub-category placements. The goal was to obtain a minimum of 75% agreement with my pre-defined categories. Had 75% agreement not been obtained from the sorting exercise, the researcher would have used the item analysis to help determine whether a descriptor was unclear, creating a placement discrepancy, or whether the item was of poor quality overall and needed to be eliminated. The mean of each item's clarity ratings and the mean of each item's quality ratings were compared to assist with this decision. If the mean for a particular descriptor was below 2 (moderately clear or of moderate quality), the descriptor was reviewed for possible revision or re-placement to another category. Once 75% agreement with the researcher's draft was achieved, the instrument was considered ready for use in the pilot study. Initial results showed that 88%, or 44 out of the 50 items, demonstrated some form of agreement with either the main or sub-category placement.

Pilot Study
A purposeful sample of four courses was examined by myself and an IT doctoral student to test the revised instrument. Courses used for the pilot study were online courses from an accredited higher educational institution. The sample for the pilot study consisted of two courses that were created with minimal assistance from an instructional designer (assumed to be less structurally sound) and two courses that were created with regular assistance of
an instructional designer or instructional design team (more structurally sound). All courses had run live or been piloted for a minimum of one semester so that most first-time errors and bugs had been found and fixed. The researcher chose the courses to be used in the pilot study.

Field Study
Upon successful completion of the pilot study, the actual study consisted of myself and two additional colleagues, one of whom is an IT doctoral student; each of us evaluated 10 online courses that were purposively sampled (5 structurally sound courses and 5 less structurally sound courses, further defined below). Once the data were collected, statistical analysis was conducted to determine the correlation of each category of items, along with the calculation of Cronbach's alpha to measure the survey's internal consistency as it pertained to each category. Additionally, inter-rater reliability was estimated. Discriminant validity was assessed by comparing the SCET to parts of other existing instruments to show difference, and to show that currently existing instruments do not solely measure the structure component of transactional distance.

Subject Matter Experts
Three subject matter experts were recruited to participate in the process of survey development. In order to enhance validity, a stratified sampling strategy was employed to secure a nationally known expert on transactional distance, a practicing faculty member who is experienced in the online environment, and a
distance learning administrator. See Appendix B for a sample of the email used to solicit expert reviewers.

Qualifications.
Each subject matter expert had knowledge of the instructional systems design process. The faculty expert teaches web-based and web-enhanced courses in community health nursing. Her courses incorporate the use of technology to facilitate student learning through web and library searches to access scientific evidence for nursing practice. The distance learning administrator brings extensive and practical expertise in computer-based learning, instructional design, and distance learning, and she was one of the initiators of web-based education on the University of South Florida campus. She has continued to support web-based education on the campus since its inception ten years ago. Her research interests involve distance education with a specialization in online and synchronous learning. She is currently writing her dissertation on synchronous online learning and has presented at many national conferences in the areas of teaching, technology, and distance education. Her publications are varied and include a book on electronic marketing as well as papers and presentations on instructional technology and distance education. The transactional distance expert has been affiliated with the ISD-Training and Development program at his university since 1995. He served as the director of the Training Systems Graduate Programs through 2001. Previously, he directed the Center for Teaching and Technology at Georgetown University, where he also worked as the Assistant Director of the Academic Computer Center. His chief research
interests are related to distance education and online learning. He is a prolific and widely published author of texts and journal articles on this topic. All experts are familiar with the use of content management systems such as Blackboard, WebCT, Desire2Learn, and Angel, to name a few, and have knowledge of how content management systems can be used to design and present courses.

Courses
Permission for use of 20 courses (10 structurally sound courses and 10 less structurally sound courses) was obtained. Courses obtained for use in the field study were online courses from an accredited college. An email (a sample can be found in Appendix A) was sent to college administrators in order to secure use of the institution's courses. The institution and all pertinent parties were informed that use of the instrument required no human interaction and that the IRB at USF determined that I did not need to file with them (a copy of their email response is included in Appendix F).
The 20 courses purposively selected for measuring the structure component were of two types. Ten courses were considered structurally sound and 10 were considered less structurally sound. By structurally sound I mean developed in conjunction with instructional designers or a design team, and run or piloted for at least one semester or quarter. All courses were offered by an accredited higher education institution via a learning management system. No courses used in this study were offered via an HTML/web format; however, the instrument was designed to accommodate this form of delivery as well. Permission for evaluation was obtained from the
institution. See Appendix A for a sample of the email that was sent to the appropriate parties in order to gain approval for use of the institution's online courses in the study.
Procedure
Survey Instrument Development
A review of the literature and of existing instruments was used in this study to assist in the writing of the items.

Instrument Review
After searching the literature for instruments that measured any part of transactional distance, a total of four instruments were found and reviewed. Any pertinent information that helped highlight aspects of the structure component of Moore's theory was identified and extracted for use in this study. None of the instruments described below provides a full compilation of the measures needed for evaluation of the structure component as it relates to transactional distance in an online environment.

Bischoff (1996).
The instrument designed by Bischoff (1996) includes minimal measures of the structure variable and was designed for student response. That study was designed to compare traditional instruction to the interactive television learning environment; therefore, its external validity for measuring online courses is compromised. In contrast, the SCET will measure solely the structure component of online courses. Seven questions were identified to be used in evaluating discriminant validity. Many of the questions chosen are worded for student response. For my purposes, I will use the questions from the viewpoint of the researcher, evaluating each item for presence. They are listed in Appendix K.
Chen (2001).
This study focused primarily on the interactivity variable as it relates to transactional distance by examining the postulates of Moore's theory. The study identified the dimensions constituting transactional distance and concluded that the concept represented multi-faceted ideas. When I wrote to Yau-Jane Chen for a copy of her instrument, she sent me the survey she used of learning experiences in videoconferences (PictureTel). She stated that the instrument used in the article I read is in Chinese, that she did not have a translation, and that she felt the survey she sent would suffice. Therefore, the obtained instrument measures an interaction variable, a level of flexibility, an autonomy variable, and student perception of the transactional distance students felt as it pertains to the interactivity variable. One area from Chen's instrument was used in evaluating discriminant validity: seven questions were taken from the part of the instrument measuring course flexibility. Because this instrument was designed for a videoconference class, many questions were not applicable. The questions are included in Appendix J.

Huang (2002).
Huang's study produced an instrument titled Student Perceptions of Online Courses. This instrument was divided into two sections. The first section consisted of demographic and general information. The second section evaluated student perception of online courses and contained four sub-sections: Interaction, Course Structure, Learner Autonomy, and Interface. The course
structure sub-section contains six items. These items are very generally worded. The overall idea presented with each item was considered for relevancy to my study. If the idea appeared relevant based on past experience and the literature review, it was extracted and detailed for use in my instrument. The six questions from the structure sub-section will be used in performing discriminant validity analyses. They are included in Appendix I.

Ingram (2002).
The purpose of Ingram's study was to find out how people find their way around instructional websites so as to make the sites easier and more effective to use. The study specifically examined the usability of online courses by asking the participants to perform a series of tasks in random order. The participants were given two questionnaires to complete in order to assess their satisfaction with using the course. The first asks students about their computer and web experience. The second asks students about their experience navigating and performing tasks in an instructional website. Mainly, the types of questions he posed made me consider issues students could experience while taking an online class with respect to its usability. I took this concept, usability, into consideration when developing the SCET. Seven questions from his instrument will be used in performing discriminant validity analyses. They are included in Appendix H.
Literature Review and Survey Development
This development phase entailed reviewing the literature with respect to structural issues as they relate to distance education. This process identified components that exist in defining structure. Through this review, the idea of developing categories in order to develop questions targeting the measurement of the structure component was conceived. The initial categories created included sequencing, presentation, planned interactions, and defined evaluations, and they were more concisely defined throughout the development process. After the basic categories were identified, criteria to include within each category were more easily recognized. Examples of such criteria include course objectives, deadlines/timelines (exam and assignment), contact information, etc. By design, use of the instrument requires an expert to examine a syllabus from each course being evaluated to assist in the analysis of the structure component and to extract any structure-relevant information. The expert also thoroughly reviewed the actual online course elements for structure components. All course information was evaluated using the created instrument.
53 researcher created three categories: Content Organization, Delivery Organization, and Course Interactions Organization. Within each one of these categories, sub-categories were creat ed. Overall, syllabus, sequencing, and course schedule are the sub-categories created within the C ontent Organization category. Overall, consistency, and fl exibility are the s ub-categories created within the Delivery Organization category. La stly, student to instructor, student to student, and student to interface were the sub-categories created within the Course Interactions Organization ca tegory. The initia l 53 items and 10 categories, after revisions, regroupings, and editing became a total of 50 items and 8 categories. Expert Evaluation During the last phase of development, the instrument was distributed to content experts for content validation. Each expert was told of the purpose of the study. The experts were provided cat egories and descriptors contained in the instrument. They were asked to sort the descriptors and align them with the category they felt provided the best fit. T hey were also asked to provide an item analysis as to the clarity and quality of each item once the compiled draft was completed (See Appendix L.) Once I receiv ed the sorted items from each expert, I incorporated their input into a draft in strument for each expert. Each category was examined for the number of descriptors that agree with the researchers predefined instrument. A mean of the number of items in agr eement was derived from the three expert drafts for each category. A 2/3 agreement rate (for each

PAGE 60

descriptor within each category) was desired for inclusion in a particular category (main and/or sub). For example, if 2 out of 3 experts agreed with the main and sub-category placement, the descriptor remained as is. If 2 out of 3 experts agreed with the main category only and agreed among themselves on the sub-category, the descriptor was moved to the sub-category agreed upon by 2/3 of the experts. If there was no agreement, or less than 2/3 agreement, the descriptor was considered for revision or was eliminated. Additionally, an overall agreement rate of 75% was desired. (The overall agreement rate is calculated from a descriptor having any form of agreement with the researcher's main and/or sub-category.) To further assist with descriptor placement, each expert was asked to rate, using a Semantic Differential scale, each descriptor's clarity and quality in describing and defining the category as it pertains to the structure component of an online course (detailed earlier). A mean score across all experts for each descriptor's clarity and quality (one score for each) with regard to a particular category was calculated. If the mean for a particular descriptor was below 2 (moderately clear or of moderate quality), the descriptor was reviewed for possible revision, new placement, or elimination. Refer to Appendix M for the item rating results.

Pilot Testing
The revised instrument was pilot tested by myself and a colleague/doctoral student to measure the structure of four online courses at an accredited university or college (2 structurally sound courses and 2 courses that
are less structurally sound). Success of the instrument was determined based on its ease of use and its ability to include all elements and sub-elements of structure. As the courses were evaluated, we noted how well the instrument related to the online course and the structure evaluation process. I solicited feedback from the expert who assisted with the pilot study regarding the performance of the instrument during the pilot testing. Additionally, the applicable parts of the four instruments from other researchers were used on the four courses in the pilot study by me and the expert to establish inter-rater reliability for use in the field study.
The instrument performed as expected. A duplicate descriptor was discovered and deleted, with no other revisions or modifications needed. The colleague/doctoral student rater found the instrument to be clear and easy to use.

Field Testing
Upon completion of the pilot test, the instrument was used to evaluate the structure of 20 online courses. The online courses were selected from the institution's total online offerings. In order to assess inter-rater reliability, the researcher and two other experts/practitioners in the field evaluated the courses. These experts were myself, one doctoral student from the IT area, and one experienced instructional designer in the field. Each expert evaluated 10 online courses (5 structurally sound courses and 5 less structurally sound courses). Because evaluating each course is a time-consuming process, the researcher evaluated all 20 of the courses using the developed instrument. To
evaluate the courses, the expert must first become familiar with the layout of the course, the navigation process, and how the content is organized, which can require a minimum of 30 minutes per course. Because of the length of time required, and because the experts were volunteers who did not have time to analyze 20 courses each, the researcher evaluated all 20 courses and comparisons were made to the 10 courses evaluated by each expert. Another notable point is that no expert other than the researcher knew the significance of the naming scheme used to identify the courses; only the researcher was aware of the IS/NIS categorization.
The researcher also evaluated the courses using parts of the four comparison instruments currently available in the literature. These evaluations gathered data for statistical tests of discriminant validity.
The instrument produced a score for each descriptor. A mean was computed for each category, and a total score was computed by summing the means from each category. All categories are weighted equally in the instrument; therefore, the highest possible score is 24 (8 categories with a mean of 3 for each category).
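A minimal sketch of this scoring rule follows. The category names match the instrument's eight rated categories, but the individual descriptor ratings are hypothetical values invented for illustration; the logic simply averages the 0-3 ratings within each category and sums the eight category means for a maximum possible total of 24.

```python
# Hypothetical SCET ratings for one course: each descriptor is scored 0-3
# (0 = not evident ... 3 = fully evident). The category names follow the
# instrument; the individual ratings here are invented for illustration.
course_ratings = {
    "Content Org: Overall":                    [3, 2, 3, 3, 2],
    "Content Org: Syllabus":                   [3, 3, 2, 3],
    "Content Org: Course Schedule":            [2, 3, 3],
    "Delivery Org: Overall":                   [3, 3, 2, 2],
    "Delivery Org: Consistency":               [3, 3, 3],
    "Delivery Org: Flexibility":               [2, 1, 2],
    "Course Int. Org: Student to Student":     [3, 2],
    "Course Int. Org: Student to Instructor":  [3, 3, 2],
}

# Mean per category, then a total formed by summing the eight category means.
category_means = {cat: sum(vals) / len(vals) for cat, vals in course_ratings.items()}
total_score = sum(category_means.values())   # maximum possible: 8 categories x 3 = 24

for cat, mean in category_means.items():
    print(f"{cat}: {mean:.2f}")
print(f"Total SCET score: {total_score:.2f} / 24 ({100 * total_score / 24:.0f}%)")
```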
Statistical Analysis
Validity of the Instrument
Because an instrument that measures solely the structure component of an online course had not been developed prior to this research study, the validity of the operationalization of this construct needs to be evaluated. Since the study purports to translate the construct of structure into a functioning, operating reality, it is imperative that the study evaluate how well the translation was done.

Construct Validity
Translation Validity
Translation validity refers to the way the construct was translated or operationalized, as evidenced by face and content validation (Trochim, 2000).

Face Validity.
Face validation addresses whether, on its face, the instrument appears to be an accurate translation of the construct. The face validity of this instrument was verified by the subject matter experts retained to participate in this study once consensus was reached regarding the categories and their items. The experts, using their own expert judgment, addressed whether the items and the categories included in the instrument were clear, of quality representation, and defining of the structure component of transactional distance as it pertains to online courses, and whether the instrument provided a logical tie between the items and the instrument's purpose of measuring the structure component of online courses.

Content Validity.
Content validation is based on the extent to which a measurement reflects the specific intended domain of content. For the researcher to accurately represent that which constitutes a relevant domain of content in order to measure the structure component of transactional distance, the expert opinion of three
researchers was solicited. Their task was first to sort the given items into the provided categories and then to evaluate each item for its clarity and its quality. Upon completion of the sort, the experts determined whether the proposed categories contained within the instrument were entirely representative of the structure construct and, if not, they provided recommendations. Additionally, once consensus was reached regarding the inclusion of items and their placement into categories, the experts assessed whether the items contained in each category were all descriptors that measure the structure component of online courses within that particular sub-category.

Criterion-related Validation.
Criterion-related validity seeks to check the performance of the operationalization. For example, convergent validity will show high correlations among the items in a given category, and discriminant validity will show low correlations with the other instruments discussed in this paper.

Convergent Validity.
In order to establish convergent validity, the instrument needs to show that items that should be related are in fact related. For instance, each item within a category that purports to measure that area of structure should exhibit high intercorrelations with the other items in that same category. We should see correlations for all item pairings to be very high. This will show that all items are converging on the same construct.
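As a rough illustration of this convergent-validity check, the sketch below computes the inter-item correlation matrix for one category across a set of courses. The rating matrix is hypothetical and invented for the example; with real SCET data, high pairwise correlations within a category would support the claim that its items converge on the same construct.

```python
import numpy as np

# Hypothetical ratings for one category: rows are courses, columns are the
# category's descriptors, each rated 0-3. Real data would come from the SCET.
ratings = np.array([
    [3, 3, 2, 3],
    [2, 2, 2, 3],
    [1, 1, 0, 1],
    [3, 2, 3, 3],
    [0, 1, 1, 0],
    [2, 3, 2, 2],
])

# Pairwise Pearson correlations between items (columns).
corr = np.corrcoef(ratings, rowvar=False)
print(np.round(corr, 2))

# Mean of the off-diagonal correlations: a simple summary of convergence.
n_items = corr.shape[0]
off_diag = corr[~np.eye(n_items, dtype=bool)]
print(f"Mean inter-item correlation: {off_diag.mean():.2f}")
```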
Discriminant Validity.
In order to establish discriminant validity, the study showed that the SCET was not related to, or differed from, the other four instruments described in this dissertation. The researcher evaluated all 20 courses using the applicable parts (listed as appendices) of the other four instruments discussed in the body of this paper in order to gather data for evaluating discriminant validity. Additionally, a doctoral student (the student who assisted with the pilot study) evaluated the four pilot courses using the four instruments in an effort to establish the reliability of my scores obtained from those instruments. Reliability was established with a 92% correlation between my scores and the doctoral student's scores on the Ingram instrument, a 94.6% correlation on the Huang instrument, 97% on the Bischoff instrument, and 100% on the Chen instrument. Once the reliability of my scores was established, my ratings of the 20 courses using the four instruments were justified for use in the calculations of discriminant validity in the field study.
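A hypothetical sketch of the discriminant-validity comparison is shown below. The score vectors are invented stand-ins for the 20 course evaluations, not the study's data; with real scores, low correlations between SCET totals and the comparison instruments' totals would support the claim that the SCET measures something the other instruments do not.

```python
import numpy as np

# Hypothetical total scores for 20 courses (values invented for illustration).
scet = np.array([22, 21, 18, 23, 20, 19, 22, 21, 14, 12,
                 11, 13, 15, 10, 12, 16, 13, 11, 14, 15])
comparison = {
    "Bischoff": np.array([5, 7, 6, 5, 7, 6, 5, 6, 7, 5,
                          6, 7, 5, 6, 7, 6, 5, 7, 6, 5]),
    "Chen":     np.array([4, 6, 5, 4, 6, 5, 6, 4, 5, 6,
                          4, 5, 6, 4, 5, 6, 4, 5, 6, 4]),
}

for name, scores in comparison.items():
    r = np.corrcoef(scet, scores)[0, 1]
    print(f"SCET vs {name}: r = {r:.2f}")   # low |r| suggests discriminant validity
```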
Reliability of the Instrument
Reliability is the extent to which an experiment, test, or any measuring procedure yields the same result on repeated trials (Trochim, 2000). It allows researchers to make claims about the generalizability of their research.

Internal Consistency.
Internal consistency is the extent to which tests or procedures assess the same characteristic, skill, or quality (Palmquist, 2004). Cronbach's alpha measures how well a set of items measures a single latent construct. A Cronbach's alpha was obtained for each category within the instrument, and a Cronbach's alpha was also computed for the entire instrument.

Inter-rater Reliability.
Inter-rater reliability measures the extent to which two or more raters agree (Palmquist, 2004). It addresses the consistency of the implementation of the rating system and depends upon the ability of two or more individuals to be consistent in their evaluations.
This study had a total of three subject matter experts (myself and two others) evaluating courses with the SCET. A detailed description of what the expert is to do when evaluating with the SCET was provided to improve inter-rater reliability. See Appendices C and D for an example of the email and the accompanying directions/tool that were sent to each expert evaluator. Additionally, the minimum qualifications of the experts were described above so as to have consistency among raters, thus increasing inter-rater reliability.
Each expert rated a total of 10 courses, 5 structurally sound and 5 less structurally sound; I rated all 20 courses. To determine inter-rater reliability as it pertains to using the instrument for measuring the structure component of online courses, each expert's responses along with my responses for each category were examined. Each category rated by an expert produced a
categorical mean. A correlation coefficient using the expert's responses and my responses was calculated for each pair of categorical means. The method of examination was an overlap, as depicted in the diagram below. For example, the categorical means of Rater A's responses to 5 structurally sound and 5 less structurally sound courses were correlated with my responses, and the categorical means of Rater B's responses were likewise correlated with my responses. (The overlap shown below was repeated twice for a total of 20 rated courses.)

[Diagram: Rater A evaluates one set of five courses and Rater B evaluates another set of five; Rater C (the researcher) evaluates all ten, overlapping both raters.]
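A minimal sketch of these two reliability computations follows. The rating matrix and categorical means are hypothetical values invented for the example; Cronbach's alpha is computed directly from the standard variance formula, and the inter-rater check correlates one rater's categorical means with mine across the courses that rater evaluated.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: courses x items matrix of 0-3 ratings for one category."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical ratings (courses x items) for one category, one rater.
category_ratings = np.array([
    [3, 3, 2, 3],
    [2, 2, 2, 2],
    [1, 0, 1, 1],
    [3, 2, 3, 3],
])
print(f"Cronbach's alpha: {cronbach_alpha(category_ratings):.3f}")

# Hypothetical categorical means for the 10 courses a rater shared with me.
rater_a_means = np.array([2.8, 2.5, 1.2, 2.9, 1.0, 2.4, 1.5, 2.7, 1.1, 2.2])
my_means      = np.array([2.9, 2.4, 1.4, 3.0, 0.9, 2.5, 1.3, 2.8, 1.2, 2.1])
r = np.corrcoef(rater_a_means, my_means)[0, 1]
print(f"Inter-rater correlation for this category: r = {r:.3f}")
```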
Chapter Summary
The methods of validation and reliability estimation described above ensure that sound survey construction principles were employed. Subject matter experts were used to determine the content validity of the instrument and to ensure that instructional design principles were followed as closely as possible. Their feedback was incorporated into revisions of the instrument until all experts were satisfied, providing a sound and accurate development of an instrument designed to measure the structure component of the transactional distance theory.
Pilot testing of the prototype instrument was conducted prior to the field research in order to provide a final verification of the instrument's functionality. The pilot testing was followed by a second item review and revision. Field testing using a sample of 20 online courses was then conducted, with each expert evaluating 10 online courses, each from an accredited higher-education institution. Statistical analyses of the field testing included convergent and discriminant validation as well as internal consistency and inter-rater reliability measures.
CHAPTER FOUR
EVIDENCE OF VALIDITY & RELIABILITY

The method utilized for the development and validation of the SCET for measuring the structure component of online courses was based on sound principles of survey construction. Specifically, the method used for the development of the SCET addressed the following research questions:
1. What specific components of an online course define the structure variable of the online course?
2. What is the content-related evidence that the designed measurement is a valid measure of the structure variable?
3. What is the estimated reliability of the designed measurement?
The process of survey development requires that design steps be followed in sequential order. Therefore, the research questions stated above will be answered in the order listed, and the evidence that they have been adequately addressed will be apparent throughout this chapter.

Item Development
The method used to determine the items that define structure for an online course addressed the first research question: What specific components of an online course define the structure variable of the online course?
Initially, a review of the existing literature identified components that exist in defining course structure. From that review and my experience with designing
and facilitating online courses, a total of 53 items were generated. After reviewing the items, ten categories emerged.

Item and Category Sort
Conducting the sort and the clarity/quality ratings addressed the second research question: What is the content-related evidence that the designed measurement is a valid measure of the structure variable?
Experts were recruited and were emailed a document whose purpose was to provide them with a means of sorting items into representative categories (Appendix G). The initial document did not contain columns for rating clarity and quality. For the first distribution, the experts only sorted the items by assigning them to one of the categories provided.

Sorting results.
The results of the sort were as follows: 18%, or 9 out of the 50 items, had a minimum of 2 of the 3 experts in agreement with my main category placement only. For these items my pre-derived sub-category was used, knowing that another level of expert evaluation was due to occur in which the experts would evaluate all item placements for clarity and quality. Further, 26%, or 13 out of the 50 items, had two of the three experts agreeing with each other on both a main and a sub-category placement. If I had a particular item in the agreed-upon main category but not the sub-category, I moved the item to the experts' agreed-upon sub-category; if I did not have that item in either the agreed-upon main or sub-category, the item was moved to the categories agreed upon by the experts. Finally, 56%, or 28 out of the 50 items, had a
minimum of 2 out of 3 experts in agreement with both the main and the sub-categories I had assigned, so no movement of these items was necessary. Overall, 88%, or 44 out of 50 items, contained some form of agreement with my initial placement of the main category and/or sub-category. As a result of the initial sorting process, 3 items and 2 categories were deleted. The items were deleted because there was no consensus at all among the experts, and the categories were eliminated because no items were placed in them after the sort. The deleted items were:
Each course unit/module clearly communicates where to submit assignments due.
Course unit/module provides a summary of the presented material.
Course provides directions on how to use all course tools.
The deleted categories were:
Course Organization: Sequencing
Course Interaction Organization: Student to Interface

Item Rating: Clarity and Quality
The next step in validating the instrument was for the experts to rate the items and their placement as to clarity and quality, using a Semantic Differential scale from zero (not evident) to three (fully evident). See Appendix L to view the document that was distributed. For those items that had a quality rating under two, the experts were asked to recommend whether to move the item, re-write the item, or discard the item. A detailed summary of each item's results for this rating can be found in Appendix M.
Item 5 of the Course Organization Overall category, "Course objectives are present," received a quality mean of 1.67. The lower score was due to a concern as to whether the descriptor should be placed in the Course Organization Overall category or the Course Organization Syllabus category. After discussing this concern with the experts, it was agreed that the item was better placed in the Course Organization Overall category.
Item 7 of the Course Organization Overall category, "Course provides detailed directions on how to submit each assignment or activity," received a quality mean of 1.33. The experts felt this descriptor was better located in the Course Organization Syllabus category, so it was moved.
Item 6 of the Course Organization Course Schedule category, "Course has a menu that remains constant as the student moves within the course," received a quality mean of 1.33. The experts felt this descriptor was better located in the Delivery Organization Consistency category, so the descriptor was moved.
Once these changes were made and the results were compiled, the working instrument for determining the structure of online courses was assembled (see Appendix N).

Pilot Study
A preliminary tryout of the instrument was conducted after all revisions by the three content experts had been made. After the pilot study was conducted, statistical analyses were run in order to address the third research question: What is the estimated reliability of the designed measurement?
To conduct the pilot study, four courses were selected for evaluation. I and one other doctoral student in the field evaluated the four courses using the SCET. Two of the courses were deemed structurally sound while two others were not, based on the criteria listed in Chapter Three. One non-structurally sound course (titled BSC1005I_NIS for purposes of this study) received overall scores from the raters of 14.4 and 15.6 out of a possible 24, or 60% and 65% respectively. The other course categorized as not structurally sound (titled CGS1100I_NIS for purposes of this study) produced much higher scores than expected, 22.2 and 22.8 (93-95%) respectively. Further investigation revealed that, although this course was not initially designed with the aid of an instructional designer, the instructor did attend a few training courses over the past year or so and re-developed parts of his course.
The two courses that were categorized as structurally sound did in fact prove to be structurally sound based on the overall scores. However, one course received an overall higher score than the other even though both were developed with the assistance of an instructional designer. One of the courses was true to expectations and resulted in scores of 21.75 and 21.23, or approximately 91% and 88% of the instrument's total score, respectively. The other course was designed a few years ago with the help of an instructional designer; however, upon further investigation it was discovered that the faculty member had eliminated a few items from the original design and made some other modifications. The scores for this course were 18 and 17.6, or 75% and 73% of the instrument's total score, respectively. These scores are not low enough to
deem this course not structurally sound (using an overall cut score of 50%, or 12/24), but they are lower than expected.
The pilot study showed that the provisional criteria for categorizing courses did not perform well. It appears that some courses may have been modified by faculty after their initial development and that other faculty may have done some independent study to educate themselves about effective instructional design. To address this issue in the field study, the provisional criterion for course placement was refined as follows: after courses were placed using the provisional criteria, the researcher performed an express expert review of each course's placement in consideration of broadly accepted instructional design standards. If a placement did not appear to be accurate, an appropriate placement was determined and a detailed explanation of the action taken would be given.
No other concerns with the manner in which the instrument performed were identified in the pilot study. The categories and descriptors functioned as intended and no clarifying changes were needed.

Statistical Analysis

Content-related Validation.
The purpose of a content validation procedure is to ensure that the items adequately represent the specific construct of interest (Crocker & Algina, 1986). The content-related evidence was acquired throughout the item categorization/sorting process and through the quality/clarity ratings by the experts. Overall, 88%, or 44 out of 50 items, contained some form of agreement
with my initial placement of the main category and/or sub-category. Additionally, once the descriptors were sorted for placement in categories, experts rated each descriptor's placement as to its clarity and quality within that particular category, and adjustments were made where necessary. Detailed results are provided in the appendices as mentioned above.

Estimates of Reliability.
The reliability of the instrument's use in the pilot study was estimated by computing a coefficient alpha for each category of items, to estimate the internal consistency of each rater's scores, and by computing a Pearson correlation coefficient between the raters at the category level across the courses. Additionally, at the item level, a Kappa coefficient was calculated for each of the 8 categories. See Table 1 below for a listing of these statistics. (Refer to Appendix G for the definition of each category and its associated acronym.)
Table 1
Pilot Study Statistics: Correlation Coefficients and Coefficients Alpha and Kappa by Category

Category                                  Pearson Correlation of Raters   Coefficient Alpha   Coefficient Kappa
Content Org: Overall                      0.99287                         0.995998            0.8152
Content Org: Syllabus                     0.97180                         0.980165            0.8876
Content Org: Course Schedule              0.94388                         0.949153            0.6629
Delivery Org: Overall                     0.95784                         0.978417            ----
Delivery Org: Consistency                 0.97714                         0.985782            0.9200
Delivery Org: Flexibility                 0.66967                         0.800457            0.7377
Course Int. Org: Student to Student       0.96397                         0.931818            ----
Course Int. Org: Student to Instructor    1.00000                         1.000000            1.0000
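As a companion to the item-level agreement statistic reported in Table 1, the sketch below shows one way a per-category Cohen's kappa could be computed from two raters' descriptor ratings. The rating vectors are hypothetical values invented for the example, and the calculation treats the 0-3 ratings as nominal agreement categories; it is not the study's SAS output.

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Unweighted Cohen's kappa for two equal-length lists of ratings."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    freq1, freq2 = Counter(rater1), Counter(rater2)
    # Chance (expected) agreement from each rater's marginal rating frequencies.
    expected = sum(freq1[c] * freq2[c] for c in set(rater1) | set(rater2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical item-level ratings (0-3) by two raters for one category,
# pooled across the courses they both evaluated.
rater_a = [3, 2, 3, 1, 0, 2, 3, 3, 1, 2, 0, 3]
rater_b = [3, 2, 3, 1, 1, 2, 3, 2, 1, 2, 0, 3]
print(f"Cohen's kappa: {cohen_kappa(rater_a, rater_b):.3f}")
```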
As evidenced by the coefficient alpha scores above, the internal consistency of each category shows high reliability (0.7 and above is usually considered acceptable) (Nunnally, 1978). Additionally, the Pearson correlation coefficients of the two raters show a high correlation between their rating values in nearly all categories (r = 0.7 or above). The Delivery Organization: Flexibility category is only slightly under the desirable value of 0.7, but the coefficient alpha for this category is above the 0.7 cutoff, supporting a high degree of internal consistency for this category.
Coefficient Kappa is reported above to assess the level of rater agreement. Caution must be exercised not to view these scores independently of the other coefficients reported. One use of Kappa is to quantify a level of agreement among raters that excludes the proportion of chance (or expected) agreement (i.e., as an effect-size measure). This interpretation is strictly appropriate only when the raters are statistically independent, a condition that in most cases, including this one, is not met. Therefore, viewing this statistic in conjunction with the others reported is highly advisable. In light of this, the reader can see that most categories enjoy a high level of agreement (above 0.7) among raters according to the Kappa statistic. Two of the categories, Delivery Organization: Overall (DOO) and Course Interaction Organization: Student to Student (CIOSS), did not produce a Kappa statistic because SAS computes Kappa only for tables that are symmetric. However, these two categories enjoy a high Pearson correlation and coefficient alpha. Additionally, the Course Organization: Course Schedule (COCS) category produced a Kappa statistic of 0.6629, a minimal amount under
72 the desired level of agreem ent of 0.7. However, upon examining the correlation of the two raters, 0.94, and the internal consistency of the category, 0.95, the reliability of this category holds. Field Study The field study was conducted and statistical analyses were run in order to show that the instrument can distinguish a structurally sound course from one that is less structurally sound, and to demonstrate that the instrument developed for this study differs from any others found in current research. The field test was conducted using a sample of 20 courses from an accredited Community College. The sele cted courses represented varying departments and genres of courses availa ble. They were divided as to structurally sound and not-struc turally sound using the prev iously stated criteria. I verified the course placement using an express expert review as stated above. As a result, 10 courses were placed in the structurally sound category and 10 courses were placed into the non-structur ally sound category. My expert express review did not identify any placement concerns. The courses were further randomly divided for distribution to t he experts for evaluation. Each expert received access to 5 structurally s ound courses and 5 not structurally sound courses for a total of 10 total courses to evaluate. The SCET contains a total of 8 categories and 8 sub-categories. Each sub-category contains a varying number of descriptors that are to be rated using a Semantic Differential scale from 0 to 3 where 0 is not evident, 1 is minimally


evident, 2 is moderately evident, and 3 is fully evident. To determine a value for each descriptor, the expert must first thoroughly review each course, both from the viewpoint of an instructor and from the viewpoint of a student, prior to using the instrument for evaluation purposes. Once the experts had familiarized themselves with a course, they began using the instrument as part of the evaluation process.

Field test administration was conducted via email. Two experts were recruited to participate in the field study. Each expert was told the purpose of the study, and their role was communicated so they understood the commitment required. The two experts who were chosen, and who agreed to evaluate 10 courses each (5 structurally sound and 5 not structurally sound) using the SCET to measure the structure component of the sample courses, were emailed the URL and login instructions, complete with a user ID and password for accessing the courses. The email also contained an attachment of the instrument and the abstract of the study. The experts were asked to respond when they had received the email with an estimated time of completion. It was also communicated to them that they could submit their results via email as attachments and that they should notify me of any questions or concerns. Refer to Table 2 for the list of courses assigned to each particular rater. The researcher (rater 3) rated all 20 courses.


Table 2
Course Listing by Rater

Rater 1 Courses                        Rater 2 Courses
Micro Comp Apps 1100C_IS               Micro Comp Apps 1100_IS
Comp Concepts 1000_IS                  Adv. Micro Comp Apps 2108_IS
Intro to Comp Prog. 1000_IS            Intro to Psych 1012_IS
Educational Tech 2040_IS               Intro to Public Speaking 2600_IS
Intro to Internet Res. 2004_IS         Intro to Sociology 2000_IS
LifeSpan Dev. 2004_NIS                 Intro to Education 1005_NIS
British Literature 2012_NIS            Medical Terminology 2531_NIS
Intro to Biology 1005_NIS              Composition II_1102_NIS
Intro to Psychology 1012_NIS           Drug Calculations_NIS
Intro to Statistics 2023_NIS           Intro to Networking 2263_NIS


In addition, before the experts made their final submission of results, they were asked to review each completed instrument to ensure that the proper course title and the rater name were included, and to verify that they had marked a rating for each descriptor in each category.

Item Analysis of Field Test Results

Statistical analysis of the field test results was conducted to produce estimates of reliability and discriminant validity and to determine, based on the overall total score, how well the SCET distinguishes a structurally sound course from one that is not.

Estimates of Reliability. Reliability estimates of the field study results were computed using two methods in order to address the third research question: What is the estimated reliability of the designed measurement? First, the internal consistency of the instrument was computed using the Cronbach's alpha statistic for each rater, where the categorical means were compared to those of rater 3 (the researcher). Cronbach's alpha was also computed on the overall scores for raters 1 and 2 as compared to the researcher and yielded a coefficient alpha of .989 (raw and standardized) for rater 1 and a coefficient alpha of .987 (raw) and .994 (standardized) for rater 2. Also, a kappa statistic was computed for each item in each category to verify inter-rater reliability. Each rater was compared with the third rater (the researcher). The delivery organization category shows some lower kappa values than would be preferred. However, the correlation statistics are desirable. Due to the lower inter-rater reliability in this


category, training or communication for raters using the instrument is needed to detail the meaning and purpose of each descriptor. Tables 3 and 4 detail the results.
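To make these computations concrete, the following is a minimal sketch, in Python rather than the SAS procedures actually used in this study, of the three kinds of statistics reported in Tables 3 and 4: the Pearson correlation between two raters, a coefficient alpha that treats the raters as items, and a simple (unweighted) Cohen's kappa. The rater arrays and values are hypothetical, and the alpha formulation shown is only one common way of expressing inter-rater consistency.

```python
# Illustrative reliability statistics for two raters' descriptor ratings
# on the 0-3 SCET scale. Ratings below are hypothetical.
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two raters' scores."""
    return np.corrcoef(x, y)[0, 1]

def cronbach_alpha(ratings):
    """Cronbach's alpha; `ratings` is courses/items (rows) x raters (columns)."""
    ratings = np.asarray(ratings, dtype=float)
    item_vars = ratings.var(axis=0, ddof=1)        # variance of each rater column
    total_var = ratings.sum(axis=1).var(ddof=1)    # variance of the summed scores
    k = ratings.shape[1]
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def cohen_kappa(r1, r2, categories=(0, 1, 2, 3)):
    """Simple (unweighted) Cohen's kappa for two raters on a categorical scale."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_o = np.mean(r1 == r2)                        # observed agreement
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c)  # chance agreement from marginals
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical descriptor ratings by two raters for one sub-category.
rater_a = [3, 2, 3, 1, 0, 2, 3, 3]
rater_b = [3, 2, 2, 1, 0, 2, 3, 2]
print(pearson_r(rater_a, rater_b))
print(cronbach_alpha(np.column_stack([rater_a, rater_b])))
print(cohen_kappa(rater_a, rater_b))
```

The weighted kappa values reported in the tables additionally give partial credit for near agreement; the simple form above counts exact matches only.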


Table 3
Correlation Coefficients and Coefficients Alpha and Kappa by Category (Rater 1 x Rater 3)

Category                                 Corr. (Cat. Mean)  Corr. (Item)  Alpha (Raw)  Alpha (Std.)  Kappa (Simple)  Kappa (Weighted)
Content Org: Overall                          .92035           .79305       .955404      .958525         .5581            .6978
Content Org: Syllabus                         .90994           .78968       .935013      .952849         .6141            .7210
Content Org: Course Schedule                  .75419           .70330       .853900      .859876         .5300            .7900
Delivery Org: Overall                         .75329           .90225       .859284      .859289         .6276            .7967
Delivery Org: Consistency                     .91530           .81475       .954337      .955777         .6654            .7451
Delivery Org: Flexibility                     .89616           .79769       .940172      .945237         .5711            .7022
Course Int. Org: Student to Student           .92422           .89706       .958196      .960619         .6296            .8062
Course Int. Org: Student to Instructor        .90757           .81367       .945728      .951548         .4834            .6797


Table 4
Correlation Coefficients and Coefficients Alpha and Kappa by Category (Rater 2 x Rater 3)

Category                                 Corr. (Cat. Mean)  Corr. (Item)  Alpha (Raw)  Alpha (Std.)  Kappa (Simple)  Kappa (Weighted)
Content Org: Overall                          .94515           .79686       .943108      .971803         .4970            .6540
Content Org: Syllabus                         .90390           .81138       .847673      .949523         .5519            .7048
Content Org: Course Schedule                  .97465           .88523      1.00000       .987163         .7961            .8370
Delivery Org: Overall                         .86723           .94819       .941112      .928893         .8694            .9164
Delivery Org: Consistency                     .94755           .72052       .966589      .973070         .5100            .6600
Delivery Org: Flexibility                     .87720           .67429       .882893      .934584         .4012            .5452
Course Int. Org: Student to Student           .98342           .95935       .971223      .991638         .8393            .9086
Course Int. Org: Student to Instructor        .91924           .87638       .929124      .957920         .5078            .6986


Overall Scores. Each evaluated course produces a composite score that is determined by summing the means of each category. The total possible score is 24 (8 categories times the highest possible mean score from each category, 3). Table 5 below details the computed score per course by rater (the researcher is referred to as Rater 3 throughout the remainder of the dissertation).


Table 5
Raters' Total Scores for Each Course

Course                               Rater 1   Rater 2   Rater 3
Micro Comp Apps 1100C_IS              17.81      --       17.22
Comp Concepts 1000_IS                 17.84      --       17.60
Intro to Comp Prog. 1000_IS           14.78      --       17.10
Educational Tech 2040_IS              21.24      --       21.69
Intro to Internet Res. 2004_IS        20.44      --       19.94
Micro Comp Apps 1100_IS                --       17.67     18.37
Adv. Micro Comp Apps 2108_IS           --       13.39     14.88
Intro to Psych 1012_IS                 --       17.44     17.95
Intro to Public Speaking 2600_IS       --       21.35     21.54
Intro to Sociology 2000_IS             --       17.00     17.90
LifeSpan Dev. 2004_NIS                11.22      --        8.73
British Literature 2012_NIS            4.17      --        4.39
Intro to Biology 1005_NIS              7.28      --        8.58
Intro to Psychology 1012_NIS           8.00      --        8.37
Intro to Statistics 2023_NIS           7.46      --        7.15
Intro to Education 1005_NIS            --        7.33      8.15
Medical Terminology 2531_NIS           --       13.37     12.55
Composition II_1102_NIS                --       10.30      9.86
Drug Calculations_NIS                  --        8.78      8.19
Intro to Networking 2263_NIS           --        8.51      8.97


Specifically, the mean scores of each rater for each type of course, IS (structurally sound) and NIS (not structurally sound), were 18.42 and 7.63 for rater 1 and 17.37 and 9.66 for rater 2, respectively. The researcher's mean scores were 18.42 for IS courses and 8.50 for NIS courses.

Additionally, percent scores for each course, and mean percent scores per rater for each type of course, were computed to assist in determining how well the instrument delineated a structurally sound course from a not structurally sound course. These percent values are shown in Table 6 below.


Table 6
Percent Scores for Each Course

Course                               Rater 1   Rater 2   Rater 3
Micro Comp Apps 1100C_IS               74%       --        72%
Comp Concepts 1000_IS                  74%       --        73%
Intro to Comp Prog. 1000_IS            62%       --        71%
Educational Tech 2040_IS               89%       --        90%
Intro to Internet Res. 2004_IS         85%       --        83%
Micro Comp Apps 1100_IS                --        74%       77%
Adv. Micro Comp Apps 2108_IS           --        56%       62%
Intro to Psych 1012_IS                 --        73%       75%
Intro to Public Speaking 2600_IS       --        89%       90%
Intro to Sociology 2000_IS             --        71%       75%
LifeSpan Dev. 2004_NIS                 47%       --        36%
British Literature 2012_NIS            17%       --        18%
Intro to Biology 1005_NIS              30%       --        36%
Intro to Psychology 1012_NIS           33%       --        35%
Intro to Statistics 2023_NIS           31%       --        30%
Intro to Education 1005_NIS            --        31%       34%
Medical Terminology 2531_NIS           --        56%       52%
Composition II_1102_NIS                --        43%       41%
Drug Calculations_NIS                  --        37%       34%
Intro to Networking 2263_NIS           --        35%       37%
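To make the scoring procedure concrete, the following is a minimal sketch of how the composite score in Table 5 (the sum of the eight sub-category means, out of a possible 24) and the percent score in Table 6 (the composite divided by the total possible score, multiplied by 100) can be computed from descriptor ratings. The category labels follow the SCET sub-categories, but the ratings themselves are hypothetical and do not correspond to any of the evaluated courses.

```python
# Illustrative SCET course scoring: composite (max 24) and percent scores.
import numpy as np

def score_course(ratings_by_category, max_rating=3):
    """Return (composite, percent) from a mapping of sub-category -> descriptor ratings."""
    category_means = {cat: np.mean(vals) for cat, vals in ratings_by_category.items()}
    composite = sum(category_means.values())                  # maximum = 8 sub-categories x 3
    percent = 100 * composite / (len(category_means) * max_rating)
    return composite, percent

# Hypothetical ratings; descriptor counts per sub-category follow the 47-item SCET.
course = {
    "Content Org: Overall":         [3, 2, 3, 3, 2, 3, 3],
    "Content Org: Syllabus":        [3, 3, 2, 3, 3, 2, 3, 3, 2, 3, 2, 3, 3],
    "Content Org: Course Schedule": [3, 3, 2, 1, 3],
    "Delivery Org: Overall":        [3, 2, 1, 2],
    "Delivery Org: Consistency":    [3, 3, 3, 3, 2, 3],
    "Delivery Org: Flexibility":    [3, 2, 3, 1, 2, 2, 1, 2],
    "Course Int.: Student-Student": [2, 3],
    "Course Int.: Student-Instr.":  [3, 2],
}
composite, percent = score_course(course)
print(round(composite, 2), f"{percent:.0f}%")
```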


Discriminant Validity. For the purpose of establishing inter-rater reliability on the other four instruments mentioned in this study, during the pilot study the doctoral student rated the four pilot courses using my instrument and the four other partial instruments: Bischoff's instrument, Chen's instrument, Huang's instrument, and Ingram's instrument. The correlations of the researcher's ratings with the doctoral student's ratings are as follows: Bischoff's instrument produced a .970 correlation, Chen's instrument produced a 1.00 correlation, Huang's instrument produced a .946 correlation, and Ingram's instrument produced a .924 correlation. Having established inter-rater reliability, the researcher alone rated all 20 courses using each of the other four instruments. Correlation statistics were computed to determine the degree of difference, if any, between the SCET and the parts of the other instruments. The correlations with the SCET are as follows: Bischoff's partial instrument produced a .156 correlation, Chen's partial instrument produced a .010 correlation, Huang's partial instrument produced a .273 correlation, and Ingram's partial instrument produced a .711 correlation.

None of the other instruments compare well with the SCET. Ingram's instrument moderately compares to the SCET as a whole but does not compare in distinguishing structurally sound courses from those that are not. This difference is by design, as the purpose of Ingram's instrument is web site development, not instructional course design. This difference is clearly visible upon examination of the instrument content (see Appendix H).


To further examine the reason for the correlation of Ingram's instrument with the SCET, correlations were calculated comparing each sub-category with the total score of Ingram's partial instrument. The a priori prediction was that Ingram's instrument would correlate more highly with the SCET's Delivery Organization sub-categories (i.e., Delivery Organization Consistency, Delivery Organization Overall, and Delivery Organization Flexibility). The results appear in Table 7. The largest correlation does appear in the Delivery Organization category, specifically the flexibility sub-category. There is not enough of a difference between the other SCET categories and Ingram's to support any clear conclusion regarding particular correlations with the SCET's sub-categories. This could be due in part to the fact that Ingram's instrument measures the quality of web sites and the SCET measures the quality of online courses. However, Ingram's partial instrument does not detail any of the components of instructional design for online courses as identified by the SCET and is not specific to any type of site, such as an instructional one; it simply addresses any form of web site in any context.

To demonstrate that the SCET distinguishes instructionally sound courses from courses that are not instructionally sound, effect sizes were computed for all of the categories, and an effect size was also computed using the results of Ingram's partial instrument, to show that it does not distinguish the level of instructional quality as well as the SCET. Refer to Table 8 for the computed effect sizes. As is shown, every category of the SCET has a larger effect size than the overall effect size of Ingram's instrument, showing that the SCET does in fact distinguish instructionally sound courses from those that


are not. The effect size of Ingram's instrument (1.57), although it might be considered robust under most circumstances, is not anywhere close to the effect sizes present with the use of the SCET. Additionally, the overall effect size of the SCET is 4.8.
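As an illustration only, the following minimal sketch assumes a standardized mean difference (Cohen's d with a pooled standard deviation), which may differ from the effect-size formula actually used for Table 8. The input lists are the assigned raters' course totals from Table 5, rounded to one decimal place, so the value printed will not necessarily reproduce the 4.8 reported above.

```python
# Illustrative effect size (Cohen's d, pooled SD) between IS and NIS courses.
import numpy as np

def cohens_d(group_a, group_b):
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) \
                 / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Assigned raters' totals from Table 5, rounded to one decimal place.
is_scores  = [17.8, 17.8, 14.8, 21.2, 20.4, 17.7, 13.4, 17.4, 21.4, 17.0]
nis_scores = [11.2,  4.2,  7.3,  8.0,  7.5,  7.3, 13.4, 10.3,  8.8,  8.5]
print(round(cohens_d(is_scores, nis_scores), 2))
```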


Table 7
Correlation of Sub-Categories with Ingram's Instrument

Category                                                   Correlation
Content Organization: Overall                                0.55475
Content Organization: Syllabus                               0.45774
Content Organization: Course Schedule                        0.64798
Delivery Organization: Overall                               0.67179
Delivery Organization: Consistency                           0.54134
Delivery Organization: Flexibility                           0.76358
Course Interaction Organization: Student to Instructor       0.59785
Course Interaction Organization: Student to Student          0.62013


Table 8
Effect Sizes of All Categories and of Ingram's Instrument

Category                                                   Effect Size
Content Organization: Overall                                 3.39
Content Organization: Syllabus                                2.35
Content Organization: Course Schedule                         3.71
Delivery Organization: Overall                                3.00
Delivery Organization: Consistency                            1.89
Delivery Organization: Flexibility                            2.20
Course Interaction Organization: Student to Instructor        2.43
Course Interaction Organization: Student to Student           1.84
SCET (overall)                                                4.80
Ingram's Instrument                                           1.57


Chapter Summary

The 20 online courses that were evaluated in the field study were selected based on availability. From those that were available, courses were deliberately chosen to ensure an equal number of structurally sound and not structurally sound courses. These selections were based on predetermined criteria that were revised, following findings of the pilot study, to include an express expert review.

The content-related validity of the instrument was established through both judgmental and empirical data analysis. Three subject matter experts with varying backgrounds in online course design and instruction were used in the validation process. Additionally, statistical analyses of pilot and field test data for estimates of reliability, overall scores, and discriminant validity were conducted.


CHAPTER FIVE

DISCUSSION

Online course design is occurring in institutions across all parts of the world at an exponential rate. The purpose of this project was to develop a valid and reliable instrument that measures the structure component of online courses. The method employed for the development of this instrument was based on sound survey and instructional design procedures. The 47-item instrument, called the Structure Component Evaluation Tool (SCET), can be used by instructional designers as a course development guide that can be shared with faculty and as a formative and summative measure of how well the structure of a course is defined.

This chapter is organized around the research questions asked and serves to summarize the methods and results that have led to the development of the SCET in its present form. Specifically, a summary of the method used for the development of the instrument will be reviewed, and the evidence that the SCET is a valid measure of the structure variable will be addressed. Next, the results of the procedure used for establishing the reliability of the instrument will be discussed, including issues that arose during pilot testing. Finally, suggestions for refinements to the instrument will be proposed, usability issues will be examined, and possible future implications for the SCET will be discussed.


Instrument Development and Evidence of Validity

The method used to develop the instrument and the content-related evidence that the SCET is a valid measure were addressed by the first and second research questions:

What specific components of an online course define the structure variable of the online course?

What is the content-related evidence that the designed measurement is a valid measure of the structure variable?

To develop any instrument that measures the product of a process, the tasks that make up that process must be reviewed. This required that instructional systems design processes be analyzed for best practices in curriculum design and that a review of all tasks necessary for designing an online course be conducted. The researcher, using her experience in designing online courses, began reviewing the tasks by breaking down the online instructional design process she routinely follows. Additionally, a review of various systematic instructional design methods was conducted to ensure that all aspects of the instructional design process were considered and that all parts needed to create an instructionally sound course were identified.

After a draft instrument was compiled, the researcher recruited three subject matter experts to assist in validating the instrument. The experts were asked to sort the provided descriptors into categories that were given to them. Once these results were compiled, another draft was sent to the experts, asking them to rate the placement of each descriptor as to its quality and


clarity. The results of the experts' efforts produced a valid instrument to be used in the pilot study.

Pilot testing of the instrument was then conducted using a sample of 4 courses and one doctoral student in the field. The four courses were divided into categories of structurally sound and not structurally sound for the purpose of showing whether the developed instrument could delineate a structurally sound course from one that was not. Statistical analyses were computed; specifically, analyses to determine inter-rater reliability, internal reliability, and the overall scores of each course resulting from the use of the instrument. Additionally, this tryout allowed the researcher to use the instrument herself on courses, to discuss usability issues of the instrument with another colleague in the field, and to receive feedback for possible improvements to the instrument. Only one repetitive descriptor was identified during the pilot study, and a revision to the instrument was made. The pilot study also identified possible issues with the manner in which courses were being segregated into structurally sound and not structurally sound categories. Adjustments to how the courses would be placed were determined prior to the commencement of the field study. Also during the pilot study, the doctoral student evaluated the other four partial instruments identified in this study, and statistics were computed to establish inter-rater reliability with the researcher. Specifically, correlational statistics between the colleague's and the researcher's ratings using the other instruments were computed.


Field testing of the instrument using a sample of 20 online courses from an accredited institution was then conducted. Statistical analyses of these results were computed. In particular, analyses to determine inter-rater reliability between each rater and the researcher, the internal reliability of the instrument, and comparisons of the overall scores of each course resulting from the use of the instrument were calculated to determine whether or not the instrument could accurately and reliably distinguish a structurally sound course from one that is not structurally sound.

Finally, the researcher used the other four partial instruments on each of the 20 courses to determine whether each instrument measured a similar construct. Inter-rater reliability for the researcher's use of these instruments had been established and verified as part of the pilot study. Three of the partial instruments returned very small correlations, indicating they are not similar to the SCET. Ingram's partial instrument returned a correlation of 0.71. Upon examination of Ingram's instrument, it was concluded that the reason for the moderate correlation is that this instrument measures the usability of web sites. The instrument contains questions that pertain to the clarity of web site development. Ingram wrote questions that pertain to the organization of a site and how one navigates through a particular site. Because the courses that were analyzed are all online courses utilizing the web, there may be some similarities due to site navigation and the overall layout of the course. Additionally, as shown by the effect sizes, Ingram's instrument does not distinguish a structurally sound course from one that is not structurally sound as the SCET does.


Establishing Reliability

The methods used to establish the reliability of the SCET addressed the third research question: What is the estimated reliability of the designed measurement?

Internal and Inter-rater Reliability

Cronbach's alpha statistics were calculated for each category of the SCET by comparing each rater's categorical means, and for the overall internal consistency of the SCET by comparing total scores. The total scores were computed by summing the mean of each category. No alpha was below .85 for any category, and the overall alpha was .98.

Lower kappas were found for the Delivery Organization category. Upon examination of the Item Rating Results (Appendix M), no problems with the quality or clarity of the descriptors in any area of the Delivery Organization category are apparent.


Issues and Recommendations

Course Placement Issue

The course placement issue that emerged as a result of the pilot study will now be discussed. After results were received from the pilot study, overall scores for each course were computed as part of the evaluation process. One of the courses that was thought not to be structurally sound scored high using the SCET. The researcher, upon examination of the course, noted that the course in question appeared, on its face, to have been re-designed with the assistance of an instructional designer or someone with similar knowledge. After speaking with the facilitator of the course, it was learned that the instructor had recently been educated in sound instructional design processes and had made significant changes to his course. Upon learning of these sorts of possibilities, and having no means to control for such variables (i.e., continual changes and updates to courses, and the fact that the current facilitator of a course may not be the original facilitator, so numerous changes could have taken place since initial development), the researcher decided to augment the placement algorithm by performing an express expert review of each course's placement in consideration of broadly accepted instructional design standards.

Usability Issues

To effectively use the Structure Component Evaluation Tool to assess an online course, the evaluator needs to become familiar with both the designer's and the student's view of the course. This can take a considerable amount of time. Another alternative use of the instrument is to allow it to guide


the development of an online course. The instrument was designed and developed with sound instructional design processes in mind; therefore, using it as a guide would be appropriate. By doing so, the course designer can ensure that many important pieces of an online course are included, and by using the instrument, re-design time could be reduced. One caveat: although this instrument can be utilized as a development guide by those not formally educated in the area of instructional design, the instrument was developed for persons with this background.

Recommendations

Instrument Refinement. A suggested refinement for the SCET is to provide a means of denoting the applicability of a particular descriptor for a particular course. For example, some courses may not contain any video or audio components. This would not necessarily mean that the course is not structurally sound, but presently the only way to denote the absence of such a component is to enter a 0 for that particular descriptor, thus lowering the course's overall score. (A sketch of how such a not-applicable marking might be scored appears after the Potential Uses discussion below.) Percent scores for each course were computed by dividing the total score by the total possible score and multiplying by 100. Therefore, if a course receives a lower score as a result of the Structure Component Evaluation Tool (50% or below), the evaluator will need to perform a cursory review of the values for each descriptor to determine whether or not revisions to the course need to be made. Also, in an effort to increase inter-rater reliability in the Delivery Organization category, I would recommend surveying those who have used the instrument as to how


each descriptor was interpreted. Once the results of this survey are received, some clarification of the descriptors may be appropriate.

Additionally, caution needs to be exercised when evaluating the flexibility component of online courses with the SCET. The DOF category reported lower kappa scores, indicating that the interpretation of its descriptors was somewhat subjective. Defining what is meant by flexible or adaptable learning routes and learner control prior to using the SCET is desirable. This again emphasizes the importance of using the SCET along with the expert help of an instructional designer when structuring an online course.

Potential Uses. Student performance measures collected from an online course may be analyzed in relation to its course structure. For example, the researcher can examine student satisfaction, student success, time spent on task, and similar measures in relation to the score the online course received from the SCET. I would expect student success and course structure, as measured by the score received on the SCET, to be directly proportional.

Another potential use for the SCET would be to perform causal-comparative research, especially as it relates to Michael Moore's theory of transactional distance. Currently, a measure would need to be identified for the dialogue component of online courses, but once that piece has been developed, studies may be conducted to determine the relationships among all three of the variables found in Moore's theory.
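Returning to the instrument refinement suggested above, the following is a minimal sketch of how a not-applicable marking could be scored so that a genuinely absent component (for example, a course with no audio or video) does not pull down the category mean. The use of None as the marker and the ratings shown are illustrative assumptions, not part of the SCET as validated in this study.

```python
# Illustrative handling of "not applicable" descriptors (marked None) so they
# are excluded from the category mean instead of being scored 0.
import numpy as np

def category_mean(ratings):
    """Mean of a category's descriptor ratings, ignoring not-applicable entries."""
    applicable = [r for r in ratings if r is not None]
    return np.mean(applicable) if applicable else None   # whole category may be N/A

flexibility = [3, 2, None, 1, 2, None, 1, 2]   # e.g., no audio/video in this course
print(category_mean(flexibility))              # about 1.83, versus 1.38 if zeros were entered
```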


Future Research. In an effort to increase external validity, it is recommended that additional instructional designers perform analyses on other online courses to determine the value and robustness of the SCET under varying conditions.

Conclusion

The Structure Component Evaluation Tool is an instrument containing 8 categories and 8 sub-categories made up of 47 descriptors. It can be used by instructional designers as a tool for measuring the structure component of online courses and may also be used as a guide for developing and designing online courses. A course scoring below 50% (less than 12 of 24) can be considered not structurally sound, and a course scoring 51% and above can be considered structurally sound. Caution must be taken when evaluating a course solely on the overall score produced by the Structure Component Evaluation Tool. The overall score should serve as a red flag to the designer that the course needs a more in-depth review. Each categorical score must be reviewed to determine where the discrepancy may be occurring in a non-structurally sound course in order to evaluate the overall significance of the lower rating produced by the instrument.

The researcher is confident that the instrument development processes described in this paper have provided evidence of initial validation and reliability and that sound instrument development procedures have been followed.


REFERENCES

Biggs, J. (1999). Teaching for quality learning at university. Buckingham: Open University Press/Society for Research in Higher Education.

Bischoff, W. R., Bisconer, S. W., Kooker, B. M., & Woods, L. C. (1996). Transactional distance and interactive television in the distance education of health professionals. The American Journal of Distance Education, 10(3). Retrieved March 4, 2003, from http://www.csusm.edu/ilast/vcyear3/transactional/Bischoff.htm

Boyd, R., & Apps, J. (1980). Redefining the discipline of adult education. San Francisco: Jossey-Bass.

Bragg, N. (1998). Designing your syllabus to promote student learning. Retrieved March 31, 2003, from http://www.cat.ilstu.edu/pdf/catdec98.pdf

Carr, A. M., & Chad, S. (2000). Integrating instructional design in distance education. Retrieved March 31, 2003, from http://ide.ed.psu.edu/idde/

Carr-Chellman, A., & Duchastel, P. (2000). The ideal online course. British Journal of Educational Technology, 31(3), 229-241.

Chen, Y. (2001). Dimensions of transactional distance in the World Wide Web learning environment: A factor analysis. British Journal of Educational Technology, 32(4), 459-470.

Crocker, L., & Algina, J. (1986). Introduction to classical and modern test theory. New York: Harcourt Brace Jovanovich College.


Keegan, D. (Ed.). (1993). Theoretical principles of distance education (pp. 22-38). London and New York: Routledge.

Gagné, R. M. (1985). The conditions of learning and theory of instruction. New York: Holt, Rinehart & Winston.

Goetsch, L. A., & Kaufman, P. T. (1998). Readin', writin', arithmetic, and information competency: Adding a basic skills component to a university's curriculum. Campus-Wide Information Systems, 15(5), 158-163.

HRDE: Human Resource Development Enterprises. Retrieved March 31, 2003, from http://www.hrdenterprises.com/inventory.html

Hall, R. (2002). Aligning learning, teaching and assessment using the web: An evaluation of pedagogic approaches. British Journal of Educational Technology, 33(2), 149-158.

Huang, H. (2002). Student perceptions in an online mediated environment. International Journal of Media, 29(4), 405-422.

Illinois Online Network and the Board of Trustees of the University of Illinois. (2003). Key elements of an online program. Retrieved November 10, 2003, from http://www.ion.illinois.edu/IONresources/onlineLearning/elements.asp

Ingram, A. L. (2002). Usability of alternative web course structures. Computers in the Schools, 19(3/4), 33-47.


Jeris, L., & Poppie, A. (2002). Screen to screen: A study of designer/instructor beliefs and actions in Internet-based courses. Paper presented May 24-26, 2002, at the annual meeting of the Adult Educational Research Conference, Raleigh, NC.

Jung, I. (2001). Building a theoretical framework of web-based instruction in the context of distance education. British Journal of Educational Technology, 32(5), 525-534.

Kanuka, H., Collett, D., & Caswell, C. (2002). University instructor perceptions of the use of asynchronous text-based discussion in distance courses. American Journal of Distance Education, 16(3), 151-167.

Kearsley, G., & Lynch, W. (1996). Structural issues in distance education. Journal of Education for Business, 71, 191-195.

Lorenzetti, J. P. (2002). Practical course assessment standards from MVU. Distance Education Report, 2-4.

Malone, B., Malm, L., Loren, D., Nay, F., Oliver, Saunders, N., & Thompson, J. (1997). Observation of instruction via distance learning: The need for a new evaluation paradigm. Paper presented October 1997 at the annual meeting of the Mid-Western Educational Research Association, Chicago, IL.

McAlpine, H., Lockerbie, L., Ramsay, D., & Beaman, S. (2002). Evaluating a web-based graduate level nursing ethics course: Thumbs up or thumbs down? The Journal of Continuing Education in Nursing, 33(1), 12-18.


Moore, M. G. (1996). Theory of transactional distance. Retrieved March 4, 2003, from http://www.jou.ufl.edu/faculty/mleslie/spring96/moore.html

Moore, M. G., & Kearsley, G. (1996). Theory of transactional distance. In D. Keegan (Ed.), Theoretical principles of distance education (pp. 22-38). New York: Routledge.

Moore, M. G., & Kearsley, G. (1996). Distance education: A systems view. Belmont, CA: Wadsworth.

Muller, C. (2003). Transactional distance. Retrieved March 4, 2003, from http://tecfa.unige.ch/staf/staf9698/mullerc/3/transact.html

Nielsen, J. (1993). Usability engineering. New York: Academic Press.

Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.

North Carolina State University Computing Services. (1998). Instructional elements of an online course. Retrieved November 10, 2003, from http://www.ncsu.edu/it/edu/talks/components/essential.html

Palmquist, M. (2004). Writing Center at CSU. Retrieved May 2, 2004, from http://writing.colostate.edu/references/research/relval/pop2a.cfm

Reeves, T. C. (2000). Enhancing the worth of instructional technology research through design experiments and other development research strategies. Paper presented April 27, 2000, at session 41.29 of the annual meeting of the American Educational Research Association, New Orleans, LA.


Richey, R. C., & Nelson, W. A. (1996). Experimental research methods. In D. Jonassen (Ed.), Handbook of research for educational communications and technology. New York: Macmillan.

Roblyer, M. D., & Ekhaml, L. (2000). How interactive are YOUR distance courses? A rubric for assessing interaction in distance learning. Retrieved March 10, 2003, from http://www.westga.edu/~distance/roblyer32.html

Ross, S. M., & Morrison, G. R. (1996). Experimental research methods. In D. Jonassen (Ed.), Handbook of research for educational communications and technology. New York: Macmillan.

Rubin, J. (1994). Handbook of usability testing: How to plan, design, and conduct effective tests. New York: Wiley.

Rumble, G. (1986). The planning and management of distance education. London: Croom Helm.

Saba, F., & Shearer, R. L. (1994). Verifying the key theoretical concepts in a dynamic model of distance education. The American Journal of Distance Education, 8(1), 36-59.

Scandura, J. M., & Stevens, G. H. (1987). A lesson design based on instructional prescriptions from the structural learning theory. In C. M. Reigeluth (Ed.), Instructional theories in action: Lessons illustrating selected theories and models (pp. 161-180). New Jersey: Lawrence Erlbaum Associates.


Scott, B. (2003). Online Learning Knowledge Garden. Retrieved March 10, 2003, from http://barrington.rmcs.cranfield.ac.uk/coursedesign/coursestructure/index.shtml

Thurmond, V. A., Wambach, K., Connors, H. R., & Grey, B. B. (2002). Evaluation of student satisfaction: Determining the impact of a web-based environment by controlling for student characteristics. American Journal of Distance Education, 16(3), 169-189.

Trochim, W. M. K. (2002). Measurement validity types. Retrieved April 9, 2004, from http://trochim.human.cornell.edu/kb/measval.htm

Tu, C., & McIsaac, M. (2002). The relationship of social presence and interaction in online classes. American Journal of Distance Education, 16(3), 131-146.

Wagner, J. G. (2001). Assessing online learning. The Newsletter of the National Business Education Association, 11(4).


Appendices


Appendix A: Sample Email for Course Access

To: Administrator
From: Cheryl N. Sandoe
Re: Dissertation research study

I am a doctoral candidate in the College of Education. My dissertation proposes to measure the structural component that exists in every online course. To date, there is no instrument that measures only this component. To perform and complete this study, I would like to examine your online courses for the degree of structure present.

In order for me to measure this component, I will need access to online courses at the instructor level. No changes will be made to any course regarding its organization, content, or anything else. Additionally, no student or faculty information needs to be present. The courses will need to retain their syllabi, but the faculty information can be removed. After the analysis is complete, I will share the results of my study with you.

I sincerely appreciate your help and support as I complete this study.

Sincerely,
Cheryl N. Sandoe
Doctoral Candidate
University of South Florida


Appendix B: Sample Email for Recruitment of Experts

To: Potential Expert
From: Cheryl N. Sandoe
Re: Participating as an expert in a research study

Salutation:

I am currently a doctoral candidate at the University of South Florida. Dr. James White is my major professor. My dissertation is based on Michael G. Moore's theory of transactional distance. I am developing an instrument to measure the structure component of this theory. To ensure inter-rater reliability of my instrument, I am in need of two subject matter experts to review my instrument and provide feedback as to its content. I have attached a copy of my proposal for your review. You will note that I have used many of your articles in my literature review, so I would be very interested in your participation, as you are the experts in this area.

Also, if you are interested in participating: as soon as the instrument has been revised and agreed upon among all experts (there will be a total of three, myself included), you will be asked to evaluate 10 online courses using the instrument. The evaluation process should take no longer than 30-45 minutes per course.

I would sincerely appreciate your expert knowledge and assistance with my dissertation process. Please let me know as soon as possible if you are interested in participating.

Sincerely,
Cheryl N. Sandoe
Doctoral Candidate
University of South Florida


Appendix C: Sample Email to Subject Matter Expert for Sorting Exercise

To: Subject Matter Expert
From: Cheryl N. Sandoe
Re: Sorting Descriptors

Salutation:

Attached are the descriptors and directions for performing the sorting exercise, along with information regarding the evaluation of each descriptor for clarity and quality. When complete, please attach the results to an email and send them back to me.

Thank you for your participation in my study.

Sincerely,
Cheryl N. Sandoe
Doctoral Candidate
University of South Florida


Appendix D: Dimensions of Structure Measurement Tool

Each item should be rated according to the degree to which the elements are present. The scale is:
0 not evident
1 minimally evident
2 moderately evident
3 fully evident

Content Organization

Overall
The course:
1) content/instruction is appropriate for the target audience
2) objectives match the course exams
3) provides a glossary or additional references
4) utilizes media (graphics, animations, diagrams, video, and audio) that are relevant to the course
5) contains a course calendar that includes important course dates

Syllabus
The syllabus contains:
1) faculty contact information
2) course description
3) course objectives
4) information about any pre-requisites or entry-level skills needed
5) information on where students can contact technical support
6) information regarding student support services
7) information regarding the instructor's grading policies
8) information regarding participation requirements
9) information regarding course policies (i.e., late assignments, make-up policies, etc.)


Appendix D (Continued)

Sequencing
Each course unit or module contains:
1) a clear overview of the material to be presented
2) clear objectives of the material to be presented
3) a page that clearly communicates all activities to be completed
4) clear communication of how to submit assignments due
5) a summary of the material that was presented

Course Schedule
1) Assignments by week (or other time unit), including calendar dates.
2) Point value of all assignments.
3) All assignments, including assigned reading.
4) All due dates for assignments.
5) All exam or assessment dates.
6) Suggested assignment beginning dates.

Delivery Organization

Overall
1) A layout screen that is clear, clean, and well organized.
2) On-screen instructions that are simple, clear, and concise.
3) The ability for the student to bookmark areas of the course.
4) The ability for students to access archived discussions (i.e., synchronous chats or desktop conference meetings).
5) On-screen navigation (i.e., breadcrumbs) that tells a learner where they are, where they have been, and where they can go.
6) FAQs or the equivalent to address functional aspects of the course.
7) Clear exit/logoff paths.

Consistency
1) Having a course menu that remains constant as a student moves throughout the course.
2) Each content module or unit is accessed in the same manner as a student moves throughout the course.
3) The module/unit layout is presented consistently (in the same manner) in each unit.


Appendix D (Continued)

Flexibility
The learner:
1) has control over the rate of presentation
2) can review previous frames of information as often as desired
3) can skip on-screen instructions if they have already been viewed
4) can proceed at their own pace
5) has flexible or adaptable learning routes
6) can pause or replay any audio or video segment as often as desired

Course Interactions Organization

Student to Instructor
The instructor provided:
1) a statement as to their timeliness of responses to email and student inquiries
2) a statement as to what type(s) of communications are required (i.e., discussion, email)
3) discussion information, such as a link and time of discussion (if synchronous), or criteria expectations (length of posts) and quantity of participation required (if asynchronous)
4) their availability for phone or F2F conferencing
5) guidelines for all communication

Student to Student
Student to student communication:
1) methods were communicated clearly
2) guidelines were communicated clearly (i.e., netiquette)
3) guidelines regarding all offline meetings were communicated (i.e., posting a transcript of offline meetings for the entire group)

Student to Interface
The course provided detailed directions on how to:
1) submit each assignment or activity
2) use all course tools


Appendix E: Process Overview

[Flowchart. The boxes shown are: Review of literature and experience (initial creation of items for the instrument); Item specifications (map each item to course design principles); Development of category specifications (write items for each category); Review of items and revision (researcher reviews items written and revises for clarity and accuracy); Item review and sort (items are sorted by subject matter experts and analyzed for clarity and quality); Item re-writing (researcher re-writes items); the process continues via connector A on the next page.]


Appendix E (Continued)

[Flowchart, continued from connector A: Pilot testing (informal tryout using 4 courses); Item review and revision (make item revisions based on pilot testing); Field testing (enlist subject matter experts to use the instrument on 10 courses each); Statistical analyses (estimate reliability by correlating two researchers' responses on items and by Cronbach's alpha procedures; estimate validity by calculating correlations within each category; estimate validity by calculating discriminant validity; estimate validity by comparing experts' placement with the researcher's for 75% agreement).]


Appendix F

From: Wittenberg, Trudy (TWITTENBERG@RESEARCH.USF.EDU)
To: Cheryl Sandoe (Sandoe@phcc.edu)
Date: Mon, May 17, 2004, 10:47 AM
Subject: RE: Filing with the IRB? Email per our phone conversation

Hi Cheryl,

From your description of the project, the Chair indicated that it doesn't appear to involve human subjects (the experts are not subjects) as defined by the federal rules on human subject protections, and thus the IRB process is not needed. However, please note the following:

-- Even though the activities are not subject to the federal rules, you should still follow the applicable ethical standards of your profession, including the implementation of an informed consent process if indicated.

-- If procedures change significantly, please contact the IRB office again so that we might work with you to reassess the applicability of the federal rules.

Let me know if you have any questions. Good luck with your project!

Thanks,
Trudy


APPENDIX G

Sorting descriptors into categories and evaluating descriptors:

The proposed instrument contains three main categories: Content Organization, Delivery Organization, and Course Interactions Organization. Each main category consists of sub-categories, and each sub-category is made up of descriptors. The categories and sub-categories, along with their acronyms, are listed below. Below the categories and the acronyms are the lists of descriptors. Next to each descriptor is a place for you to enter the letters representing the category and sub-category of the area where you believe each descriptor best fits. Review each descriptor and enter the appropriate acronym for its placement.

After the instruments have been collected from all three experts, the results will be compiled and a new instrument will be distributed, on which you will rate each item for its clarity and quality as it pertains to the category you assigned, using the following semantic differential scale:
0 no clarity or no quality
1 minimal clarity or minimal quality
2 moderate clarity or moderate quality
3 maximum clarity or maximum quality

NOTE: Keep in mind that once the draft instrument is compiled, the presence of each descriptor in a course will be evaluated using a semantic differential scale with rating criteria of:
0 not evident
1 minimally evident
2 moderately evident
3 fully evident


APPENDIX G (Continued)

Content Organization sub-categories and acronyms:
Overall (COO), Syllabus (COS), Sequencing (COSeq), Course Schedule (COCS)

Delivery Organization sub-categories and acronyms:
Overall (DOO), Consistency (DOC), Flexibility (DOF)

Course Interactions Organization sub-categories and acronyms:
Student to Instructor (CIOSI), Student to Student (CIOSS), Student to Interface (CIOSI)


116 APPENDIX G (Continued) Listing of Descriptors Descriptor Placement Clarity Quality Content/instruction contained in course is appropriate for the target audience. Each course unit/module contains a clear overview of the material to be presented. Course has a menu that remains constant as the student moves within the course. Course unit/modules are presented consistently throughout the course. Course provides FAQs or equivalent. Instructor grading policies are present. Participation requirements are provided. Instructor provides expectations regarding discussion posts or other class interactions (synchronous or asynchronous.) All assignments including assigned reading is available for access. Contains a course calendar that includes important course dates. Contains information regarding course policies (i.e. late assignments, make-up policies, etc.) Course contains due dates for assignments. Course provides detailed directions on how to submit each assignment or activity. Suggested begin dates for each unit/module are provided. Ability to access archived discussions (i.e. synchronous chats or desktop conference meetings) are provided. Course objectives are present.


117 Appendix G (Continued) Descriptor Placement Clarity Quality Guidelines were provided regarding all offline student communication (i.e. posting transcripts of offline meetings for a group.) Students can proceed at their own pace. Course provides on screen navigation (i.e. breadcrumbs) to let the learner know where they are in the course. Technical support contact information is provided. The course contains flexible or adaptable learning routes. Student to student communication methods were clearly communicated. Course contains assignments by week (or other time unit, including calendar dates.) Each course unit/module clearly communicates where to submit assignments due. Media such as graphics, animations, diagrams, video, and audio that are utilized are relevant to the course. Point value of all assignments is available. Course description is present. Objectives match the course exams. Learner has control over the rate of presentation of material. Course unit/module provides a summary of the presented material. Information regarding student support services is available in the course. Instructor is available for phone or F2F conferencing. Faculty contact information is present.


118 Appendix G (Continued) Descriptor Placement Clarity Quality Students can review previous frames of information unlimited times. All exam or assessment dates are provided. Student has the ability to bookmark areas of the course. Course provides directions on how to use all course tools. Student can pause or re-play any audio or video segment as desired. Each course unit/module contains a single page that communicates all activities to be completed. Instructor provides guidelines for all student communication. Each module/unit is accessed in the same manner throughout the course. Previously viewed on screen instructions can be skipped. Student to student communication behaviors are clearly communicated. Course provides a layout screen (homepage) that is clear, clean, and well organized. Course provides on screen instructions that are simple, clear, and concise of how to begin. Glossary or additional references are provided. Course provides clear exit/logoff paths. Faculty provides information as to their timeliness of responses to email and student inquiries.


119 Appendix G (Continued) Information about any pre-requisites or entry-level skills needed is present. Each course unit/module contains clear objectives of the material to be presented.


APPENDIX H

Questions from Ingram's instrument used to evaluate discriminant validity.

1. What kind of organization did this site have?
   Hierarchy / Task-oriented / Schedule / Other / No organization
2. How clear were the goals of this Web site?
   Not clear at all / Somewhat clear / Neutral / Clear / Very clear
3. How clear were the tasks you did today?
   Not clear at all / Somewhat clear / Neutral / Clear / Very clear
4. How easy was it to use this Web site?
   Not easy at all / Somewhat easy / Neutral / Easy / Extremely easy
5. How well organized was the material in this Web site?
   Not organized at all / Somewhat organized / Neutral / Well organized / Extremely well organized
6. Did you feel lost in this Web site?
   Almost never / Sometimes / Often / Almost always
   If you did feel lost, please describe what you were doing at the time (one incident is enough).
7. Did you know where to click to navigate around the site?
   Almost never / Sometimes / Often / Almost always


APPENDIX I

Questions from Huang's instrument used to evaluate discriminant validity.

These items were rated from 1 (strongly disagree) to 7 (strongly agree).

1. I believe online course syllabus is well presented.
2. I believe assignments are reasonable.
3. I believe grading criteria are clear.
4. I am able to access course materials at any time.
5. I can actively participate in the learning process.
6. I believe course materials will meet my needs.


APPENDIX J

Questions from Chen's instrument used to evaluate discriminant validity.

These questions were rated as to flexibility in the class:
1 = Extremely rigid
2 = Very rigid
3 = Rigid
4 = Moderate
5 = Flexible
6 = Very flexible
7 = Extremely flexible

1. Learning activities used in class
2. Pace of the course
3. Attendance
4. Objectives of the course
5. Choice of readings
6. Course requirements
7. Deadline of assignments


APPENDIX K

Questions from Bischoff's instrument used to evaluate discriminant validity.

1. Were you provided with a syllabus/outline at the beginning of this course?
   Yes / No
2. If you received a syllabus/outline, select the description that most closely resembles your syllabus:
   Topics and assignments with dates
   Topics and assignments, no dates
   Tentative topic list
   Suggested topics and assignments, options for student-directed topics
   Topics and assignments selected by students

The remaining questions are answered using a Likert scale: 1 = Strongly agree to 5 = Strongly disagree.

3. I have input into what information/content is covered in this course.
4. I have a say in what assignments and other learning activities I want to do in the course.
5. I have a say in how my grade is determined.
6. I have the freedom to choose the deadlines for my assignments and/or exams.
7. I have a teacher who directs my learning.


124 APPENDIX L Rating descriptors for qualit y and clarity after sort: The following organization is the result of the initial sort process by three experts. Please rate each descriptor as to its clarity and its quality as it pertains to the category. Evaluate each descriptor for its clarity and quality using a Semantic Differential scale with rating criteria of: 0 not evident 1 minimally evident 2 moderately evident 3 fully evident Listing of Descriptors Descriptor Rating Content Organization Clarity Quality Overall Media such as graphics, animations, diagrams, video, and audio that are utilized are relevant to the course. Objectives match the course exams. Glossary or additional references are provided. Each course unit/module contains clear objectives of the material to be presented. Course objectives are present. Course provides FAQs or equivalent. Course provides detailed directions on how to submit each assignment or activity. Content/instruction contained in course is appropriate for the target audience. Syllabus Clarity Quality Instructor grading policies are present. Participation requirements are provided. Contains information regarding course policies (i.e. late assignments, make-up policies, etc.)


125 Appendix L (Continued) Technical support contact information is provided. Point value of all assignments is available. Information regarding student support services is available in the course. Faculty contact information is present. Instructor provides guidelines for all student communication. Information about any pre-requisites or entry-level skills needed is present. Instructor provides expectations regarding discussion posts or other class interactions (synchronous or asynchronous.) Guidelines were pr ovided regarding all offline student communication (i.e. posting transcripts of offline meetings for a group.) Course description is present. Each course unit/module contains a clear overview of the material to be presented. Course Schedule Clarity Quality Course contains due dates for assignments. Course contains assignments by week (or other time unit, including calendar dates.) All exam or assessment dates are provided. Suggested begin dates for each unit/module are provided. Contains a course calendar that includes important course dates. Course has a menu that remains constant as the student moves within the course. Delivery Organization


126 Appendix L (Continued) Overall Clarity Quality Course provides a layout screen (homepage) that is clear, clean, and well organized. Course provides on screen instructions that are simple, clear, and concise of how to begin. Student has the ability to bookmark areas of the course. Course provides clear exit/logoff paths. Consistency Clarity Quality Course has a menu that remains constant as the student moves within the course. Course provides on screen navigation (i.e. breadcrumbs) to let the learner know where they are in the course. Each module/unit is accessed in the same manner throughout the course. Each course unit/module contains a single page that communicates all activities to be completed. Course unit/modules are presented consistently throughout the course. Flexibility Clarity Quality All assignments including assigned reading is available for access. Ability to access archived discussions (i.e. synchronous chats or desktop conference meetings) are provided. Students can proceed at their own pace. The course contains flexible or adaptable learning routes. Students can review previous frames of information unlimited times. Student can pause or re-play any audio or video segment as desired.


127 Appendix L (Continued) Previously viewed on screen instructions can be skipped. Learner has control over the rate of presentation of material. Course Interactions Organization Student to Student Clarity Quality Student to student communication behaviors are clearly communicated. Student to student communication methods were clearly communicated. Student to Instructor Clarity Quality Faculty provides information as to their timeliness of responses to email and student inquiries. Instructor is available for phone or F2F conferencing.


128 APPENDIX M Item Rating Results Descriptor Item Number Clarity Mean Quality Mean Notes Course Organization Overall 1 2.33 2.67 2 3.00 3.00 3 3.00 3.00 4 3.00 3.00 5 2.67 1.67 Discussed syllaubs vs. overall category placement 6 3.00 2.00 7 3.00 1.33 Moved to syllabus category 8 3.00 3.00 Syllabus 1 3.00 3.00 2 3.00 3.00 3 3.00 3.00 4 3.00 3.00 5 3.00 3.00 6 3.00 3.00 7 3.00 3.00 8 3.00 3.00 9 3.00 3.00 10 3.00 3.00 11 3.00 3.00 12 3.00 3.00 13 2.67 2.33 Course Schedule 1 3.00 2.67 2 3.00 3.00 3 3.00 3.00 4 3.00 3.00


129 Appendix M (Continued) 5 3.00 3.00 6 2.33 1.33 Moved to Delivery Consistency Delivery Organization Overall 1 3.00 3.00 2 3.00 3.00 3 3.00 3.00 4 3.00 3.00 Consistency 1 3.00 2.33 2 3.00 3.00 3 3.00 3.00 4 3.00 3.00 5 2.67 3.00 Flexibility 1 2.67 2.33 2 3.00 3.00 3 3.00 2.33 4 2.67 2.67 5 3.00 3.00 6 3.00 3.00 7 3.00 3.00 8 2.67 2.33 Course Interaction Student Student 1 2.00 2.33 2 3.00 3.00 Student Instructor 1 3.00 2.00 2 3.00 2.00


Appendix N

Final Working Instrument

Structure Component Tool

Course Title: ______________________________________________________
Rater: _____________________________________________________________

Rate each item according to the degree to which the element is present in the online course:
0 - not evident
1 - minimally evident
2 - moderately evident
3 - fully evident

Listing of Descriptors (Descriptor / Rating)

Content Organization

Overall
Media such as graphics, animations, diagrams, video, and audio that are utilized are relevant to the course.
Objectives match the course exams.
Glossary or additional references are provided.
Each course unit/module contains clear objectives of the material to be presented.
Course objectives are present.
Course provides FAQs or equivalent.
Content/instruction contained in the course is appropriate for the target audience.

Syllabus
Instructor grading policies are present.
Participation requirements are provided.
Contains information regarding course policies (i.e., late assignments, make-up policies, etc.).
Technical support contact information is provided.
Point value of all assignments is available.
Information regarding student support services is available in the course.
Faculty contact information is present.
Instructor provides guidelines for all student communication.
Course provides detailed directions on how to submit each assignment or activity.
Information about any pre-requisites or entry-level skills needed is present.
Instructor provides expectations regarding discussion posts or other class interactions (synchronous or asynchronous).
Guidelines are provided regarding all offline student communication (i.e., posting transcripts of offline meetings for a group).
Course description is present.
Each course unit/module contains a clear overview of the material to be presented.

Course Schedule
Course contains due dates for assignments.
Course contains assignments by week (or other time unit, including calendar dates).
All exam or assessment dates are provided.
Suggested begin dates for each unit/module are provided.
Contains a course calendar that includes important course dates.

Delivery Organization

Overall
Course provides a layout screen (homepage) that is clear, clean, and well organized.
Course provides simple, clear, and concise on-screen instructions on how to begin.
Student has the ability to bookmark areas of the course.
Course provides clear exit/logoff paths.

Consistency
Course has a menu that remains constant as the student moves within the course.
Course provides on-screen navigation (i.e., breadcrumbs) to let the learner know where they are in the course.
Each module/unit is accessed in the same manner throughout the course.
Each course unit/module contains a single page that communicates all activities to be completed.
Course units/modules are presented consistently throughout the course.

Flexibility
All assignments, including assigned readings, are available for access.
The ability to access archived discussions (i.e., synchronous chats or desktop conference meetings) is provided.
Students can proceed at their own pace.
The course contains flexible or adaptable learning routes.
Students can review previous frames of information an unlimited number of times.
Students can pause or replay any audio or video segment as desired.
Previously viewed on-screen instructions can be skipped.
The learner has control over the rate of presentation of material.

Course Interactions Organization

Student to Student
Student-to-student communication behaviors are clearly communicated.
Student-to-student communication methods are clearly communicated.

Student to Instructor
Faculty provide information about the timeliness of their responses to email and student inquiries.
Instructor is available for phone or F2F conferencing.

Copyright 2004, Cheryl N. Sandoe
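To give a concrete sense of how a completed form might be tallied, the following is a minimal sketch under stated assumptions. The instrument itself does not prescribe a scoring formula; the example ratings, the per-category summing scheme, and the interpretation of the overall proportion are illustrative assumptions only, and the descriptor strings are abbreviated from the listing above.

    # Illustrative only: aggregates hypothetical SCET ratings (0-3 per descriptor)
    # into category subtotals and an overall proportion. Not part of the published tool.
    from collections import defaultdict

    # Hypothetical completed form: (category, descriptor) -> rating on the 0-3 scale.
    ratings = {
        ("Content Organization", "Course objectives are present."): 3,
        ("Content Organization", "Course provides FAQs or equivalent."): 1,
        ("Delivery Organization", "Course provides clear exit/logoff paths."): 2,
        ("Course Interactions Organization",
         "Instructor is available for phone or F2F conferencing."): 0,
    }

    subtotals = defaultdict(list)
    for (category, _descriptor), rating in ratings.items():
        subtotals[category].append(rating)

    total_earned = 0
    total_possible = 0
    for category, scores in subtotals.items():
        earned, possible = sum(scores), 3 * len(scores)
        total_earned += earned
        total_possible += possible
        print(f"{category}: {earned}/{possible}")

    # Under this assumed scheme, a higher proportion simply means more of the
    # structure descriptors were rated as evident in the course.
    print(f"Overall structure score: {total_earned}/{total_possible} "
          f"({total_earned / total_possible:.0%})")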


About the Author

Cheryl Sandoe received a Bachelor's degree in Marketing from the University of South Florida in 1987 and an M.S. in Education from the University of South Florida in 1997. She began her career as a Technology Specialist in the Pinellas County, Florida, public school system and began pursuing her doctorate while teaching. She then became a Research Assistant for Educational Outreach at the University, where she assisted the department in researching current theories and aspects of distance learning. After about two years with Educational Outreach, she became an Instructional Designer for the College of Nursing. Upon completing her doctoral coursework, she took a position as Director of Instructional Technology at a local community college.

