USF Libraries
USF Digital Collections

The effectiveness and user perception of 3-dimensional digital human anatomy in an online undergraduate anatomy laboratory


Material Information

Title:
The effectiveness and user perception of 3-dimensional digital human anatomy in an online undergraduate anatomy laboratory
Physical Description:
Book
Language:
English
Creator:
Hilbelink, Amy JoAnne
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla
Publication Date:

Subjects

Subjects / Keywords:
Distance learning
Stereo-imaging
Dissection
Nursing
Spatial relationships
Mental models
Dissertations, Academic -- Instructional Technology -- Doctoral -- USF   ( lcsh )
Genre:
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Abstract:
ABSTRACT: The primary purpose of this study was to determine the effectiveness of implementing desktop 3-dimensional (3D) stereo images of human anatomy into an undergraduate human anatomy distance laboratory. User perceptions of 2D and 3D images were gathered via questionnaire in order to determine ease of use and level of satisfaction associated with the 3D software in the online learning environment. Mayer's (2001, p. 184) principles of design were used to develop the study materials, which consisted of PowerPoint presentations and AVI files accessed via Blackboard. The research design employed a mixed-methods approach. Each volunteer was administered a demographic survey and was then stratified into groups based upon pre-test scores. A total sample size of 62 pairs was available for combined data analysis. Quantitative research questions regarding the effectiveness of the 2D versus the 3D treatment were analyzed using a doubly-multivariate repeated measures (Doubly-MANOVA) design. Paired test scores achieved by undergraduates on a laboratory practical of identification and spatial relationships of the bones and features of a human skull were used in the analysis. The questionnaire designed to gather user perceptions consisted of quantitative and qualitative questions. Response frequencies were analyzed for the two groups and common themes were noted. Results revealed a statistically significant difference in group means for the main effect of treatment (2D versus 3D) on both the identification and relationship variables, with the 3D group outperforming the 2D group on both dependent variables. Effect sizes were small: 0.215 for the identification variable and 0.359 for the relationship variable. Overall, students liked the convenience of using PowerPoint and AVI files online. The 3D group felt their PowerPoint was more realistic than did the 2D group, and both groups appreciated the detailed labeling of the online images. One third of the volunteers in the 3D group indicated that "eye strain" was what they liked least about working with the 3D images. Results indicate that desktop stereo imaging may be incorporated effectively into online anatomy and physiology courses, but that more work needs to be done to reduce eye strain.
Thesis:
Dissertation (Ph.D.)--University of South Florida, 2007.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Amy JoAnne Hilbelink.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 194 pages.
General Note:
Includes vita.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001921061
oclc - 190861291
usfldc doi - E14-SFE0001876
usfldc handle - e14.1876
System ID:
SFS0026194:00001




Full Text

The Effectiveness and User Perception of 3-Dimensional Digital Human Anatomy in an Online Undergraduate Anatomy Laboratory

by

Amy JoAnne Hilbelink

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Secondary Education, College of Education, University of South Florida

Major Professor: Ann Barron, Ed.D.
Jeffrey Kromrey, Ph.D.
Karl Muffly, Ph.D.
James White, Ph.D.

Date of Approval: March 9, 2007

Keywords: distance learning, stereo-imaging, dissection, nursing, spatial relationships, mental models

Copyright 2007, Amy JoAnne Hilbelink

This work is dedicated to my amazingly supportive husband Don and my beautiful and remarkable children Hannah and Garrett. Each of them sacrificed a great deal to permit me to fulfill my dream. They are the reason for everything.

Acknowledgements There are many people who have contributed to my success in achieving this doctorate degree. My committee stands at the top of that list. Together they offered advice and guidance while at the same time increasing my confidence in my scholarly abilities. Dr. Ann Barron continued throughout to offer her support, guidance, motivation and enthusiasm for my research. Dr. Muffly offered support and his unique insight into the field of my research – human anatomy. I want to thank Dr. White for his continued professionalism and for his depth of interest and understanding of the theory behind my research. Dr. Kromrey was outstanding in his patience, knowledge, direction and support the many times I came to him for advice. I have gained immensely from the expertise of all members of my committee. To the Consulting Office for Research in Education (CORE) Group, I wish to thank Heather Scott and Bethany Bell for their knowledge and assistance while assisting with my data analysis. I want to thank two dear friends who consistently encouraged me to persist, understood my frustrations, and presented with me at national meetings. I know I could not have gotten through the program without the assistance, care and friendship shown to me by Ms. Shauna Schullo and Ms. Melissa Venable – two life-long friends. I also wish to thank family and neighbors that supported me and my family throughout this process and who, each in their own way, made it possible for me to continue to strive for my goal. Specifically, I want to thank my Mom for always showing an interest in what I was trying to achieve. My greatest thanks go to my husband and children who have sacrificed so much during this challenging time. It’s only because of their constant support and strength that I was able to achieve this goal.

i Table of Contents List of Tables__________________________________________________________iv List of Figures_________________________________________________________vii Chapter One___________________________________________________________1 Introduction__________________________________________________________1 Statement of Problem________________________________________________1 Purpose of the Study_________________________________________________2 Theoretical Basis of Study____________________________________________2 Background________________________________________________________4 Three-Dimensional Products__________________________________________5 Research Questions__________________________________________________7 Significance of the Study_____________________________________________9 Limitations_______________________________________________________10 Why the Skull?____________________________________________________11 Definition of Terms_________________________________________________12 Chapter Two__________________________________________________________14 Review and Synthesis of the Related Literature_____________________________14 Current State of Allied Health Courses_________________________________14 Human Anatomy; History____________________________________________18 Human Anatomy; Public Opinion_____________________________________19 Human Anatomy; Lack of Qualified Instructors__________________________20 Human Anatomy; Logistics Problems__________________________________21 Human Anatomy; Expense___________________________________________22 Human Anatomy; Lack of Material____________________________________22 Alternative Methods________________________________________________23 Multimedia Approaches_____________________________________________24 Weaknesses of Studies______________________________________________26 Student Perceptions of Alternative Methods_____________________________26 What is Stereo Imaging?_____________________________________________28 When is Stereo Imaging Used?________________________________________29 Current Research___________________________________________________29 Eye Strain________________________________________________________30 Different Types of 3D_______________________________________________31 3D Learning Environments___________________________________________32 3D Learning Environments and Complex Relationships____________________34 Spatial Relationships and 3D_________________________________________36 Mental Models____________________________________________________37 Summary_________________________________________________________42

ii Chapter Three_________________________________________________________44 Method____________________________________________________________44 Purpose and Research Questions______________________________________44 Design Changes Due to Pilot Data_____________________________________45 Mayer’s Criterion__________________________________________________46 Mayer’s Principles of Design_________________________________________49 Sequence of Procedures_____________________________________________49 Variables_________________________________________________________51 Instruments_______________________________________________________51 Demographic Questionnaire__________________________________________52 Pre-test Baseline___________________________________________________53 List of Structures and Relationships____________________________________54 PowerPoints______________________________________________________54 Identification Examination___________________________________________56 Relationship Examination____________________________________________57 User’s Perspective Questionnaires_____________________________________58 Design___________________________________________________________58 Required Steps____________________________________________________59 The Pre-test_______________________________________________________60 Acquiring 3D Glasses_______________________________________________61 Additional Requirements____________________________________________61 The Practical Examination___________________________________________62 Sample Size_______________________________________________________62 Data Analysis for Questions 1 and 2____________________________________63 Data Analysis for Question 3_________________________________________63 Chapter Four__________________________________________________________65 Results_____________________________________________________________65 Demographics_____________________________________________________65 Attrition__________________________________________________________71 Observations of Students____________________________________________72 Quantitative Results________________________________________________75 Assumptions for Doubly Multivariate Repeated Measures Analysis___________78 Doubly Multivariate Repeat ed Measures Results__________________________79 Effect Size________________________________________________________81 Qualitative Results_________________________________________________84 Chapter Five__________________________________________________________95 Discussion__________________________________________________________95 Problem Statement_________________________________________________95 Purpose__________________________________________________________95 Research Questions_________________________________________________95 Sample___________________________________________________________96 Instrumentation____________________________________________________97 Threats to Internal and External Validity________________________________98

iii Summary of Findings_______________________________________________99 Results for Research Questions 1 and 2_________________________________99 Conclusion______________________________________________________103 Practical Significance______________________________________________106 Implications for Practice and Policy___________________________________107 Recommendations for Further Research________________________________108 List of References_____________________________________________________112 Appendices__________________________________________________________121 Appendix A: Demographic Questionnaire__________________________________122 Appendix B: Study Guide List of Structures and Relationships – Human Skull_____124 Appendix C: Answer Key for Identification and Relationship Questions__________128 Appendix D: User Perspective Questionnaire_______________________________130 Appendix E: Fall 2005 Pilot Study Results_________________________________132 Pilot Data Results_________________________________________________134 Demographic questionnaire_________________________________________134 User Perspective Questionnaire______________________________________137 Descriptive Statistics_______________________________________________144 MANOVA Evaluation_____________________________________________148 Observations_____________________________________________________149 Summary________________________________________________________153 Appendix F: Spring 2006 Pilot Study Results_______________________________154 Delivering the 3D glasses___________________________________________155 Creating the PowerPoints___________________________________________156 Study Guide_____________________________________________________159 Results__________________________________________________________160 Descriptive Statistics_______________________________________________161 MANOVA_______________________________________________________164 Qualitative Themes________________________________________________164 Summary________________________________________________________168 Appendix G: Spring 2006 Pilot Addendum________________________________169 Descriptive Statistics_______________________________________________171 Doubly MANOVA Repeat ed Measures________________________________174 Appendix H: Summer 2006 Pilot_________________________________________177 Demographic Survey______________________________________________177 Results__________________________________________________________182 Doubly MANOVA Repeated Measures – Summer_______________________183 Qualitative Themes________________________________________________184 Appendix I: Informed Consent Form for IRB_______________________________189 About the Author_________________________________________________End Page

iv List of Tables Table 1. Treatments and measures for questions 1 and 2 9 Table 2. Mayer’s criteria for mental models 48 Table 3. Sequence of procedures 50 Table 4. Instruments, tools and groups used in study 52 Table 5. Cronbach coefficient alpha for pre-test baseline 54 Table 6. Cronbach coefficient alpha for identification and relationship questions 58 Table 7. Demographic results for section .05; groups 2D and 3D 67 Table 8. Demographic results for section .001; groups 2D and 3D 69 Table 9. Demographics on volunteers dropped from study 73 Table 10. Descriptive statistics on test scores by group 76 Table 11. Wilks’ lambda, F value and degrees of freedom 80 Table 12. Effect sizes 82 Table 13. Confidence intervals 83 Table 14. Themes from qualitative open-ended questions 86 Table 15. Miscellaneous comments for the question, “What did you like most?” 87 Table 16. Miscellaneous comments for the question, “What did you like least?” 87 Table 17. Level of agreement frequencies from questionnaire for both groups 89 Table 18. Describe how you felt while working with the powerpoint images. 92 Table 19. Which method would you prefer to use to learn human anatomy? 93 Table 20. Compared to what you may have anticipated, this task was… 93

v Table 21. Do you feel the powerpoint added to your ease of learning human anatomy?93 Table 22. Pearson correlation coefficient for pre-test and identification scores 97 Table 23. Pearson correlation coefficient for pre-test and relationship scores 97 Table E1. Demographic questionnaire results 136 Table E2. User perspective questionnaire results 139 Table E3. Open –ended question responses and themes. 141 Table E4. Open –ended question responses and themes. 143 Table E5. Descriptive statistics (pilot) for the three measures by group 145 Table E6. Cronbach’s coefficient alpha 145 Table E7. Pearson correlation coefficients by group. 149 Table E8. New sequence of procedures 152 Table F1. Mayer’s criterion 158 Table F2. Descriptive statistics for overall scores 161 Table F3. Descriptive Statistics for ID scores 162 Table F4. Descriptive statistics for relationship scores 162 Table F5. Themes from qualitative questions 165 Table F6. Additional information from questionnaire 165 Table F7. Additional information from questionnaire 166 Table F8. Preferred method to learn human anatomy 167 Table F9. Task rate 167 Table F10. Did powerPoint add to ease of learning human anatomy? 167 Table G1. Information on volunteers deleted from study. 170 Table G2. Descriptive statistics for identification scores 172

vi Table G3. Descriptive statistics for relationship scores 172 Table H1. Demographic survey results – summer 179 Table H2. Information on volunteers deleted from study. 181 Table H3. Descriptive statistics for identification scores 182 Table H4. Descriptive statistics for relationship scores 183 Table H5. Themes from qualitative questions 185 Table H6. Additional information from questionnaire 185 Table H7. Additional information from questionnaire 186 Table H8. Preferred method to learn human anatomy 187 Table H9. Task rate 188 Table H10. Did powerPoint add to ease of learning human anatomy? 188

vii List of Figures Figure 1. Univariate plot of identification scores for 2D and 3D 77 Figure 2. Univariate plot of relationship scores for 2D and 3D 78 Figure 3. Visual display of differences between means. 81 Figure E1. Boxplots for pretest by group 146 Figure E2. Boxplots for identification practical by group 147 Figure E3. Boxplots for relationship practical by group 148 Figure F1. Plots for identification scores by group 163 Figure G1. Boxplots for 2D and 3D identification scores 173 Figure G2. Boxplots for 2D and 3D relationship scores 174 Figure G3. Graph of group differences spring 176 Figure. H1.Graph of group differences summer 184

The Effectiveness and User Perception of 3-Dimensional Digital Human Anatomy in an Online Undergraduate Anatomy Laboratory

Amy JoAnne Hilbelink

ABSTRACT

The primary purpose of this study was to determine the effectiveness of implementing desktop 3-dimensional (3D) stereo images of human anatomy into an undergraduate human anatomy distance laboratory. User perceptions of 2D and 3D images were gathered via questionnaire in order to determine ease of use and level of satisfaction associated with the 3D software in the online learning environment. Mayer's (2001, p. 184) principles of design were used to develop the study materials, which consisted of PowerPoint presentations and AVI files accessed via Blackboard. The research design employed a mixed-methods approach. Each volunteer was administered a demographic survey and was then stratified into groups based upon pre-test scores. A total sample size of 62 pairs was available for combined data analysis. Quantitative research questions regarding the effectiveness of the 2D versus the 3D treatment were analyzed using a doubly-multivariate repeated measures (Doubly-MANOVA) design. Paired test scores achieved by undergraduates on a laboratory practical of identification and spatial relationships of the bones and features of a human skull were used in the analysis. The questionnaire designed to gather user perceptions consisted of quantitative and qualitative questions. Response frequencies were analyzed for the two groups and common themes were noted. Results revealed a statistically significant difference in group means for the main effect of treatment (2D versus 3D) on both the identification and relationship variables, with the 3D group outperforming the 2D group on both dependent variables. Effect sizes were small: 0.215 for the identification variable and 0.359 for the relationship variable. Overall, students liked the convenience of using PowerPoint and AVI files online. The 3D group felt their PowerPoint was more realistic than did the 2D group, and both groups appreciated the detailed labeling of the online images. One third of the volunteers in the 3D group indicated that "eye strain" was what they liked least about working with the 3D images. Results indicate that desktop stereo imaging may be incorporated effectively into online anatomy and physiology courses, but that more work needs to be done to reduce eye strain.
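The dissertation does not reproduce its analysis code or name the statistical package used. Purely as an illustration of the kind of between-group multivariate comparison described above, the sketch below runs a one-way MANOVA on the two dependent measures (identification and relationship scores) by treatment group in Python with statsmodels; the file name and the column names group, identification, and relationship are hypothetical.

```python
# Illustrative sketch only -- not the analysis code used in the dissertation.
# Assumes a hypothetical CSV with one row per volunteer and columns:
#   group ("2D" or "3D"), identification, relationship (practical exam scores).
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

scores = pd.read_csv("skull_practical_scores.csv")  # hypothetical file name

# One-way MANOVA of the two dependent measures on treatment group; the
# multivariate tests (Wilks' lambda, Pillai's trace, etc.) correspond to the
# between-group comparison summarized in the abstract.
fit = MANOVA.from_formula("identification + relationship ~ group", data=scores)
print(fit.mv_test())

# Group means for each measure, to see which group scored higher.
print(scores.groupby("group")[["identification", "relationship"]].mean())
```

A full doubly-multivariate repeated measures analysis would also model the repeated factor (the paired measures per student) jointly; the snippet above is only the simplest between-group analogue.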

1 Chapter One Introduction Statement of Problem There is currently a large demand for undergraduate students in all health professions to be trained in human anatomy. Students enrolled in schools of Nursing, Physical Therapy, Speech Disorders, Wellness Programs, and Pre-Medical programs often must take at least one course in Human Anatomy as part of their required curriculum. Many programs also require that students take an anatomy laboratory as part of their coursework. To fully understand anatomy, students must understand the 3-dimensional (3D) spatial relationships that exist among the structures. Studying anatomy from a 2dimensional representation, such as from a text or a PowerPoint presentation, may not adequately permit students to learn the many spatial relationships that exist within human anatomy. With the advent of commercial 3D human anatomy visualization programs as well as the technology for developing one’s own stereo-imaging, it is now possible to include human anatomy laboratories as part of a distance learning course. Human Anatomy visualization programs can be delivered online or in a CD ROM format. The digital anatomy within many contemporary programs can be detailed, spatially correct, clinically relevant, relatively inexpensive, safe to use, and fairly simple to incorporate by

2 instructors with little actual human anatomy laboratory training (ADAM Online Anatomy, 2005, Neotek, 2004, Primal Pictures, 2004). Purpose of the Study The primary purpose of this study was to examine the effectiveness of implementing desktop 3D stereo images of human anatomy into an undergraduate human anatomy distance laboratory. In addition, user perceptions of the 3D images were gathered via questionnaire in order to determine ease of use and level of satisfaction associated with the 3D software in the online learning environment. Theoretical Basis of Study Human Anatomy is a 3-dimensional area of study. Many relationships within the body must be seen in association to be understood. This is true when, for example, learning the anatomy of the skull. Much of it can not be fully appreciated until one performs an actual dissection in order to understand the complex relationships that exist within this region. The organization of nerves, bones, and foramen within the skull is extremely complex. Understanding the origin and termination of each of the 12 cranial nerves, for example, is generally a focus of anatomical education in any health-related field. The study of human anatomy is concerned with not only learning individual structures but also learning the spatial relationships that exist between those structures. Students must be able to visualize this 3D organization in their mind to fully understand the workings of and relationships that exist within the human body (Shaffer, 2004). This has been the historical goal of the human dissection laboratory. Mental Model Theory

addresses the issue of how students learn such complex systems. According to Jonassen (1994, p. 1), "mental models are the conceptual and operational representations that humans develop while interacting with complex systems." Bayman and Mayer (1984) describe mental models as referring "to the user's conception of the 'invisible' information processing states and transformations that occur between input and output." Mental model theory has its basis in cognitive psychology. It has been a challenge for instructional designers to find ways of helping students form appropriate mental models within web-based environments. Mayer lists seven criteria he believes should be contained within instructional materials in order to increase the chances that students will build appropriate and good mental models and therefore understand complex systems. According to Mayer's review (1989, p. 59), a "good model is: (a) Complete – it contains all the objects, states, and actions of the system, (b) Concise – it contains just enough detail, (c) Coherent – it makes 'intuitive sense', (d) Concrete – it is presented at an appropriate level of familiarity, (e) Conceptual – it is potentially meaningful, (f) Correct – the objects and relations in it correspond to actual objects and events, (g) Considerate – it uses appropriate vocabulary and organization." With appropriate mental models, a student is able to understand causal relationships that exist within a complex system, even if they are not explicitly taught. The use of 3-dimensional models should permit better mental modeling than 2-dimensional images, primarily because they resemble the real anatomy to a greater extent. 3D models allow the learner to observe relationships among structures and to form appropriate, lasting mental models of those relationships.

4 Background The incorporation of gross anatomy laboratories into undergraduate nursing school and allied health curricula is generally seen as a cost prohibitive endeavor, particularly because these programs are typically not funded to the same degree as medical schools (American Association of Colleges of Nursing, 2003). In the vast majority of allied health programs, common ways to learn anatomy include text books, 2dimensional images, and the dissection of species such as cats or dogs. Although dissecting a cat or dog does expose the student to dissection skills, the spatial relationships that exist within those species may be very different from those found within the human body. Allied health courses are being offered more and more as distance learning courses. This is being done to accommodate students who are working on degrees while continuing to work at full-time jobs (American Association of Colleges of Nursing, 2003). Students in human anatomy laboratories are generally tested on their identification of anatomical structures by identifying which structure is labeled on a laboratory practical examination. Laboratory practical exams consist of labeled structures on a human cadaver specimen. Students may work in groups of 4 to 5 to learn the anatomy, and then are responsible on an individual basis for accurately identifying and spelling the anatomical structure that is indicated. Additional questions can be incorporated to determine if students are able to apply that information to relationships between and among the anatomical structures they have studied. Questions regarding spatial relationships that exist between structures are often considered “second level” or “higher order” questions, as students must be able to

5 integrate what they are viewing into some sort of context, or mental model. To the extent that students can or can not see the relationships that exist between and among structures, one can then determine the value or effectiveness of the imaging method. Three-Dimensional Products Three-dimensional imaging has been available for many years. It has evolved from a rather simple technique, known as stereo-imaging to a high-end technology, virtual reality that is utilized by researchers in many areas such as Engineering, Medicine and Physics. Because of the way our eyes are positioned, humans naturally view the world in 3D. Each eye sees a slightly different perspective of an image; the brain then combines those images into one image with depth. Our eyes can distinguish what is near from that which is further away, and this visualization results in a realistic 3D image. There are a number of commercially available 3D software Human Anatomy programs available for Faculty and students alike, but many also have real limitations. Primal Pictures™ http://www.primalpictures.com/Index.aspx is one such source of human anatomical 3D imaging. The cost of Primal Pictures’ program may be prohibitive for many universities to offer their students. The cost of the total anatomy 9-CD Rom series is approximately $900.00. If a university licenses the online version, the cost to students can be as low as $99.00 per student for online access. This significantly decreases the cost to students but not to the university system. Another source of Human 3D imaging is ADAM™ Online Anatomy, http://www.adam.com/Our_Products/School_and_Instruction/Educators/High_School/ao

6 a.html This program contains many images, but often of a simplistic nature. It is geared more toward the K-12 audience rather than undergraduates entering health related fields. The cost is approximately $250.00 for the ADAM Online Anatomy version. Neotek ™, is another example of a digital 3D anatomy program that can be administered online, http://www.neotek.com however its total cost can be prohibitive for institutions and students alike. In order to use the Neotek Human Anatomy laboratory, students must purchase the lab materials that cost $245.00, as well as a set of $100.00 liquid crystal glasses. Also, images obtained from Neotek or developed with Neotek software can only be viewed on a CRT monitor. With the increase in use of laptop computers by students, CRT monitors are not as common for students to have access to as in past years. Finally, with any commercial product, the end user must deal with either yearly contract renewals or else the knowledge that the product may not be available for long-term use. The technology is, however, available for developing one’s own stereo images for a fraction of the cost of commercially available images. In order to create stereo images, one needs only a digital camera, a “camera lens focal length” chart which helps determine the distance one needs to be from the image for the two images that will be made into a stereo image, and inexpensive stereoscopic software. The software permits the merging of the two images into a stereoscopic image that can be viewed on any computer monitor with a set of inexpensive red/blue glasses that will change the light entering each eye. The result is an inexpensive, stereo image of anything the user likes. For the purposes of this study, labeled stereo-images were produced using software designed by Pokescope Pro (2005) of prosected materials commonly studied in

7 undergraduate anatomy laboratories. Images taken from The Bassett Stereoscopic Atlas (1952) were utilized when appropriate. These structures included the skull bones and features. Students were then given a laboratory practical examination on a prosected specimen. Research Questions This study sought to determine whether desktop 3D stereo-imaging of human anatomy is more effective than 2D images in an online anatomy course. It did this by asking whether or not students using 3D stereo-images performed significantly better than those using 2D images of the skull on two independent measures; identification and spatial relationships. A second goal of this study involved a user perspective questionnaire to measure ease of use of the digital 3D imaging, overall user satisfaction as well as to gather user perspectives on the 2D images and 3D stereo images employed in the study. The three research questions developed for this study were as follows: 1. Does the use of 3D stereo images result in significantly higher scores for undergraduate students in learning the anatomy of the skull, when compared to 2D images of the same structures as measured by scores on a practical examination of identification? The null hypothesis for question number one is: There will be no significant difference in mean student examination scores between the groups of undergraduate students (using 2D or 3D stereo materials) when given the laboratory practical examination for the various structures. 2. Does the use of 3D stereo images result in significantly higher scores for

8 undergraduate students in learning the anatomy of the skull, when compared to 2D images of the same structures as measured by scores on a practical examination of spatial relationships? The null hypothesis for question number two is: There will be no significant difference in mean student examination scores between the groups of undergraduate students (using 2D or 3D stereo materials) when given the laboratory practical examination for the various relationships between structures. 3. Are the 3-dimensional digital stereo-images of human anatomy easy to use and to comprehend, and what are the students’ perceptions of them, as determined by a questionnaire in a sample of undergraduates? Refer to Table 1 for a visual description of the treatments for questions 1 and 2.

Table 1. Treatments and measures for questions 1 and 2

Group (treatment)   Pre-test                Materials               Measure 1                    Measure 2
A                   Simple identification   2D PowerPoint and AVI   Identification examination   Relationships examination
B                   Simple identification   3D PowerPoint and AVI   Identification examination   Relationships examination

Significance of the Study

In addition to the traditional identification questions anatomists typically use, questions were included regarding relationships between labeled structures. An example of one such spatial relationship question would be to ask the student to indicate which foramen of the skull a particular cranial nerve exits. Student performance in understanding the 2D and 3D images was tested with prosected materials. This was done because an actual dissection is considered the "gold standard" for anatomical identification testing. Although an actual dissection cannot be delivered online, it was important to determine whether or not 3D imaging was significantly better than 2D

10 imaging with the skull, so that the best possible online anatomy laboratory experience can be constructed. Because the study of human anatomy is complex (primarily due to the relationships that exist within the human body) and because the study of it is being done online in more disciplines, there is value in determining whether or not a 3D laboratory should be incorporated at a distance. It is also relevant to determine if 3D is more or less effective than a 2D version. If there is no statistical significance determined, the findings will still be important to the field of Human Anatomy. Anatomy laboratories could nonetheless be offered at a distance without regard for whether or not 3D should be incorporated. In addition, the research also has very real and practical significance for the many medical and allied health students who must take an anatomy/physiology class during the course of their education, but who can only take the course at a distance. This study may have direct implications for the future delivery of human anatomy content in medical and allied health schools across the nation. Limitations There were a number of limitations to this study that should be noted. The sample was a diverse mix of undergraduate nursing students and wellness program students, as well as other allied health students. Future studies could involve the analysis of one type of student, either nursing or allied health, for example, in order to make the results more specific and perhaps generalizable to that population. In addition, one specific region of the human anatomy, the human skull, was used for testing purposes for this study. It is likely that utilizing a different region of human

11 anatomy could lead to different results, in that each region has unique spatial relationships associated with it. In addition, since the materials to be learned were completely online, it is difficult to know precisely how much effort the students put into learning the material. Volunteers were surveyed to get their overall perspectives, but they were not observed or interviewed regarding this aspect of the study. Finally, a major limitation had to do with how seriously the undergraduates did or did not take the study, and how their attitudes may have influenced their scores on the various aspects of the laboratory practical examination. During the portion of the study in which the students were to study the materials on their own, approximately 90 emails from the students of the two sections were received. Questions consisted of requests for clarification of the study, such as “did they need to take the test”, did “the test count as part of their grade”, if they missed the test, when “could they take a make-up”, “what if they chose not to participate?”, as well as explanatory comments that they were “too busy to participate”, their computer “froze up”, or that they had family emergencies that kept them from participating. Why the Skull? The skull was chosen as the portion of anatomy for this study for a number of reasons. The human skull has a plethora of bones and features that interdigitate and demonstrates depth, and that can be featured on laboratory practical examinations of identification and relationships. Once it is dissected from the body, the human skull does not need to be kept in toxic chemicals in order for it to retain its shape and structure; therefore a laboratory practical examination can be set up in a classroom rather than the

gross anatomy laboratory if necessary. The basic skull structure, in terms of overall familiarity with shape and major features, is familiar to most people, whether or not they have had a prior human anatomy course. Finally, because of the familiarity of the skull to most people, viewing it for the first time should not be as shocking to the sensibilities of the undergraduate students as perhaps looking at a dissected chest cavity or a forearm of a cadaver.

Definition of Terms

2-Dimensional imaging – These are the images one sees when looking at pictures in a book or online. The images do not have depth and are flat because they take up two dimensions in space.

3-Dimensional imaging – These images take up three dimensions, or directions, in space and consequently have depth to them. It is the way our eyes typically view our surroundings, because each eye looks at a slightly different view of the world and the brain combines those images into one that has depth and space to it.

Cadaver – A dead human body, typically one intended for dissection or medical research (dictionary.com, 2005).

Dissection – A detailed analysis of the human body that involves the taking apart of the cadaver specimen.

Gross Anatomy – The medical study of the human body and its form and function. It is typically taught by region of the body, e.g., head and neck or upper or lower limb, and involves a human cadaver specimen.

Mental Model – These are the "conceptual and operational representations that humans develop while interacting with complex systems" (Jonassen, 1994, p. 1).

Mixed Methods – A type of research design in which both quantitative and qualitative methods are employed in order to answer the research question(s) of interest.

Prosection – A dissection technique in which the material is dissected prior to viewing. Students do not perform the actual dissection; rather, the material is dissected for them in order to reduce dissection error and save time.

Stereo imaging – This type of imaging involves the overlap of two images in space so that it appears one is viewing an image with depth. It is a trick to the eyes: it forces the eyes and the brain to combine both images into something they can understand. The child's toy, the stereo Viewmaster, is a good example of the physics behind this imaging.

Virtual Reality (VR) – "A state produced in a person's mind that can, to varying degrees, occupy the person's awareness in a way similar to that of real environments" (Keppel & MacPherson, 1997, p. 2).
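The stereo-imaging workflow described earlier in this chapter (photographing a specimen twice from slightly offset positions and merging the pair so it can be viewed through red/blue filter glasses) was carried out with commercial stereoscopic software (Pokescope Pro). Purely as a rough illustration of the underlying idea, and not of that product's actual workflow, the sketch below composes a simple red/cyan anaglyph from a hypothetical left/right photo pair using Pillow and NumPy; the file names are assumptions.

```python
# Minimal red/cyan anaglyph sketch -- illustrative only, not the Pokescope Pro
# workflow used in the study. Assumes two aligned photos of the same specimen
# taken a few centimetres apart (hypothetical file names).
import numpy as np
from PIL import Image

left = np.asarray(Image.open("skull_left.jpg").convert("RGB"), dtype=np.uint8)
right = np.asarray(Image.open("skull_right.jpg").convert("RGB"), dtype=np.uint8)

anaglyph = np.zeros_like(left)
anaglyph[..., 0] = left[..., 0]    # red channel from the left-eye photo
anaglyph[..., 1] = right[..., 1]   # green channel from the right-eye photo
anaglyph[..., 2] = right[..., 2]   # blue channel from the right-eye photo

# Viewed through inexpensive red/blue (red/cyan) glasses on an ordinary
# monitor, each eye receives mostly its own photograph and the brain fuses
# the pair into a single image with apparent depth.
Image.fromarray(anaglyph).save("skull_anaglyph.png")
```

The design choice here mirrors the rationale given in the chapter: the only equipment assumed is a digital camera, basic image software, and paper filter glasses, which is what makes desktop stereo imaging inexpensive compared with the commercial 3D anatomy packages discussed above.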

14 Chapter Two Review and Synthesis of the Related Literature This chapter provides information regarding the following topics that relate to this research study: the current status of anatomy/physiology instruction in allied health courses; the history of human gross anatomy and how contemporary issues have helped shape its current status; alternative methods to traditional gross dissection and student perceptions of these alternatives, a definition of 3D imaging, and how research in this area has contributed to the study of human anatomy and spatial relationships; and how the Mental Model theory can add to our knowledge and understanding of spatial relationships. Current State of Allied Health Courses A common pre-requisite for admission to nursing programs as well as other allied health programs such as wellness, nutrition and physical therapy is an undergraduate course and laboratory in Human Anatomy and Physiology. Allied health personnel are described online by the National Library of Medicine (2005) as “Health care workers specially trained and licensed to assist and support the work of health professionals. Often used synonymously with paramedical personnel, the term generally refers to all health care workers who perform tasks which must otherwise be performed by a physician or other health professional.” Categories of allied health professionals include

15 dental assistants, home health aides, physician assistants, medical secretaries, and ophthalmic assistants. Generally the undergraduate anatomy and physiology courses are structured so that students learn basic functions of organisms, identification of organ systems, key physiological concepts as well as basic anatomical terms, structures and functions (Hillsborough Community College, 2005 and University of South Florida, 2005). Thousands of students across the country must take these courses in order to apply or to be accepted into their respective program of study. According to the Bureau of Labor Statistics (2005), “employment of registered nurses is expected to grow faster than average for all occupations through 2012.” It is also reported that employers are experiencing difficulty attracting and retaining RN’s. This is partly due to the fact that those currently in the nursing profession are aging while enrollment in nursing schools is not keeping up with demand. It is also due to an aging population in the U.S. The same statistics are found for those in other health fields, such as physical and occupational therapy, respiratory therapy, and physician assistants (Bureau of Labor Statistics, 2005). Students in each of these fields must pass a course in Anatomy/ Physiology either before or during the course of their studies. Within each allied healthcare discipline, there are a multitude of schools a student can attend. Many are accredited, and many are not (Commission on Accreditation of Allied Health Education Programs, 2005). When searching for “online/distance education” programs within the CAAHEP site for example, six schools are found that meet the criterion. Of these six, one had lost its accreditation since last year.

16 There is a shortage of allied health professionals in the work force, and consequently schools that train these specialists must accommodate an increase in students in some way. According to an article that appeared in the St. Louis Business Journal (RehabCare, 2005) a chief executive with a Missouri-based rehabilitation corporation stated, “We need to take action now and collectively determine how to proliferate the field of allied health.” One way schools are working to increase enrollment is to attract and then accommodate those students who must attend class to gain a degree but who must also work either part or full-time (American Association of Colleges of Nursing, 2004). Consequently, distance delivery of Anatomy and Physiology courses and labs are becoming much more common (American Association of Colleges of Nursing, 2003). In a separate bulletin (American Association of Colleges of Nursing, 2000, p.1 ) it was stated, “Distance education also helps to counter the nation’s mounting nursing shortage by bringing nursing careers to people who wouldn’t otherwise follow that path because they lack access to a campus, or because work, family, or economic considerations preclude a full-time, on-site education.” This bulletin also found that distance education tends to attract students from across the country. In an American Association of Colleges of Nursing (AACN, 1999) white paper on distance technology it was noted, “Distance education technology has provided some nursing schools an advantage in recruiting students and is increasing competition among institutions”( p. 1 ). A number of recommendations regarding distance education in nursing were made in the white paper. They include, but are not limited to, “increasing funding for creation and evaluation of distance education courses …continued definition and clarification of what constitutes a distance education program… continuing education of nursing faculty in the

17 area of distance education and the use of technology in education… and use of technology to promote quality nursing education through collaboration among institutions and sharing of schools’ specific niche expertise” (p. 1). According to the AACN bulletin (2000), there are a number of advantages to incorporating distance education courses in nursing schools in particular. Distance education courses were found to change the relationship that currently exists between faculty and students for the better. Faculty who teach distance learning courses tend to become more of a coach rather than the sage on the stage, for the students. In addition, students who work within virtual environments tend to “participate in the process to a much larger degree” (p. 2), than do those in a typical face-to-fact lecture format. Distance technology can also be more cost-effective for smaller, more specialized classes. It is a challenge to offer Anatomy/Physiology laboratory courses at a distance. In order to understand the internal structure and function of the human body, one must be able to peer inside it and to visualize the interrelations that exist. The study of human anatomy is a 3-dimensional field of study. Within the medical school curriculum, students are able to work with actual dissected human materials. This is due partly to tradition and history and partly due to space and funding issues. Within nursing schools and other allied health fields, students do not typically observe or participate in actual human dissection. Even in a face-to-face laboratory exercise, dissections for nursing and other allied health students tend to consist of cat, rat or sheep dissection. When allied health students are exposed to human dissected material in anatomy and physiology labs,

18 it is often as prosected material (Harrison, Nichols and Whitmer, 2001), rather than materials dissected by the students. Human Anatomy; History The study of Human Anatomy has been of interest to students of medicine for many years. Cynthia Klestinec (2004) investigated the history of anatomy theater by analyzing journal entries of the students present at the time. One of the first dissections recorded was that by Andreas Vesalius, in Bologna, Italy in the year 1540. He dissected a live dog (vivisection) for a group of students to demonstrate that when the recurrent laryngeal nerves of the dog was cut, the dog would cease barking. The dog quickly died after the procedure, and when Vesalius was questioned as to what the students should gather from the experiment, he told them, “I do not want to give my opinion; you yourselves should feel with your own hands and trust them.” (Klestinec, 2004, p.376). The students actively took part in the vivisections in order to understand the workings of the body. Anatomy theaters did not always involve hands-on experiences however. Later in the same century as Vesalius, one of his own students, Hieronymus Fabricius of Aquapendente, received complaints from students that his lessons were “inexact” and did not involve student participation. During the sixteenth Century, demonstrations in anatomy theaters vacillated between the study of the anatomy structure and function as we know today and natural philosophy studies that were discussions and lectures of the philosophical uniqueness of the human form. Students of anatomy complained that Fabricius focused on particular areas of the body and did not address the entire anatomy.

19 The debate as to how best to teach human anatomy continued for most of the sixteenth Century in Padua, and continues today in the 20th Century. Dyer and Thorndike (2000) explored the history of anatomy education over the past 500 years relying on subjective commentary and objective data. Within the title of their paper is the phrase, “Quidne Mortui Vivos Docent? “which means, “what do the dead teach the living?” They stated that while the study of anatomy is on the decline, “dissection is currently enjoying a revival as a vehicle for teaching humanistic values in medical school” (Dyer and Thorndike, 2000, p. 969). They feel that the actual experience of dissection is ripe with social and psychological value, and can not be substituted, although they acknowledge that the way it is being taught is changing. They do not offer a reason for the change, except to state that “at this moment in history a confluence of forces seems to be changing the way medical education approaches the emotional content of gross anatomy.”(Dyer and Thorndike, 2000, p.979). Gregory and Cole (2002) attribute the change in approach to dissection as one with more of a balance between learning a necessary skill and keeping humanistic values. Human Anatomy; Public Opinion A contemporary shift has occurred in the way gross dissection laboratories are viewed by both the public and by health professionals in the wake of recently publicized cases of cadaver tampering. A LexisNexis Academic search of the word “cadaver” for the months of February and March of 2004, reveal the following headlines: UCLA suspends body-donor program after alleged abuses; Medical school’s actions follow accusations that cadavers have been sold illegally to outsiders ( Ornstein & Zarembo, 2004), Tulane

20 stops cadaver delivery after bodies used in mine test (Burdeau, 2004), The logistics of the cadaver supply business, (Newman, 2004), Cutting out the cadaver; Dissecting human bodies in medical school anatomy labs, long a gruesome rite of passage for doctors, is going the way of house calls (Zarembo, 2004a), Demand for cadaver tissue fuels illegal activity, (Jablon, 2004), Surgeons fear effects of scandal on training, (Zarembo, 2004b), and The case for and against cadavers (Zuger, 2004). Human Anatomy; Lack of Qualified Instructors Another contemporary concern that arises when one discusses gross anatomy laboratories and who will teach them is the lack of qualified instructors in the field of Anatomy (McCuskey, Carmichael, and Kirch, 2005; American Association of Anatomists, 2005; Association of American Medical Colleges, 1984). McCuskey et al, discuss in their article the history of why there are few faculty to teach gross anatomy. One reason is an emphasis on sponsored research grants in years past which eroded the numbers of students willing to pursue the teaching of anatomy. McCuskey et al. (2005) also mention that an American Association of Anatomists’ survey determined that the teaching of Anatomy involved a much greater time commitment than other basic science courses. To successfully teach Anatomy required a time “commitment of 160 contact hours per academic year.” Anatomy laboratory contact time was subsequently reduced in medical schools therefore offering fewer hours for graduate students to teach anatomy laboratories. This has resulted in fewer and fewer post graduates learning enough anatomy to appropriately teach it.

21 In his survey results of 28 anatomy programs in the United Kingdom for the year 1999 2000, Heylings (2002, p. 708) stated that “it is worrying that there are more parttime teachers than full-time and that the majority of clinically trained staff are employed on a part-time basis.” In the May 2003 report of the American Association of Colleges of Nursing, it was reported that nursing admissions were lower than necessary in the previous year due to a lack of qualified instructors to teach the required courses. Human Anatomy; Logistics Problems Shaffer (2004) discusses logistical problems that are currently associated with cadaver dissection. A few of the problems that are mentioned are storage, public perception, the fact that a careful dissection is time-consuming while anatomy curriculum across the country is being reduced, and cadavers commonly used display anatomical differences unlike that of the atlas or other images. As Shaffer discusses the pros and cons to dissection, she does state that “insofar as dissection has been perceived as an initiation rite that sets doctors apart from other caregivers, its use may be undesirable in a health care environment that emphasizes interdisciplinary teamwork.” She discusses the use of virtual environments including haptics, and comes to the conclusion that “Virtual dissection is much more complex, requiring three dimensions and ideally including tactile information. In certain specialties such as radiology and surgery, virtual methods are unlikely to replace dissection in the near future. However, developments in computer capabilities and data processing offer the potential for more realistic and educationally valuable experiences than ever before.”

22 Human Anatomy; Expense Outfitting a contemporary gross anatomy lab in any medical or nursing school can be a cost-prohibitive endeavor. The University of Arkansas (2005) for example has approximately 6800 square feet dedicated to gross anatomy. This square footage requirement is obvious when one realizes that 150 medical students will need to have access to approximately 40 cadavers per year. In addition, the cadavers must be kept under lock and key and appropriately stored when not in use. Additionally, space within a contemporary lab usually consists of computer monitors, projection screens and worktables for students. Within contemporary nursing schools, gross anatomy laboratories are not often found. In fact, newer nursing schools tend to put more financing into the technological aspects of their programs, such as computer laboratories and online courses with links to websites such as A.D.A.M. or Primal Pictures, along with anatomical tutorials. Actual dissections tend to consist of rat or cat dissection and sheep brains if they are available at all. Human Anatomy; Lack of Material As stated by Cosman, Hutchins and Cregan (2001) in their letter to the Editor of the ANZ Journal of Surgery, “decreased access to dissection is inevitable”. Because cadaver material is in short supply and more difficult to obtain, they go on, Instructors must become creative in how they teach the material that is required in Anatomy and Physiology courses. They must look to surgical simulators or virtual reality in order to change the way they teach human anatomy. A survey of 103 Physical Therapy programs was conducted in 1993 by Mattingly and Barnes (cited in Bukowski, 2003). It was

23 determined from the survey that at that time cadaver procurement costs increased 64% over the previous three years. Cadavers were routinely used for more than one year and for multiple courses to contain costs. The same survey (Bukowski, 2003, p. 153) also determined that at that time “anatomic models were being used by 73.8% of the programs, visual aids by 62.1%, and computer-assisted instruction by 18.4%.” Alternative Methods Because of the many contemporary issues mentioned, including cost, difficulty in procurement of materials, space allocations, and negative public perception, the traditional study of human anatomy is undergoing a dynamic shift. There are various alternative methods used in the study of human anatomy, not all of them include technology. Robinson, Metten, Guiton, and Berek (2004) advocate the use of fresh rat tissue dissection to teach anatomy during clinical years of medical school. This is important in order to see the real colors of tissues. The method however does not transfer directly to human material, and would be inconvenient for allied health students. Waters et.al., (2004) compared higher order question results for cat dissection versus sculpting human anatomy in clay. Students involved in sculpting clay images of human anatomy were found to perform better on higher order questions than those in the cat dissection group. The researchers surmised the reason was that the context was similar, for example, the clay structures were of human structures and the test was also on human material, rather than a test on cat. Better transfer of learning occurred. While this approach may hold promise for the study of human anatomy, it can not be conducted in an online environment.


Gunderman and Wilson (2005) encourage the use of radiologic imaging along with human cadaver dissection. They state that this technique gives a more realistic image of the hidden internal anatomy and "represents the context" in which most physicians view anatomy today. This was not an empirical study but rather the authors' viewpoint. While this technique is simple enough to employ in a face-to-face human anatomy laboratory, it is impractical for an online course.

Multimedia Approaches
Alternatives to the traditional laboratory study of human anatomy are being actively incorporated that involve multimedia approaches, simulations, tutorials, stereoscopic methods, and various other computerized instruction methods (Boudinot and Martin, 2001; Bukowski, 2002; Franklin, Peat, and Lewis, 2002; Gunderman and Wilson, 2005; Guy and Frisby, 1992; Jones, Olafson, and Sutin, 1978; Khalil, Lamar, and Johnson, 2005; McNulty, Halama, and Espiritu, 2004; Trelease, 1998; Ziv, Wolpe, Small, and Glick, 2003). Early efforts to replace the traditional dissection resulted in the use of videodiscs. In their 1992 study, Guy and Frisby determined that students who used the videodiscs in the computer lab showed no significant difference in performance scores from those students in the traditional cadaver laboratory. Their study was criticized by Perrin Parkhurst (1992), in subsequent letters to the editor, for being a simple "media comparison" and not a theory-based study. However, their research can be viewed as a necessary step in addressing the pressing need to find a cost-effective alternative to traditional dissection labs.


Jones, Olafson, and Sutin (1978) compared traditional dissection to prosection tutorials with a multimedia program at Emory University and found that students in the multimedia program with "prosection tutorials did as well as those in the traditional lecture-dissection program" when compared via written and practical examinations as well as the National Board of Medical Examiners (NBME) examination. In their Online Anatomy Lab, or OAL, Boudinot and Martin (2001) incorporated the ADAM™ Interactive Anatomy Software program into WebCT instructional software for the first-year human anatomy lab students. Students were permitted to learn the material at their own pace and convenience. Overall, student evaluations were positive, and student participation in the OAL related positively to performance in the anatomy lab. In a physical therapy (PT) program, Bukowski (2002) incorporated computerized instruction over a period of three years. The first-year students (n=18) were exposed to the traditional cadaver anatomy laboratory; the second-year PT students (n=17) were given the computerized course, with no cadaver lab, to complete as self-study; and the third-year students (n=20) were given the same computerized course, with no cadaver lab, but also received weekly lectures. A MANOVA was run on the data collected for "class means, class study times, performance throughout the remainder of the PT curricula and performance on the state board licensure examination." It was determined that there was no significant difference between the three groups on the variables tested, leading the author (Bukowski, 2002, p. 156) to state, "This study suggests that computerized self-study techniques may be a viable alternative to traditional cadaver laboratory and instruction of human gross anatomy courses." It must be noted, however, that the group sizes were 18, 17, and 20, respectively, for the three groups of


students, far fewer than the sample sizes suggested if one is to detect significance when working with a small to medium effect size. Khalil et al. (2005) investigated the use of dynamic labeling within online anatomical images and found that students viewed the approach favorably because they could move at their own pace and quiz themselves on content. Effectiveness as measured by test scores was not investigated. In their two-year study of computer-aided instruction (CAI) in a medical gross anatomy curriculum, McNulty et al. (2004) found that as students increased their use of CAI, their exam grades increased by a statistically significant amount.

Weaknesses of Studies
Many of the above-mentioned studies failed to fully describe the computerized instruction or how it was presented to students. Very little information is included as to the extent of the anatomical images, what students were to do with the images, how the images were presented, and so on. The methods of these studies cannot be repeated unless one can adequately determine what steps were involved. In many of the studies, the researchers presented their findings in a way that left the reader feeling the researcher believed one computerized methodology is as good as the next, with very little thought as to what makes it unique and/or effective.

Student Perceptions of Alternative Methods
Students in medical and allied health courses generally tend to prefer an actual dissection over alternative methods such as prosections, computer simulations, or clay sculpting (Franklin, Peat, and Lewis, 2002; Khalil, Lamar, and Johnson, 2005;


Snelling, Sahai, and Ellis, 2003; Waters, Van Meter, Perotti, Drogo, and Cyr, 2004). When students in an undergraduate biology lab (n=800) were asked to discuss the usefulness of an actual cat dissection versus a "virtual dissection," Franklin et al. (2002) found that for the majority of the students (72%) the dissection was more useful to their understanding of structure and function than was the virtual dissection, based upon statements classified on a four-point Likert scale from strongly agree to strongly disagree. The virtual dissection in this case, however, was not a 3-dimensional display, but rather consisted of realistically colored 2-dimensional images. One student in this study (Franklin et al., 2002, p. 128) also stated that "using both is excellent – the cadavers are better for forming an understanding of structure and computers are useful for understanding process." Khalil et al. (2005) measured student perceptions (n=68) of a newly integrated interactive imagery strategy in an anatomy course in a veterinary program. They found that students preferred to have control over the viewing of images and that the "presence of multiple views of key structures presented in different planes or angles help students develop a more complete and accurate 3D visualization of a structure" (Khalil et al., 2005, p. 74). The "interactive imagery strategy" used was one in which students had the option of having labels appear or not on any particular image. Students enjoyed the experience overall, but there was no attempt to measure the effectiveness of the interactive strategy. Waters et al. (2004) found no significant difference between group attitudes (n=120) prior to dissection and clay sculpting, but found that students who participated in the actual dissection made more positive comments regarding the use of real material than anticipated. Interestingly, those in the clay modeling group saw actual dissection as less important. The question of which


method produced better learning results was not addressed. Snelling et al. (2003) found in a series of three surveys (n=474, 364, and 371, respectively) that 91% of medical and dental students felt actual dissection to be important to their understanding of anatomy, and after 12 weeks that percentage increased to 95%. It was also demonstrated that medical and dental students preferred textbooks and tutorials overall to dissection or prosections. The type of tutorials used was not elaborated on.

What is Stereo Imaging?
In its simplest terms, stereo imaging involves the convergence of two separate but similar images into one image, much as the human visual system does naturally. The two images are of the same object but are taken from slightly different viewpoints. Stereo images can be created of people, situations, landscapes, individual cells, and anatomical structures. The convergence of the two images in a stereo image causes the viewer to perceive a new image that conveys depth, or a third dimension, hence the term 3-dimensional or 3D. Stereo images have been in existence for over 100 years. Early efforts involved creating stereo images with daguerreotypes (All About Stereo Photography, 2005); however, the cost was prohibitive. During the Victorian period, photographic methods changed to the less expensive "albumen print," and it was during that same time that stereocards (two images printed onto one card) of vacation spots were mass produced and viewed through a special viewer that held the card and combined the images into one with depth.
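The same principle underlies the inexpensive red/blue (anaglyph) presentation discussed later in this document: the left-eye view is carried in one color channel and the right-eye view in the others, so colored glasses route a different view to each eye. The sketch below is only an illustration of that general idea, not of the Pokescope or Neotek software actually used in this study, and the file names are hypothetical.

from PIL import Image

def make_anaglyph(left_path: str, right_path: str, out_path: str) -> None:
    """Merge a left/right stereo pair into a single red/cyan anaglyph."""
    left = Image.open(left_path).convert("RGB")
    right = Image.open(right_path).convert("RGB").resize(left.size)

    r, _, _ = left.split()    # red channel carries the left-eye view
    _, g, b = right.split()   # green and blue channels carry the right-eye view
    Image.merge("RGB", (r, g, b)).save(out_path)

# Hypothetical file names for a stereo pair of skull photographs
make_anaglyph("skull_left.jpg", "skull_right.jpg", "skull_anaglyph.jpg")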


When is Stereo Imaging Used?
Stereo imaging techniques have been tested in a variety of fields for research purposes. Hsu, Pizlo, Babbs, Chelberg, and Delp (1994) found that stereo imaging can assist the user in determining "subtle features" of simulated x-rays. The researchers controlled for flicker, ghosting of images, and "subjects' stereo acuity." In another example, Odenwald et al. (1986) used the imaging technique to successfully visualize structural components of a virus that had not been identified with standard 2-dimensional electron microscopic techniques. Rhodes (1997) describes how he has used stereo imaging to interpret electron-density maps from x-ray crystallography; he states that without stereo imaging, the interpretation would be nearly impossible.

Current Research
Prentice, Metcalf, Quinn, Sharp, Jensen, and Holyoke (1977) evaluated stereoscopic anatomical images as a substitute for gross anatomy dissection in a medical school gross anatomy laboratory and determined that "while having minor limitations in terms of anatomical orientation, (stereo imaging) does provide a viable alternative to dissection." A 3D stereoscopic interactive program was designed in 1997 by Trelease (1998) at UCLA for the School of Medicine gross anatomy course. The 3D images were created much as one would today, for example, by taking stereo pair photographs of dissected materials and then interlacing them into a stereoscopic image using a 3D image processing program. The stereoscopic 3D images were used for a "virtual" laboratory practical examination. They were not used instead of, or alongside, actual dissected


cadavers. Students were not required to use the images to learn the anatomy; rather, the images were used for testing purposes in a computer lab with CRT monitors and liquid crystal shutter glasses. Images were created of the thorax, abdomen, pelvic region, and upper and lower extremities. Trelease found that monocular dominance, which has been found to affect from 2 to 4 percent of the population, influenced how readily students could view the images with the shutter glass system. Overall, the medical students were enthusiastic about the method and requested that more images be presented in stereoscopic view. However, some students complained about the flicker effect one can get from the shutter glasses, and a few could not see the 3D images at all, which Trelease attributed to strong monocular vision dominance in those students.

Eye Strain
Eye strain is a common factor when one views a stereo image on a computer monitor. It occurs because the user's eyes are fusing two images into a common image and then interpreting that image on a flat display (McVeigh, Siegel, and Jordan, 1996). This group of authors devised an algorithm for creating stereo images that forces all points of convergence beyond the image, which they believe results in less strain on the eyes. It is possible, however, to align a stereo image too much, resulting in a lack of depth in the image along with color disparity (McVeigh et al., 1996). When creating stereo images, one must converge the two images at either a center point or an outside point, depending upon which area is to show depth; it is impossible to focus on both the center and the outside edge of a stereo image at once. The result can lead to eye strain for the user, because the user may be trying to


focus on a portion of the image that is simply not in focus for their eye structure (Ware, 1995).

Different Types of 3D
The Bassett Stereoscopic Atlas (1952) is a well-known collection of gross anatomical images prepared in 3D stereoscopic view. These images have been used in medical school laboratories for many years, primarily as a study guide or for laboratory practical examinations (Trelease, 1998); they have not been used as a replacement for actual dissection. The images can still be obtained for a small royalty of approximately $400.00 from Stanford University. The technology exists today, however, that permits faculty to create their own stereoscopic images inexpensively and to present those images online without the need for a CRT monitor, using only an inexpensive pair of red/blue stereo glasses. Images can be labeled, and narration can also be incorporated. Pokescope Pro is a 3D imaging software product, available for approximately $40.00, that permits the user to create 3D images from ordinary 2D photographs. All that is required is either a set of digital cameras, or one camera and a focal length chart that specifies the distance the camera must be moved between shots in order to obtain two appropriately spaced images that can then be made into a 3D stereoscopic image using the Pokescope Pro software. The 3D images can then be labeled using Neotek software and incorporated into a standard PowerPoint. The PowerPoint can be narrated, recorded, and made into a movie file using Camtasia or similar software. These 3D stereoscopic images can then be used online for any course, anywhere, and at any time. There are commercial 3D software packages that are designed to supplement or


to replace actual human cadaver dissection. They include, but are not limited to, ADAM™ Interactive Anatomy Software, Primal Pictures™, the Neotek™ Stereo Imaging System, and 3D Explorer. The ADAM system is user friendly and has a wealth of images for high school as well as undergraduate health students. The Neotek Stereo Imaging System consists of 3D images created from the Bassett Collection (1952), but it is expensive to use and operate, since one must invest in multiple pairs of liquid crystal shutter glasses at a cost of approximately $200.00 each. In addition, a CRT monitor is required in order to use the shutter glasses. Primal Pictures provides students an online or CD version of 3D images, but it is an expensive investment. The CD collection that encompasses the anatomy of the entire human body can cost as much as $1000.00 per set, and licensing the online version for an institution can cost as much as $10,000 for only 30 seats. Once an institution licenses the online version, students may access it for free; this is a cost savings to the student but not to the institution. In addition, instructors must concern themselves with whether or not the commercial product they adopt will be available in coming years.

3D Learning Environments
Three-dimensional learning environments, or virtual learning environments as they are often called, have developed over the years and include three categories: "text-based, desktop and sensory-immersive VR" (Dalgarno, Hedberg, and Harper, 2002; Moore, 1995). Text-based virtual reality involves text-chat in real time, while desktop VR involves the use of 3D images on a desktop and is not immersive. Immersive VR permits the learner to interact with and in a 3D environment by using headgear and often


"datagloves" and "datagear" for tactile information gathering. In his case study work with immersive VR, Moore (1995) stated that VR had "limited application to education at present." However, he held that "A final way of creating learning experience and transference is to allow users to construct and experience their own abstract worlds, giving them first hand experience in the transfer of two dimensional knowledge into three dimensional knowledge" (p. 96). Previous research in the area of VR that simply compared 3D environments to 2D environments found little if any real difference between the two methods (Hedberg and Alexander, 1994; Cockburn, 2004; Dalgarno and Harper, 2004). This was partially because immersive VR (which requires sophisticated headgear) was difficult and expensive to use. There was no quantitative proof that the medium was any better than the real thing, and it was not seen as a ready replacement for the laboratory experience or other practical applications (Chan, Chung, Yim, Lau, Ng, and Li, 1997; Dalgarno, Hedberg, and Harper, 2002; Gatto, 1993). Chan et al. (1997) did discover, however, that two-thirds of the surgeons they tested with a 3D camera system in laparoscopic surgery commented that they found better depth perception with the 3D system than with the 2D system. Newer 3-dimensional learning environments, particularly those that fall within Moore's (1995) desktop VR description, have developed much more fidelity, user control, and interactivity than the older desktop versions (Dalgarno & Harper, 2004), owing to advances in the graphics capabilities of desktop computers. Hedberg and Alexander (1994) described the features they felt distinguished 3D learning environments, or 3DLEs, from other learning environments,


and had the potential to make them superior learning environments. Those features include "increased immersion, increased fidelity, and a higher level of active learner participation."

3D Learning Environments and Complex Relationships
Dalgarno and Harper (2002) explored how desktop 3D environments "can facilitate learning of complex conceptual relationships". Based upon earlier research (Csikszentmihalyi, 1990; Hedberg and Alexander, 1994; Alberti, Marini, and Trapani, 1998; Akiyoshi, Miwa, and Nishida, 1996, as cited in Winn and Jackson, 1999; Sweller, 1998; Ruzic, 1999; and Robertson, Card, and MacKinlay, 1993), Dalgarno, Hedberg, and Harper (2002, p. 152) describe what they believe to be the eight contributions to learning made by 3D learning environments. They include:
1. facilitate familiarisation of inaccessible environments
2. facilitate task mastery through practice of dangerous or expensive tasks
3. improve transfer by situating learning in a realistic context
4. improve motivation through immersion
5. reduce cognitive load through integration of multiple information representations
6. facilitate exploration of complex knowledge bases
7. facilitate understanding of complex environments and systems
8. facilitate understanding of complex ideas through metaphorical representations.
The authors also concluded that 3D learning environments tended to be just as effective as, but no better than, a real environment when developing spatial knowledge.


In a study of 34 undergraduates in a virtual chemistry laboratory (Dalgarno and Harper, 2004), students were exposed to either a real laboratory or a virtual desktop version of the laboratory and were then given a follow-up test of spatial ability. The two factors the authors determined contributed to learning were learner control over view position and direction, and object manipulation. This held true, however, only when students were assigned "authentic tasks" to complete within the 3D environment. An authentic task in this case meant telling the students that they must learn the layout of the laboratory and find specific items of apparatus within it; they were given a list of items to look for. The authors also describe the two factors of view control and manipulation as the two things that distinguish a 3D environment from an animation or video. They noted that if an authentic task is not assigned to the students within the 3D environment, and "instead learners are simply presented with an environment to explore it is likely that there will be no learning advantage over alternatives such as video or animation." Research in the field of VR, regardless of whether a desktop or immersive version of VR is utilized, now emphasizes how VR can best be utilized in a learning environment. According to Waller, Hunt, and Knapp (1998), "…researchers no longer need to question whether VEs can be effective in training spatial knowledge. Today's more pressing questions involve examining the variables that mediate the training effects of VE's." Similarly, Dalgarno, Hedberg, and Harper (2002) suggest that future research in the area of 3D learning environments investigate the characteristics of learning tasks within the 3D environments that help to create better spatial knowledge for the learner, as


well as what kind of support is necessary to help in the development of spatial knowledge.

Spatial Relationships and 3D
Marks (2000) investigated the implications of 3D information for anatomy and dissection. The study was not empirical, but instead offered four general questions researchers of 3D information should consider: (a) what is the best way to teach and learn with 3D data, (b) which method is best utilized with which type of image content, (c) do values beyond the dissection proper contribute to the professionalism of the student, and (d) what anatomy should be taught, when, and by whom. It is clear that there are still many questions to be asked concerning 3D imaging and the study of human anatomy, particularly when one does not have the luxury of a hands-on dissection experience. As stated by Heylings (2002, p. 708): "A clear understanding of gross anatomy involves the development of three-dimensional understanding of structure. Current medical students can easily study an anatomical text and then answer standard anatomical questions. However, this knowledge base is often found to be deficient because it does not always enable students to develop an understanding of the interrelationships of each structure to others. It takes time and practice to develop the ability to visualize in three dimensions and this is best gained through hands-on learning experiences. Insufficient ability to visualize is frequently expressed by students who have difficulty identifying structures in the living body as required in clinical examination."


Mental Models
Mental model theory offers one account of how 3D imaging techniques can best be utilized in the educational arena to increase spatial awareness among learners in a complex system such as human anatomy. Early authors used various terms to describe a mental model; the concept has been defined as "mental models, conceptual models, cognitive models, mental models of discourse, component models and causal models" (Staggers and Norcio, 1993, p. 587). All agree, however, that a "mental model" cannot be completely described, as it is a personal phenomenon: each individual creates the mental model that works best for them. The question researchers continue to struggle with is how best to assist the user in creating an effective mental model (Staggers and Norcio, 1993; Winn and Snyder, 1996). Mental models have typically been utilized to assess and explain constructivist learning environments. They offer one way to explain how novices are able to perform problem-solving or critical-thinking skills by constructing appropriate knowledge about the task, and they also involve a transfer of knowledge from one realm to another. The concept of "mental models" developed as a way to describe how users of computers, text editors, machines, and various devices conceptually understood the location, function, and structures within those systems. The mental models users were believed to construct for the various devices were then incorporated into the development of the appropriate human-computer interface (Card, Moran, & Newell, 1983; Carley and Palmquist, 1992; Farooq and Dominick, 1987; Mayer, 1989; Moray, 1987).


38 Farooq and Dominick (1988, p. 479) investigated what made an effective user interface for software and determined that there were many reasons why interfaces were not effective. Reasons cited were (a) human engineering was a ‘nebulous concept’, (b) software designers are not always aware of the poor engineering of their product, (c) knowledge and background of system designers and that of users of the system are often radically different, (d) high-level interfaces require a deep understanding of general psychology, psychology of languages, and linguistics not always obvious to designers, (e) current tools do not adequately support the design, implementation and evaluation of user interfaces. Farooq and Dominick (1988) questioned the terminology that was used to evaluate software. They defined cognitive models, conceptual models and mental models as each measuring different things. Cognitive models looked at the goals and methods of the user, rather than at how the user actually understood the tasks. Conceptual models were “typically formulated by a designer of a system, to provide the user with an appropriate representation of that system…” (p. 487). How the user actually understands the tasks is the role of a mental model. These same authors state that “Mental models evolve inductively as the user interacts with the system, often resulting in analogical, incomplete, or even fragmentary representations of how the system works” (p. 489). They encouraged the use of questionnaires and interviews in order to assess the users’ perceived problems. At the time of this study (1988) the use of questionnaires and interviews was not an accepted practice, yet Farooq and Dominick recognized it as an effective qualitative method for determining what the user was thinking as they manipulated the software.


39 In his research into mental models, Moray (1987) found that tasks needed to be broken down into their simplest form in order for the user to form appropriate mental models. He referred to these small blocks as homomorphs of a complex system. According to Moray (1987), once the complex system is broken down into its effective homomorphs, a designer can incorporate those homomorphs into a more effective user interface. In addition, Moray (1987, p. 629) felt that sufficient time in the form of “prolonged, continuous, interactive tasks” was necessary in order for a user to form an appropriate mental model of any complex system. This may also hold true for undergraduate nursing students trying to decipher 3D images of human anatomy for the first time. When incorporating conceptual models, like mental models, Mayer (1989) felt it important to provide “concrete, conceptual models for learners”. He felt that this would improve overall retention, reduce “verbatim recall” as well as improve higher order learning such as problem-solving skills. Mayer believed “The ability to generate novel solutions to new problems is the hallmark of systematic thinking; if students have built models that they can mentally manipulate, they will be better able to solve transfer problems” (p. 59) Mayer reviewed 21 of his prior papers in the area of mental models. These papers involved the development of mental models for such topics as Density, Radar, the BASIC computer language, Brakes, and Cameras. This review led him to describe seven criteria he believed should be contained within instructional materials in order to increase the chances students will build appropriate mental models. The seven criteria are;


1. Complete; they must contain all the essential elements of the task, as well as relationships within the task.
2. Concise; the appropriate number of steps for the given audience is presented.
3. Coherent; the system must make sense to the learner.
4. Concrete; models must be familiar to the learner, and can be presented as either physical or visual models.
5. Conceptual; meaningful information on how a system works is best.
6. Correct; there should be a good correspondence between the model and the actual system.
7. Considerate; the model must be presented in a manner that is appropriate to the audience.
Mayer (1989) also encouraged the presentation of a conceptual model prior to the task in order to encourage the formation of an appropriate mental model. Carley and Palmquist (1992) analyzed mental models from the perspective of linguistics. They devised a computer-based tool that represented mental models as maps extracted from text; the text was then analyzed and compared in various social scenarios. Based on this text-based mapping, the authors developed a set of assumptions that they felt encompassed the concept of mental models (p. 602): (a) mental models are internal representations, (b) language is the key to understanding mental models, i.e., they are linguistically mediated, (c) mental models can be represented as networks of concepts, (d) the meanings for the concepts are embedded in their relationships to other concepts, and (e) the social meaning of concepts is derived from the intersection of different individuals' mental models. Although their research was based on mental model


41 formation with text, and not images, the majority of the assumptions developed by the authors can be applied to the visualization of 3D images. Jonassen (1996) felt that the five concepts as outlined by Carley and Palmquist (1992) did not do enough to adequately describe mental models. He stated (p. 4) “Mental models are thought to consist of an awareness of the structural components of the system and their descriptions and functions, knowledge of the structural interrelatedness of those components, a causal model describing and predicting the performance of the system and a runnable model of how the system functions.” It is difficult for designers of multimedia programs to successfully implement programs that appeal to one kind of mental model because users tend to create personal and varied mental models based upon past experiences and prior knowledge of the domain involved (Moray, 1987). An initial questionnaire and pre-test on anatomical structures, therefore, may help in determining the user’s prior knowledge and individual computer abilities. This may ultimately assist with the designer’s conceptual model of what is important for inclusion in the 3Dimensional images. According to Hueyching & Reeves (1992), multimedia systems can be effective for building mental models, particularly if they are interactive. Others (Byrne, Furness and Winn 1995) found that “the most successful treatment for building mental models was a highly interactive one.” According to Winn and Snyder (1996, p. 123) “the greatest interest in mental models by educational technologists lies in ways of getting learners to create good ones.” They explain that learners incorporate events and instructional materials along with what they already understand in order to develop appropriate mental models to further their understanding about complex topics.


Although the terminology used to define the concept of mental models has changed and evolved over the years, most researchers agree that learners create and use mental models in an individual and internal way. They also agree that specific knowledge domains elicit particular mental models for learners and that these are usually formed based on prior experience and/or instruction (Staggers and Norcio, 1993). It is, therefore, difficult to measure the effectiveness of mental models. Understanding that their development is necessary for learners to understand complex systems such as human anatomy, however, makes it important to help learners create effective ones.

Summary
In summary, the supply of graduating students in nursing and other allied health fields does not currently meet the demand in the U.S. Because of this shortage, there is a need and a desire to offer more required courses, such as human anatomy and physiology, at a distance for allied health students in order to accommodate a greater number of students without adding stress to the infrastructure of a university. Offering human anatomy online is a challenge due to the 3D nature of human anatomy and the relationships that exist within and between structures. It is also a challenge because dissection is traditionally taught in a face-to-face, hands-on environment. Allied health students, however, frequently are offered dissections of cat or dog rather than human material, due to high cost and difficult logistics. The field of human anatomy is currently undergoing a shift in how it is taught, from the traditional methods that have characterized it over the centuries, to the


43 incorporation of current technology. There are many reasons for the changes taking place in the traditional study of human anatomy; these reasons include cost, public opinion, logistics, and a shortage of faculty. Alternative methods are now being sought to the traditional method of dissection. One alternative method to be considered is 3D stereo imaging, which offers the capability of teaching human anatomy courses at a distance. There are commercial 3D stereo imaging packages available, but there are also ways of creating simple 3D images using pre-existing images and stereo software for labeling. Stereo images can be incorporated into learning environments. These learning environments can be effective in assisting students in creating appropriate mental models of the spatial relationships that exist within a complex system such as human anatomy.


Chapter Three
Method

Purpose and Research Questions
The primary purpose of this study was to determine the effectiveness of implementing desktop 3-dimensional (3D) stereo images of human anatomy into an undergraduate human anatomy distance laboratory. In addition, user perceptions of the 2D and 3D images were gathered via questionnaire in order to determine ease of use and level of satisfaction associated with the 3D software in the online learning environment, as well as overall student perceptions of the two approaches. The research design for this study employed a mixed-methods approach. Questions 1 and 2 were addressed by an experimental design consisting of quantitative analysis of the undergraduates' test scores on the laboratory practical examinations of identification and spatial relationships of the skull bones and features. Question 3 was addressed with a questionnaire containing both quantitative and qualitative questions, in order to measure ease of use of the digital 3D imaging and overall user satisfaction, as well as to gather user perspectives on the 2D and 3D stereo images used in the study. The research questions are reiterated below.
1. Does the use of 3D stereo images result in significantly higher scores for undergraduate students, in learning the anatomy of the skull, when compared to 2D images of the same structures, as measured by scores on a practical examination of identification?


45 2. Does the use of 3D stereo images result in significantly higher scores for undergraduate students, in learning the anatomy of the skull, when compared to 2D images of the same structures as measured by scores on a practical examination of spatial relationships? 3. Are the 3-dimensional digital stereo-images of human anatomy easy to use and to comprehend, and what are the students’ perceptions of them as determined by a questionnaire in a sample of undergraduates? Design Changes Due to Pilot Data Initially, a research design that included three groups (3D, 2D and hands-on) rather than two (3D and 2D), and eight instruments rather than five was considered. After conducting a pilot test in the fall of 2005 (Appendix E), it was decided that a few changes needed to be made to the design of the study as well as to the instruments used within the study. In particular, there were five areas within the design that needed to be addressed and were subsequently changed for purposes of this study. The first issue concerned the list of structures the students used to study the anatomy. It was thought that the list may have been too long and would need to be condensed to accommodate the undergraduate nursing population. The original structure list consisted of approximately 87 structures. Duplicate terms were deleted and the list was then validated by two Professors of human anatomy who have taught at a total of six Universities and Community Colleges in the states of Florida and Georgia. Their audiences consisted of pre-nursing undergraduates, allied health as well as biology students. The instructors


agreed that the 80 structures identified on the list (Appendix B) were appropriate for undergraduate nursing students. The second issue concerned using two groups rather than three in the treatments, and having identical narration and labeling for the PowerPoint movie files. For the pilot study, the PowerPoints had slightly different images and consequently different narration, which may have introduced extraneous variables. Subsequently, the newly created PowerPoints each had identical images, narration, and animation. As for the use of three treatment groups, it was determined that the third group (hands-on) was unnecessary for this study, as this approach is not often used in pre-nursing anatomy and physiology courses. Therefore, this study contained only two treatment conditions: exposure to 2D and 3D images. The third issue concerned combining the three previous user perspective questionnaires into one concise questionnaire that contained more focused items; the single questionnaire was then administered to both groups (Appendix D). The fourth issue concerned the need for all instruments to be re-assessed and piloted to ensure a range of responses. This was addressed in the spring and summer pilots (Appendices F, G, and H). Finally, it was also determined that the PowerPoint AVI movie files were to be reviewed by multiple experts in the fields of anatomy and instructional technology for correspondence to Mayer's (1989, p. 59) seven criteria for creating effective conceptual models.

Mayer's Criteria
Mayer (1989, p. 59) lists seven criteria for how a conceptual model, or mental model, should be used in instruction to "foster student understanding". Mayer felt that


47 the following criteria were critical; (a) Complete--It contains all the objects, states, and actions of the system, ( b) Concise--It contains just enough detail, (c) Coherent--It makes ‘intuitive sense’, (d) Concrete--It is presented at an appropriate level of familiarity, (e) Conceptual--It is potentially meaningful, (f) Correct--The objects and relations in it correspond to actual objects and events, (g) Considerate--It uses appropriate vocabulary and organization. One practicing instructional designer, one instructional technology instructor and three instructors of human anatomy were asked to review the 3D PowerPoint and to indicate which, if any, of the seven criteria they felt were identified in the treatments. As is evidenced in Table 2, most of the reviewers indicated that the PowerPoint met the majority of Mayer’s criteria, thereby indicating that the PowerPoint contained most of the necessary elements, according to Mayer, to create an effective mental model for the students to learn human anatomy.


Table 2. Mayer's criteria for mental models

Criterion (reviewers, in order: Instructional Designer; Instructional Technology Instructor; Health Sciences Instructor; Health Sciences Instructor; Health Sciences Instructor)
(a) Complete - it contains all the objects, states, and actions of the system: Yes; Yes; Yes; No; Yes
(b) Concise - it contains just enough detail: Yes; Yes; Yes; Yes; Yes
(c) Coherent - it makes 'intuitive sense': Yes; Yes; Yes; Yes; No
(d) Concrete - it is presented at an appropriate level of familiarity: Yes; Yes; Yes; Yes; Yes
(e) Conceptual - it is potentially meaningful: Yes; No; Yes; Yes; Yes
(f) Correct - the objects and relations in it correspond to actual objects and events: Yes; Yes; Yes; Yes; Yes
(g) Considerate - it uses appropriate vocabulary and organization: Yes; Yes; Yes; Yes; No


49 Mayer’s Principles of Design Mayer’s (2001, p. 184) principles of design were also used to develop the PowerPoint presentations and the AVI files to which the students had access via Blackboard. The principles of design, according to Mayer, are as follows: (a) Multimedia Principle--Students learn better from words and pictures than from words alone, (b) Spatial Contiguity Principle--Students learn better when corresponding words and pictures are presented near rather than far from each other on the screen, (c) Temporal Contiguity Principle--Students learn better when corresponding words and pictures are presented simultaneously rather than successively, (d) Coherence Principle--Students learn better when extraneous words, pictures, and sounds are excluded rather than included, (e) Modality Principle--Students learn better from animation and narration than from animation and on-screen text, (f) Redundancy Principle--Students learn better from animation and narration than from animation, narration, and on-screen text, (g) Individual Difference Principle--Design effects are stronger for low-knowledge learners than for high-knowledge learners and for high-spatial learners rather than for low-spatial learners. Sequence of Procedures After completing a second pilot test in the spring of 2006 (Appendix F and G) and a third pilot test in the summer of 2006 (Appendix H) to re-assess all instruments with two rather than three groups, the final design and sequence of procedures (Table 3) was conducted as outlined in this chapter.


Table 3. Sequence of procedures

Procedure (how administered)
1. Pre-test (via Blackboard, prior to group assignments)
2. Informed consent (via Blackboard)
3. Demographic questionnaire (via SurveyMonkey)
4. Volunteers assigned to groups (randomly stratified based upon pre-test scores)
5. Administer learning materials
   5A. Group A, 2D standard and narrated PowerPoint with study guide (via Blackboard for one week)
   5B. Group B, 3D standard and narrated PowerPoint with study guide (via Blackboard for one week)
6. Assessment
   6A. All volunteers, identification exam (histology lab)
   6B. All volunteers, spatial relationships exam (histology lab)
7. All volunteers administered user questionnaires for 2D and 3D groups (via SurveyMonkey)
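Step 4 of Table 3 stratifies volunteers into matched pairs by pre-test score and randomly splits each pair between the two treatments, a procedure described in detail later in this chapter. The sketch below illustrates that pairing logic only; the volunteer identifiers and scores are hypothetical, and the actual assignment in the study was performed by the researcher rather than by this script.

import random

def stratify_pairs(pretest_scores, seed=2006):
    """Rank volunteers by pre-test score, pair consecutive scorers, and randomly
    assign one member of each pair to group A (2D) and the other to group B (3D)."""
    rng = random.Random(seed)
    ranked = sorted(pretest_scores, key=pretest_scores.get, reverse=True)
    assignment = {}
    for i in range(0, len(ranked) - 1, 2):   # consecutive scorers, two at a time
        pair = [ranked[i], ranked[i + 1]]
        rng.shuffle(pair)
        assignment[pair[0]] = "A (2D)"
        assignment[pair[1]] = "B (3D)"
    return assignment

# Hypothetical volunteers and pre-test scores (out of 25)
scores = {"vol01": 22, "vol02": 19, "vol03": 18, "vol04": 15, "vol05": 14, "vol06": 11}
print(stratify_pairs(scores))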


51 Variables The independent variables are the instructional material treatments as defined by 2D or 3D, while the dependent variables, or outcomes, are the test scores on the measures of identification and understanding of spatial relationships on the laboratory practical examination. An effort was made to maintain independence among the treatment groups by conducting the instruction over a short period of time and by separating the group materials online. In addition, students were encouraged to work independently when studying the materials, and were assigned times to arrive for the laboratory practical examination so that all students were not in the laboratory at the same time. Student times for the practical examination were staggered so that there was approximately 20 minutes between the time that one group finished and the next group arrived. Each individual examination took no longer than one hour, and each examination accommodated 30 students at one time. Instruments A total of five instruments and tools were utilized for this study in order to gather both quantitative and qualitative data. Tools and instruments are listed in Table 4 and subsequently described in detail.


Table 4. Instruments, tools, and groups used in the study

Instrument / Tool (Location; Group)
Demographic questionnaire (Appendix A; all volunteers)
Pre-test baseline (all volunteers)
Study guide list of structures and questions (Appendix B; all volunteers)
2D PowerPoint and AVI (Group A)
3D PowerPoint and AVI (Group B)
Identification answer key (Appendix C; all volunteers)
Relationship answer key (Appendix C; all volunteers)
User perspective questionnaire (Appendix D; all volunteers)

Demographic Questionnaire
The first instrument, the Demographic Questionnaire (Appendix A), was administered to the volunteers immediately after they reviewed and signed the informed consent form. The Demographic Questionnaire consisted of ten questions, including name, age range, prior human anatomy course experience, prior dissection experience, primary area of study, and comfort level and proficiency with computers. This questionnaire was delivered digitally and confidentially via a link to the SurveyMonkey.com website. The questionnaire had been validated by faculty members of the Departments of Secondary Education and Education Measurement and Research.


One faculty member from each department was asked for comments and suggestions as to the content of the questionnaire, and content was changed according to the suggestions made.

Pre-test Baseline
The second instrument used was the Pre-test Baseline test. It consisted of 25 multiple-choice questions related to human anatomy. Each question was validated by a professor of anatomy for accuracy and relevance; the professor also assisted in choosing images felt to represent a wide range of anatomical knowledge. All images used within the pre-test were derived from a database of images from Grant's Dissector (Sauerland, 1999). Each pre-test question presented an image of a specific region of human anatomy labeled with a red arrow, and the volunteer was to choose the structure indicated from a list of four responses. The pre-test originally consisted of 30 multiple-choice questions on human anatomy, in which volunteers were asked to choose the best answer describing the structure to which a red arrow pointed on a variety of 2D human anatomy specimens. Questions were removed from the pre-test if more than 95% of volunteers got the answer correct or incorrect. A total of five questions was removed, and Cronbach's alpha was computed (Table 5) for the resulting 25-question pre-test.
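Both the item screening described above and the reliability estimate reported in Table 5 are straightforward to reproduce. The following sketch assumes a volunteers-by-items matrix of 0/1 scores and shows only the raw (unstandardized) form of Cronbach's alpha; the data in the example are randomly generated and purely illustrative.

import numpy as np

def drop_extreme_items(responses, threshold=0.95):
    """Drop items answered correctly (or incorrectly) by more than 95% of volunteers."""
    p_correct = responses.mean(axis=0)
    keep = (p_correct <= threshold) & (p_correct >= 1 - threshold)
    return responses[:, keep]

def cronbach_alpha(responses):
    """Raw Cronbach's alpha for a volunteers-by-items matrix of 0/1 scores."""
    k = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)
    total_variance = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 0/1 response matrix: 60 volunteers by 30 pre-test items
rng = np.random.default_rng(0)
responses = (rng.random((60, 30)) < 0.6).astype(float)
screened = drop_extreme_items(responses)
print(f"{screened.shape[1]} items retained, raw alpha = {cronbach_alpha(screened):.2f}")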


Table 5. Cronbach coefficient alpha for pre-test baseline

Variables: Cronbach Coefficient Alpha
Raw: 0.72
Standardized: 0.73

List of Structures and Relationships
The study guide list of structures and relationships (Appendix B) that the students were responsible for learning was developed from Grant's Dissector (Sauerland, 1999). A professor of anatomy from the Health Sciences Center with 25 years of teaching experience determined which structures to include on the list and which questions would best represent spatial relationships within the human skull. In addition, the professor determined which images to include in both the 2D and 3D PowerPoints as representative structures that undergraduate nursing and other allied health students would need to learn.

PowerPoints
The PowerPoints and AVI movies were developed according to Mayer's criteria (Table 2) and narrated to encompass all structures found within the study guide list for the skull. The 2D images were taken from Grant's Atlas of Anatomy (2005), as well as from appropriate Bassett Collection Atlas 2D images. The 3D stereo images were derived from the Bassett collection of stereo images or were created by taking digital images of the skull and superimposing them using the Pokescope software to gain the 3D


55 effect. Images were created if appropriate images were not found within the Bassett Collection Atlas. The skull images were manipulated to get the best focus possible for the 3D stereo images. The images were then cropped, if necessary, in order to highlight specific regions that demonstrate depth. Cropping the images tends to reduce eye strain, as the eyes are focused on one specific area, rather than on a larger area that may not be completely in focus. The background of each image was changed to black using Adobe Photoshop CS in order to enhance the 3D effect and to lessen eye strain. The 2D and 3D images were labeled and animated with the tools common to PowerPoint, and each PowerPoint was identically narrated by a professor of anatomy with over 25 years of experience teaching human anatomy to medical students. The professor began the process by narrating the 2D PowerPoint as he maneuvered through the slides highlighting important areas with PowerPoint highlighting tools. In addition, he pointed out appropriate text labels while he pronounced the anatomical terms. The narrated PowerPoints were converted using TechSmith’s Camtasia software into AVI movie files. Once the narration was complete and accurate, the professor of anatomy held the speakers of one computer over the microphone of another while he once again maneuvered through the 3D PowerPoint at the same speed while highlighting the same structures. In this way, he highlighted the same areas as in the 2D PowerPoint, and each standard and AVI movie resulted in identical images and narration. The 2D PowerPoint had the same sequence and images as the 3D PowerPoint, due to the fact that only one of the stereo pair images was used in the 2D PowerPoint, and the complete 3D stereo images were used in the other. The same narration used for the 2D PowerPoint was also


used with the 3D PowerPoint. Each AVI movie file was posted on Blackboard for a period of one week for student review.

Identification Examination
The Identification Examination (Appendix C) consisted of 15 identification questions for the gross anatomy laboratory practical, chosen by two professors of anatomy. Questions were derived from the study guide list of structures and relationships (Appendix B). An anatomy professor obtained the skulls from the Health Sciences Center, identified which skull best represented a particular feature, and then labeled the structure directly on the skull with a pointed piece of tape; tape was used so that the pointer did not move during the examination. Prosected skull material was provided for the practical examination rather than permitting students to conduct their own dissections. This was done because of the shortage of available resource materials for human dissection and because, unless dissections are performed accurately, structures tend to be damaged during a poor dissection, leaving students with inadequate material to work with. The use of prosected materials mitigated this problem. Students were stationed individually in front of each skull and were given one minute to correctly identify the structure that was labeled on the skull. Students chose the number for their answer from the study guide list of structures (Appendix B); there was no need to write down the complete name of the structure, only the number. In this way, students were not graded on spelling. The same list of numbered structures the students had used to study was available beside each skull and question. Once all volunteers were stationed in front of a test question that included a skull specimen(s) and the


structure list, they were given one minute to choose and write down the correct answer from the list. After one minute, a timer sounded, signaling the students to move ahead in the sequence to the next question.

Relationship Examination
The relationship examination took place at the same time as the identification examination. The skulls were labeled by the same professor of anatomy, but the questions pertained to the relationships that existed within and between structures. The questions were designed to determine whether the students could identify how various structures and features interdigitated with one another, and they were taken directly from the study guide list of terms and questions (Appendix B). Students were given 15 questions, with one minute per question. Again, the same list of structures (Appendix B) was available as a reference for the volunteer, and the volunteers moved in tandem after the appropriate time had passed. Each skull may have had more than one question associated with it. With this design, one complete set of 30 volunteers had the opportunity to take both the identification and the relationship examinations simultaneously in approximately 45 minutes, which included time for orientation to the examination. Scores obtained for the two groups of volunteers were used to calculate Cronbach's alpha (Table 6) for the identification and relationship questions.


Table 6. Cronbach coefficient alpha for identification and relationship questions

Variables: Cronbach Coefficient Alpha for ID / Cronbach Coefficient Alpha for Rel
Raw: 0.798 / 0.821
Standardized: 0.797 / 0.826

User's Perspective Questionnaire
The User's Perspective Questionnaire (Appendix D) consisted of one set of questions designed to gather the 2D and 3D users' perspectives on their imaging materials. Answers to the questionnaires were compared across groups. The questionnaires were delivered digitally via a link to the SurveyMonkey.com website. Students were asked to complete the brief, 5- to 6-item questionnaire after completing the examination and were reminded that they would not receive their score until the questionnaire had been completed. A reminder notice with the questionnaire's uniform resource locator (URL) was also posted to the Blackboard announcement feature to remind those who had not yet completed the questionnaire to please do so.

Design
Permission was granted to collect data during the fall semester of 2006 in two different sections (.050 and .001) of BSC2085, an undergraduate Anatomy & Physiology course offered through the College of Nursing. Both sections were the laboratory portion of the Anatomy & Physiology course, as opposed to the lecture section, and both were taught by the same professor from the College of Nursing. This laboratory course is


59 a required one-credit laboratory for all those aspiring to be admitted to nursing school, as well as other allied health programs. The study was conducted during the third module of each course, which focused on the study of the skeletal system. The study was conducted during this time for several reasons: (a) so that students would have been oriented to the course structure, (b) because the content of the study was pertinent to what they were learning at the time, and (c) because at this point in the semester students become aware of the amount of work necessary on their part in order to do well on examinations. Students in both sections were advised that their score on the laboratory practical would represent 25% of their laboratory grade on the skeletal system. They were also informed that they had the option of including or not including their data in the research study. They were also notified that all data would be normed so that if one group outperformed the other, grades would be adjusted accordingly. An identical announcement was posted to the Blackboard site of each section informing students of the steps to follow if they chose to include their data in the study. Required Steps Students were encouraged to attend an orientation session in which details of the study were outlined. Those students who were assigned to the 3D group also received their 3D glasses at this orientation. A PowerPoint presentation was prepared and presented at all orientation sessions that explained in detail the students’ responsibilities, the schedule of procedures, where to go to find the study guide and PowerPoint and AVI file within their BlackBoard course, as well as contact information if they had additional


60 questions. A total of five sessions was held over the course of two days in order to accommodate the students’ schedules. The same PowerPoint that was presented at the orientation sessions was posted to each course section within Blackboard, so that anyone who could not attend one of the orientation sessions, or who wanted to review the requirements, was able to have that information within Blackboard. All materials for the study were located within a tab called “Skull Materials” within the respective sections of the course. Students were instructed to study independently, without any other information they may have had available to them. Students were informed that they had one week to study and learn the online materials. They were encouraged to take the study seriously in order to gain as much credit as possible toward their course grade. The Pre-test The pre-test had been administered prior to the orientation sessions because the instructor of the course decided he wanted information on how the students might perform and felt that the pre-test could provide a good measure. The pre-test was designed to rank the students based upon what, if any, prior anatomical knowledge the students brought to the course. The pre-test was administered the week following the drop-add period, and it was required of all students. It was not graded and students were informed of that fact. Because the pre-test was administered prior to the orientation session, those students indicating they would volunteer for the study by signing the online informed consent form had already been stratified into either group A (2D) or B (3D). For example, the two highest scorers on the pre-test were assigned to groups A and B via


61 random assignment. The next highest two scorers were likewise randomly assigned to groups A and B. This pattern continued until all students had been assigned to a group. This type of stratified assignment was done to retain randomization and power for the statistical tests. It also ensured equal sample sizes. Acquiring 3D Glasses Glasses needed to view the 3D PowerPoint and AVI movie file were given to those in the 3D group at the end of each orientation session. In addition, students were asked to sign-up for a test time during the following week. Those who did not show for the orientation were sent an email to inquire as to whether or not they planned to participate in the study. In addition, if they were interested in participating in the study, and were in the 3D group, they were asked to pick up the necessary 3D glasses at the offices of their respective instructor. Additional Requirements Informed consent forms were posted online. All those agreeing to have their data included in the study were asked to click on a link found on the last page of the informed consent form. The link took them to an agreement that they could digitally sign. After the consent form was acknowledged, the student was branched to the initial demographic questionnaire (Appendix A), which was created using SurveyMonkey, an online survey management tool. The “Adaptive Release Criteria,” as established within the Blackboard CMS, was utilized within the sections to ensure that students in group A (2D) were given access only to the 2D PowerPoint and 2D AVI movie file in Blackboard, and group B students were given access only to the 3D PowerPoint and 3D AVI movie file. Both


62 groups had access to the study guide. Please refer to Table 3 for the sequence of procedures that were followed. The Practical Examination After one week, volunteers were asked to return to the College of Nursing in order to take the laboratory practical examinations. Because of the number of student volunteers (138 had registered for the study from sections .050 and .001) seven practical exam sessions were held over a period of two days for the students in those sections. Each group was given the same ten minute orientation to the examination after they entered the exam area. Each student was then stationed in front of a test question that included a skull specimen(s) and the same study guide list of structures and relationship questions that had been provided to them via Blackboard during the previous week. Students were given one minute to correctly identify the structure(s) indicated on the skull specimen before advancing to the next question. Sample Size According to Stevens (2002) for a MANOVA with two groups, sample sizes should contain 98 in each group with an alpha of .05, to achieve the optimum power of .80, assuming a small effect size. Therefore, the total sample size needed to consist of at least 196 students (98 students per each of the two groups). Ultimately, 29 pairs from section.001, and 33 pairs from section .050 completed the study. After running an independent samples t-test, assuming equal variances, on responses for the two groups on measures of identification and relationship, it was determined there was no statistical significance between the means for the two sections; pooled variance for variables


identification and relationship (0.6550, p > .05 and 0.8371, p > .05, respectively). The data from the two sections could therefore be combined for the study, giving a total sample size of 62 pairs for the combined data analysis. This number is less than the requisite 98 pairs recommended for a MANOVA to achieve power of .80, assuming a small effect size. Attrition is discussed in Chapter Four.
Data Analysis for Questions 1 and 2
In order to address the quantitative questions, a doubly-multivariate repeated measures (Doubly-MANOVA) design was conducted. This model tested the differences between the two treatment group means, 2D and 3D, on the two outcome variables of identification and relationship. This method controlled the Type I error rate across both measures. There were a number of assumptions that had to be met in order to appropriately perform a Doubly-MANOVA. The assumptions, which are addressed in detail in Chapter Four, included: a normal distribution of the observations on the dependent variables, independence of observations on both dependent variables, and equal covariance matrices for the dependent variables (Stevens, 2002).
Data Analysis for Question 3
Frequencies were determined for the various responses to the Likert questions within the User Perspective Questionnaire (Appendix D). Additionally, within the questionnaire, there were two open-ended questions asking students to list what they liked most about the method they used, and what they would change regarding the method. The open-ended answers were analyzed for frequency and emergent themes. Two raters, consisting of the researcher and an Assistant Professor of Instructional


64 Design categorized the open-ended responses independently. Inter-rater agreement for the first analysis of the comments was 75% for the first question and 89% for the second. Categories were resolved by comparing notes and discussing interpretations. Categories were then combined if possible for the two questions. Inter-rater reliability for the second iteration of the final instrument was 100% for the first question and 100% for the second question. Examples of the themes and their frequencies are listed in more detail within the Results section. Chapter Four will describe all results obtained.
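For illustration, the percent inter-rater agreement described above can be computed directly from the two raters' category assignments. The sketch below is hypothetical: the category labels and comment codes are invented for the example and are not the study's actual coding data.

```python
# Hypothetical sketch: percent agreement between two raters who each assigned
# one theme category to every open-ended comment.
def percent_agreement(rater_a, rater_b):
    """Proportion of comments coded identically by both raters, as a percent."""
    assert len(rater_a) == len(rater_b)
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return 100.0 * matches / len(rater_a)

# Made-up codes for eight comments (categories are illustrative only).
rater_1 = ["convenience", "labels", "eye strain", "convenience",
           "image quality", "realistic", "labels", "convenience"]
rater_2 = ["convenience", "labels", "eye strain", "image quality",
           "image quality", "realistic", "labels", "narration"]

print(f"Agreement: {percent_agreement(rater_1, rater_2):.0f}%")  # 75%
```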


65 Chapter Four Results Demographics Demographic data were obtained from participants in each section of the course through the demographic survey (Appendix A) that all volunteers took prior to being randomly assigned to treatment groups. Demographic data were nearly identical across the sections (Table 7 and 8). The majority of all volunteers across sections and groups were primarily 18-24 years of age (97.14% and 100% for section .05 and .001 respectively). Regarding whether or not the volunteers had a human anatomy course prior to this one, answers were split (48.57% and 44.44% indicating “no” for sections .05 and .001, while 51.42% and 55.55% indicated “yes” for those same sections) with slightly more indicating that they had had a prior course. Of those indicating a prior course in human anatomy, the vast majority had taken that course less than five years ago (45.71% for section .05 and 51.85% for section .001). In addition, 94.28% of those in section .05 and 85.18% from section .001 indicated that they had not previously utilized human anatomy software. Of those that did, most could not remember the name of the software (2.85% of those in section .05 and 3.70% in section .001). More of the volunteers (34.28% and 62.96% for sections .05 and .001 respectively) were pre-nursing students, and secondarily pre-med students (25.71% and 14.81% for sections .05 and .001), however there were a variety of majors represented in each section. Some of these other areas of study included exercise sciences, athletic training, public health and psychology. Most volunteers had prior dissection experience (88.57% for section .05 and


66 81.48% for section .001), and gained that experience from a high school general biology course (77.14% for section .05 and 81.48% for section .001). Most students accessed the course via computers that were between 1 and 3 years old (65.71% and 77.77% for sections .05 and .001 respectively), and they had confidence in their computer proficiency, with most indicating an advanced level of proficiency with various web browsers, email, instant messaging, and word processing (all percentages for both sections above 50%). They felt slightly less proficient, however, in the areas of spreadsheets and presentation software. Section .05 volunteers indicated 51.42% intermediate for spreadsheets and 45.71% intermediate for presentation software, while those volunteers in section .001 indicated 48.14% intermediate for spreadsheets and 66.66% intermediate for presentation software.


67 Table 7. Demographic results for section .05; groups 2D and 3D Question Response Response Total Percentage Response Total Percentage 2D 3D Please indicate your age range 18-24 34 97.14% 35 92.10% 25-30 1 2.80% 2 5.26% 31-35 0 0 1 2.63% Have you had a Human anatomy course prior to this one? No 17 48.57% 14 36.84% Yes 18 51.42% 24 63.15% If yes, how long ago was the course? < 5 years ago 16 45.71% 21 60% 5 years ago or more 2 5.71% 3 7.89% Have you had any experience prior to this class with any human anatomy software? No 33 94.28% 34 89.47% Yes 2 5.71% 4 10.52% If so, which software did you use? Primal Pictures 0 0 1 2.63% I don’t remember the name 1 2.85% 3 7.89% Other 0 0 1 2.63% Please indicate your area of study. Nursing 12 34.28% 17 44.73% Speech disorders 0 0 1 2.63 Pre-med 9 25.71% 8 21.05% Other 13 37.14% 11* 28.94% Have you had a course prior to this class in which you dissected biological materials? No 4 11.42% 6 15.78% Yes 31 88.57% 32 84.21%


68Question Response Response Total Percentage Response Total Percentage 2D 3D High School General Biology 27 77.14% 27 71.05% Undergraduate Biology 5 14.28% 7 18.42% Other 10 28.57% 9 23.68% How old is the computer you will use most of the time to access this course? Less than one year 10 28.57% 10 26.31% 1-3 years old 23 65.71% 24 63.15% Greater than 4 years 2 5.71% 3 7.89% Please rate your level of proficiency using the following software: Web browsers Beginner 0 0 1 2.63% Intermediate 12 34.28% 15 39.47% Advanced 23 65.71% 21 55.26% Email Beginner 0 0 1 2.63% Intermediate 8 22.85% 11 28.94% Advanced 27 77.14% 26 68.42% Instant messaging/chat Beginner 3 8.57% 2 5.26% Intermediate 10 28.57% 15 39.47% Advanced 21 60% 21 55.26% Word processing Beginner 0 0 1 2.63% Intermediate 15 42.85% 19 50% Advanced 20 57.14% 18 47.37% Spreadsheets Beginner 12 34.28% 14 36.84% Intermediate 18 51.42% 18 47.37% Advanced 5 14.28% 6 15.78%


69Question Response Response Total Percentage Response Total Percentage 2D 3D Presentation software Beginner 9 25.71% 8 21.05% Intermediate 16 45.71% 22 57.89% Advanced 9 25.71% 8 21.05% Table 8. Demographic results for section .001; groups 2D and 3D Question Response Response Total Percentage Response Total Percentage 2D 3D Please indicate your age range 18-24 27 100% 26 89.65% 25-30 0 0 2 6.89% 31-35 0 0 1 3.44% Have you had a Human anatomy course prior to this one? No 12 44.44% 14 48.27% Yes 15 55.55% 15 51.72% If yes, how long ago was the course? < 5 years ago 14 51.85% 13 44.82% 5 years ago or more 1 3.70% 3 10.34% Have you had any experience prior to this class with any human anatomy software? No 23 85.18% 27 93.1% Yes 3 11.11% 2 6.89% If so, which software did you use? Primal Pictures 1 3.70% 0 0 I don’t remember the name 1 3.70% 2 6.89% Other 1 3.70% 0 0


70 Question Response Response Total Percentage Response Total Percentage 2D 3D Please indicate your area of study. Nursing 17 62.96% 12 41.37% Speech disorders 1 3.70% 2 6.89% Wellness 2 7.41% 1 3.44% Pre-med 4 14.81% 3 10.34% Other 3 11.11% 9 31.03% Have you had a course prior to this class in which you dissected biological materials? No 5 18.51% 5 17.24% Yes 22 81.48% 24 82.75% If yes, indicate all that apply. Middle School honors program 5 18.51% 8 27.58% High School General Biology 22 81.48% 20 68.96% Undergraduate Biology 4 14.81% 3 10.34% Other 3 11.11% 6 20.69% How old is the computer you will use most of the time to access this course? Less than one year 5 18.51% 2 6.89% 1-3 years old 21 77.77% 24 82.75% Greater than 4 years 1 3.70% 2 6.89% Please rate your level of proficiency using the following software: Web browsers Intermediate 15 55.55% 8 27.58% Advanced 12 44.44% 19 65.52% Email Intermediate 11 40.74% 7 24.13% Advanced 16 59.25% 21 72.41%


71 Question Response Response Total Percentage Response Total Percentage 2D 3D Instant messaging/chat Beginner 1 3.70% 1 3.44% Intermediate 10 37.03% 6 20.69% Advanced 16 59.25% 19 65.52% Word processing Beginner 1 3.70% 0 0 Intermediate 10 37.03% 9 31.03% Advanced 16 59.25% 19 65.52% Spreadsheets Beginner 11 40.74% 8 27.58% Intermediate 13 48.14% 14 48.27% Advanced 2 7.41% 4 13.79% Presentation software Beginner 4 14.81% 8 27.58% Intermediate 18 66.66% 11 37.93% Advanced 5 18.51% 8 27.58% Attrition After the drop/add period of registration, the total enrollment was 92 students for section .050, and 68 students for section .001. Based upon the numbers of students who completed the pre-test, there were originally 40 groups randomly assigned for section 0.50 and 30 groups assigned for section .001. All 140 volunteers were randomly assigned to groups based upon the Pre-test scores. A total of 69 students attended the orientation sessions for sections 0.50, and 58 students attended the orientations for section .001, for a total of 127 students that completed an orientation session. Students who did not attend an orientation session were permitted to participate if they completed


the informed consent and picked up their 3D glasses. Students who were not paired, who did not pick up their 3D glasses, or who did not complete the informed consent form were permitted to take the laboratory practical examination; however, their scores and those of their group mates were eliminated from the study. Only four pairs of the 3D glasses were not picked up during the orientation sessions or from the nursing instructor. Scores from a total of eight groups were dropped from the study for reasons outlined in Table 9. A total of 33 groups from section .050 and 29 from section .001 completed the study. This resulted in 62 pairs available for analysis.
Observations of Students
During the course of the study, a number of issues arose regarding students' ability to complete the study. For some students, the dates and times for the orientation sessions did not work with their employment schedules, and they therefore chose not to participate. Some students had to attend to family emergencies, and some decided not to participate for reasons unknown. For those who completed the pre-test and were assigned to a group, Table 9 provides demographic information, when available, for the volunteers from the eight groups deleted from the study.


73 Table 9. Demographics on volunteers dropped from study Pre-test score Group Reason Age Prior human anatomy experience Study area Prior dissection experience Computer proficiency 18 A no pair 18-24 No Physical therapy No Advanced 18 B No I.C. N.S. No dem. No dem. no dem. no dem. No dem. 16 A no pair 18-24 No pre-med Yes Advanced 16 B N.S. 25-30 No Nursing No Beginner 14 B no pair 18-24 Yes Nursing Yes Intermediate 13 A N.S. 18-24 Yes Exercise Yes Advanced 13 A N.S. 18-24 No Psych Yes Intermed 13 B no pair 18-24 Yes Nursing Yes Intermed 12 B no pair 25-30 No Physician assistant Yes Advanced 12 A N.S. No dem. No dem. no dem. no dem. No dem. 12 A N.S. 18-24 Yes nursing Yes intermed. 11 B N.S. 18-24 Yes Nursing Yes Advanced 11 B no pair 18-24 Yes nursing Yes Intermed 11 A No I.C./N.S. No dem. No dem. no dem. no dem.. No dem. 9 B no pair 18-24 No Biochem Yes Intermed. 9 A No I.C. N.S. No dem. No dem. no dem. no dem. No dem. Note; (No I.C.) = no informed consent completed for that student, (N.S.) = the student didn’t show for the exam, (no dem.) = no demographic information is available for that student and (no pair) = data for that volunteer was deleted because they did not have a group mate.


Note that some students completed the pre-test but did not complete the demographic questionnaire. Note also that if one volunteer from a group did not show, as indicated by the N.S. in Table 9, their group mate's scores were automatically deleted from the study. This is indicated with the "no pair" notation. From the data obtained, half of the students who did not participate were nursing students and the other half were from a variety of fields, including physical therapy, pre-med, physician assistant, biochemistry, and psychology. Eight were assigned to group A, and eight were assigned to group B. The majority of those who dropped, or were taken out of the study, were within the 18-24 year old age range; half had some prior human anatomy experience, but most had done dissection before. Eleven of the 12 for whom demographic information was available indicated either intermediate or advanced computer proficiency. During the course of the study, students sent email with questions about various aspects of the study. During the two-week period that included the orientation sessions and the time available for studying the material, 90 emails were received. The majority of questions were in reference to the orientation sessions. Seventeen students could not attend any of the sessions, seventeen students emailed that they could attend, and eight students needed clarification on where the orientation sessions were to be held and how long they would last. Nineteen students had issues with their computer locking up, losing connectivity, or having difficulty submitting their demographic questionnaire. Another eleven students could not find the "Skull Materials" tab that contained their study materials; in this instance, students were looking in the wrong Blackboard course for the material. Nine students asked for the answers to the relationship questions and three could not participate and wanted to know if they could


75 take a make-up examination. The make-up requests were denied due to the time required to set-up for the examination. Six students became sick or had a death in the family that prohibited them from participating in the study. Quantitative Results Students were given a laboratory practical examination that consisted of 15 identification (ID) questions and 15 relationship (Rel) questions (see Appendix C). The practical examination took each group 30 minutes to complete, since the students had one minute to answer each of the 30 questions. The identification questions involved the student choosing the correct number from the study guide, Appendix B, which corresponded with the structure that was being pointed to by a green arrow. Relationship questions asked the students to choose the correct number or numbers from the same study guide that best described the relationships between various bones and features of the human skull (Appendix C). Exams were scored using the key (Appendix C) so that each of the 30 questions was worth one point. A total of 15 points was possible for the identification questions and 15 points were possible for the 15 relationship questions. The entire laboratory practical examination was worth 30 points. For the relationship portion of the examination, three questions had more than one part to the answer. Questions 19 and 26 each had two answers, therefore each answer was worth half of a point, and question 20 had four answers to it. Each answer for that question was worth one quarter of a point. Descriptive statistics for scores on the identification and relationship subtests are reported in Table 10. These data suggest that the mean scores for the 3D groups, for both


variables of identification and relationship, were consistently higher than those of the 2D groups for both sections. The 2D group scores demonstrated slight negative kurtosis: -0.56 for the ID subtest scores and -0.86 for the relationship subtest scores. Skewness for the 2D scores was -0.39 for the ID subtest and 0.13 for the relationship subtest. The 3D group demonstrated kurtosis of 0.41 for the ID subtest scores and -0.23 for the relationship subtest scores. In addition, the 3D group demonstrated slight negative skewness for both ID scores (-0.85) and relationship scores (-0.67). Overall, score distributions for each treatment group on the identification and relationship subtests were normal with no outliers. Univariate plots of the 2D and 3D scores for the identification and relationship variables are shown in Figures 1 and 2.

Table 10. Descriptive statistics on test scores by group
Subtest          Group   Mean    SD     Skewness   Kurtosis
Identification   2D      9.5     3.34   -0.39      -0.56
Identification   3D      10.19   3.31   -0.85      0.41
Relationship     2D      8.08    3.63   0.13       -0.86
Relationship     3D      9.45    3.46   -0.68      -0.23
Note: N = 124 (62 volunteers per group)
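As an illustration of how descriptive statistics of the kind shown in Table 10 can be produced, the sketch below computes the mean, standard deviation, skewness, and kurtosis for one set of subtest scores. The score array is made up for the example; scipy reports excess (Fisher) kurtosis, which appears to be the convention the negative values in Table 10 follow.

```python
# Illustrative sketch (hypothetical scores): descriptive statistics of the kind
# reported in Table 10, computed with scipy.
import numpy as np
from scipy import stats

scores = np.array([5, 7, 8, 9, 9, 10, 10, 11, 12, 13, 14, 6, 8, 11, 9])  # made-up data

print("Mean:    ", round(scores.mean(), 2))
print("SD:      ", round(scores.std(ddof=1), 2))        # sample standard deviation
print("Skewness:", round(stats.skew(scores), 2))
print("Kurtosis:", round(stats.kurtosis(scores), 2))    # excess (Fisher) kurtosis
```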


Figure 1. Univariate plot of identification scores for 2D and 3D [box plot not reproduced].


Figure 2. Univariate plot of relationship scores for 2D and 3D [box plot not reproduced].
Assumptions for Doubly Multivariate Repeated Measures Analysis
There were a number of assumptions that needed to be met in order to appropriately perform a Doubly-MANOVA. The assumptions included: multivariate normality of the observations on the dependent variables, independence of observations on both dependent variables, and equal covariance matrices for the dependent variables


(Stevens, 2002). The data did not appear to violate the assumption of multivariate normality [multivariate skewness χ²(4, N = 58) = 7.39, p = .1167; multivariate kurtosis z(lower) = -1.04, z(upper) = -0.65], although a statistically significant outlier was detected [F(2, 55) = 5.72, p = .006]. After verifying that the data associated with the significant outlier were accurate, the analysis was run again without the outlying observation. There were no substantive changes to the MANOVA results; therefore, the outlying observation was retained for all analyses. Independence was maintained as well as possible by ensuring that students in the 2D group had access only to the 2D materials within Blackboard. Likewise, the 3D group volunteers were given access only to the 3D materials online. Because the course site was password protected, volunteers could not gain access to the other group's materials unless they were working beside that student outside of class, or were given the password that would gain them access to the other materials. Lastly, results from Box's M test for homogeneity of covariance matrices did not provide evidence that the assumption of equal covariance matrices was violated [χ²(3, N = 58) = 1.98, p = .5776]; therefore, it appeared reasonable to conduct the planned analyses.
Doubly Multivariate Repeated Measures Results
The results of the doubly multivariate repeated measures design revealed a statistically significant difference in group means for the main effect of the treatment groups (2D vs. 3D) on both dependent measures of identification and relationship test scores (Table 11). The 3D group outperformed the 2D group on both dependent


measures (Wilks' lambda = 0.0479, p < .0001). However, there was no significant treatment-by-outcome interaction (Wilks' lambda = 0.9375, p = .1443) (Table 11). The absence of an interaction effect suggests that the treatment group differences are consistent across the two variables. When graphed, there is a clear visual difference between treatments, with the 3D group consistently outperforming the 2D group on scores of identification and relationship (see Figure 3); however, the test of the interaction indicates the lines are not significantly non-parallel.

Table 11. Wilks' lambda, F value and degrees of freedom
Effect               Wilks' Lambda   F value   Num DF   Den DF
Main effect          0.0479          596.74    2        60
Treatment*outcome    0.9375          2.00      2        60
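For readers unfamiliar with multivariate tests of group differences, the sketch below shows a simple one-way MANOVA on the two subtest scores using statsmodels. It is a simplified stand-in with an invented data frame, not the full doubly-multivariate repeated measures model reported above; the column names and scores are hypothetical.

```python
# Illustrative sketch (hypothetical data): a one-way MANOVA testing whether the
# 2D and 3D groups differ jointly on the identification and relationship scores.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.DataFrame({
    "group": ["2D"] * 4 + ["3D"] * 4,                    # treatment group
    "identification": [9, 10, 8, 11, 11, 12, 10, 13],    # made-up subtest scores
    "relationship":   [7, 9, 8, 8, 10, 11, 9, 12],
})

fit = MANOVA.from_formula("identification + relationship ~ group", data=df)
print(fit.mv_test())  # reports Wilks' lambda, Pillai's trace, etc. for the group effect
```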


Figure 3. Visual display of differences between means [bar chart of mean identification and relationship subtest scores for the 2D and 3D groups not reproduced].
Effect Size
Although a significant main effect was found, with the 3D group outperforming the 2D group on both dependent variables (Wilks' lambda = 0.0479, p < .0001), the magnitude of the difference was not clear from the significance test alone. With a large enough sample size, even very small differences can reach statistical significance. Therefore, the effect sizes (Table 12) were calculated in order to determine the size of the difference between the 2D and 3D treatment groups on the outcomes of identification and relationship scores.


Table 12. Effect sizes
Outcome          Effect size   Cohen's interpretation
Identification   0.215         small effect size
Relationship     0.359         small / medium effect size

From the Cohen's d values, the difference between the 2D and 3D treatment groups on the outcome of identification is small. The difference between the groups on the outcome of relationship is slightly larger, at 0.359. This is also apparent from the visual display of means in Figure 3: the relationship outcome rises more steeply than the identification outcome, although not significantly so. Another way to interpret these values is to construct a confidence interval (Table 13) around the effect size for each variable. Based on the 95% confidence intervals, the standardized difference between the means of the 2D and 3D groups for identification scores is estimated to lie somewhere between -0.136 and 0.56, and the standardized difference for relationship scores between 0.005 and 0.713. The confidence interval for the relationship scores does not include zero, indicating that the effect size for relationship is statistically significant. The confidence interval for identification scores does include zero; therefore, the effect size for identification is not significant.
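For readers who wish to see how such intervals can be obtained, the sketch below computes an approximate 95% confidence interval around a standardized mean difference using a common large-sample standard error approximation. With d = 0.215 and d = 0.359 and 62 volunteers per group, it produces intervals close to those in Table 13; the exact method used in the study is not specified here, so this is an approximation, not the study's computation.

```python
# Approximate 95% CI for a standardized mean difference (Cohen's d) between two
# independent groups, using the common large-sample standard error formula.
import math

def d_confidence_interval(d, n1, n2, z=1.96):
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

for label, d in [("identification", 0.215), ("relationship", 0.359)]:
    lo, hi = d_confidence_interval(d, 62, 62)
    print(f"{label}: d = {d}, 95% CI = ({lo:.3f}, {hi:.3f})")
# identification: roughly (-0.14, 0.57); relationship: roughly (0.00, 0.71)
```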


Table 13. Confidence intervals
Outcome          95% confidence interval
Identification   -0.136 to 0.56
Relationship     0.005 to 0.713

An item analysis was conducted on scores for the identification and relationship questions by group (Figures 4 and 5). From the histograms, it can be seen that both groups answered most questions similarly, although the 3D group outperformed the 2D group on 10 of the 15 identification questions and 11 of the 15 relationship questions.

Figure 4. Item analysis for identification questions by group [histogram of scores on identification questions 1-15 for the 2D and 3D groups not reproduced].


Figure 5. Item analysis for relationship questions by group [histogram of scores on relationship questions 16-30 for the 2D and 3D groups not reproduced].
Qualitative Results
Upon completion of the laboratory practical examination, a URL to the User Perspective Survey (Appendix D) was provided to the students involved in the study via an announcement in their respective Blackboard course section. A total of 139 students from both sections chose to answer the survey. Students were told they could not receive their course grade until the survey had been completed. A total of 133 students chose to answer the following two open-ended questions:
1. What did you like MOST about using the PowerPoint images?
2. What did you like LEAST about using the PowerPoint images?
The two open-ended questions from the User Perspective Survey (Appendix D) were analyzed for themes.


85 An instrument was developed to determine themes based upon an iterative process by which one rater developed possible categories based upon feedback. A second rater then categorized answers separately from the first. There were a number of themes that emerged from the qualitative open-ended questions. Themes were stratified to the different groups of 2D and 3D (Table 14), and categorized according to number and percentages reported. Tables 15 and 16 reflect miscellaneous comments that were made and that did not fit into any of the categories. Results show that the most common theme to emerge among the 2D and 3D groups for the question, “What did you like most about working with the PowerPoints?” was “convenience” (31.5% and 22.0%) for the 2D and 3D groups respectively. The 3D group, for the same question, stated they felt the PowerPoints were more realistic (18.6%) than did the 2D group (2.7%). When asked “What did you like least about working with the PowerPoints?” among the 3D group, the most common theme to emerge was “eye strain” (33.9%). Both groups also listed image quality (21.6% and 23.7%, for 2D and 3D respectively) as the one thing they did not like about the PowerPoints.
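To illustrate how theme counts and percentages of the kind reported in Table 14 can be tallied once each comment has been assigned a category, a minimal sketch follows. The category labels and response list are hypothetical examples, not the study's coded data.

```python
# Minimal sketch (hypothetical codes): tally theme frequencies and percentages
# for one group's coded open-ended responses.
from collections import Counter

coded_responses = ["convenience", "detailed labels", "convenience", "image quality",
                   "convenience", "dual images", "detailed labels", "narration"]

counts = Counter(coded_responses)
total = len(coded_responses)
for theme, n in counts.most_common():
    print(f"{theme}: {n} ({100 * n / total:.1f}%)")
```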


86 Table 14. Themes from qualitative open-ended questions What did you Like Most? What did you Like Least? Category 2D 3D Category 2D 3D Convenience 23 (31.5%) 13 (22.0%) Eye strain 1 (1.3%) 20 (33.9%) Detailed labels 12 (16.2%) 8 (13.5%) Image quality 16 (21.6%) 14 (23.7%) Dual images 11 (14.8%) 2 (3.3%) Lack of depth perception 11 (14.8%) 3 (5.0%) Image quality 8 (10.8%) 4 (6.7%) Not real enough 8 (10.8%) 4 (6.7%) Narration 6 (8.1%) 1 (1.7%) Too much info. 6 (8.1%) 3 (5.0%) Color images 5 (6.7%) 2 (3.3%) Ppt. organization 6 (8.1%) 3 (5.0%) Informative 3 (4.1%) 4 (6.7%) Confusing 7 (9.4%) 4 (6.7%) Use of real skulls 3 (4.1%) 3 (5.0%) Nothing to note 7 (9.4%) 2 (3.3%) More Realistic 2 (2.7%) 11 (18.6%) Not enough info. 3 (4.0%) 4 (6.7%) A different way to learn 2 (2.7%) 7 (12.0%) Misc. 9 (12.0%) 3 (5.0%) Nothing to note 1 (1.4%) 1 (1.7%) Misc. 3 (4.1%) 5 (8.4%) Total 74 59 Total 74 59


87 Table 15. Miscellaneous comments for the question, “What did you like most?” 2D 3D The ending Better slides Size 2D It’s a visual I didn’t Depth Table 16. Miscellaneous comments for the question, “What did you like least?” 2D 3D Not being in 3D group Requires expensive equipment 2D different than 3D 3D Size of font The beginning Learning Not audio Boring Too long Frequency results from the 12 Likert questions within the User Perception Survey (Appendix D) are reported in Table 17. The level of agreement results show that within both groups, the greatest percentage of agreement occurred for the statement, “ in general the images were easy to use” ; 50 of those in the 2D group agreed and 39 of those in the 3D group agreed with that statement. The 2D group was split on their reaction to the question “ I think this activity was fun ”, with 30 volunteers indicating they agreed and 31


88 disagreeing with that statement. The same split can be seen in the 3D group, with 25 agreeing and 25 also disagreeing with that statement. Volunteers from both groups tended to agree that they “ could see the images clearly ”, with more of those students falling within the 2D group (46 and 29, for 2D and 3D respectively). The majority of the 2D group agreed with the statement that the “ graphics were of high quality ” (53 individuals, or 71.6%), whereas in the 3D group 36 of the 63 volunteers indicated they agreed with that statement; slightly more than half. This is in contrast to the themes that emerged from the open ended questions in which volunteers from both groups indicated that “ image quality ” was one of things they disliked most about the PowerPoints, (Table 14). Volunteers generally disagreed that it was “ easy to find specific information ”, (40 volunteers, or 54.79% for 2D, and 36 of the 3D group, or 57.14%), but agreed with the statements that “ they would like to use similar images to study other areas of human anatomy ”, (68.92% for 2D and 59.38% for 3D), and that “ they would use this PowerPoint as a primary reference ”, (with 53 of the 2D volunteers, or 71.62% for 2D and 43 of those in the 3D group, or 67.19%). Volunteers clearly did not feel that the “ PowerPoints were a waste of their time ”, with 86.49% of the 2D group and 87.50% of the 3D group disagreeing with that statement. A majority of students from both groups disagreed with the statement, “ I would rather study only images from a book ”, (70.83% for 2D and 70.31% for 3D), while an area of note is that a considerable percentage of students from both groups disagreed with the statement, “ I feel that I can learn as much from PowerPoint images as from doing a real dissection ”, (75.68% for 2D and 70.31% for 3D). Clearly, students prefer an actual dissection over the 2D or 3D PowerPoint. More volunteers disagreed than agreed with the statement, “ I was often confused as to where to


89 go to find what I was looking for ”; a total of 52.70% of those in the 2D group and 62.50% of the 3D group volunteers disagreed. Groups were divided only on their agreement responses to the statement, “ Looking at these images hurt my eyes ”, with more of the 3D group agreeing (60.94% or 39 out of 64 individuals) and 83.78%, or 62 of 74 individuals, of the 2D group disagreeing with that statement. Table 17. Level of agreement frequencies from questionnaire for both groups Group Strongly Agree Agree Disagree Strongly Disagree Not Applicable In general the images were easy to use. 2D 12 (16.2%) 50 (67.5%) 11 (15%) 1 (1.3%) 0 3D 8 (12.5%) 39 (61%) 12 (18.7%) 5 (7.8%) 0 I think this activity was fun. 2D 3 (4.0%) 30 (40.5%) 31 (41.8%) 8 (10.8%) 2 (2.7%) 3D 4 (6.25%) 25 (39%) 25 (39%) 7 (11%) 3 (4.6%) I could see the images clearly. 2D 9 (12.5%) 46 (63.8%) 16 (22.2%) 1 (1.3%) 0 3D 4 (6.3%) 29 (46.0%) 23 (36.5%) 6 (9.5%) 1 (1.5%) The graphics were of high quality. 2D 6 (8.1%) 53 (71.6%) 14 (18.9%) 1 (1.3%) 0 3D 6 (9.5%) 30 (47.6%) 19 (30.1%) 7 (11.1%) 1 (1.5%)


90 Group Strongly Agree Agree Disagree Strongly Disagree Not Applicable It was easy to find specific information. 2D 1 (1.3%) 32 (43.8%) 30 (41%) 10 (13.6%) 0 3D 5 (7.9%) 22 (34.9%) 26 (41.2%) 10 (15.8%) 0 I would like to use similar images to study other areas of human anatomy. 2D 6 (8.1%) 45 (60.8%) 20 (27.0%) 3 (4.0%) 0 3D 9 (14.0%) 29 (45.3%) 20 (31.2%) 4 (6.2%) 2 (3.1%) I would use this PowerPoint as a primary reference. 2D 11 (14.8%) 42 (56.7%) 17 (22.9%) 4 (5.4%) 0 3D 8 (12.5%) 35 (54.6%) 14 (21.8%) 5 (7.8%) 2 (3.1%) I found the PowerPoint images to be a waste of my time. 2D 0 10 (13.5%) 45 (60.8%) 19 (25.6%) 0 3D 2 (3.1%) 5 (7.8%) 36 (56%) 20 (31.2%) 1 (1.5%)


91 Group Strongly Agree Agree Disagree Strongly Disagree Not Applicable I would rather study only images from a book. 2D 3 (4.1%) 14 (19.4%) 40 (55.5%) 11 (15.3) 4 (5.5%) 3D 3 (4.6%) 14 (21.8%) 34 (53.1%) 11 (17.1%) 2 (3.1%) I feel that I can learn as much from PowerPoint images as from doing a real dissection. 2D 5 (6.7%) 11 (14.8%) 32 (43.2%) 24 (32.4%) 2 (2.7%) 3D 4 (6.2%) 14 (21.8%) 29 (45.3%) 16 (25%) 1 (1.5%) I was often confused as to where to go to find what I was looking for. 2D 8 (10.8%) 27 (36.4%) 33 (44.5%) 6 (8.1%) 0 3D 5 (7.8%) 17 (26.5%) 34 (53.1%) 6 (9.3%) 2 (3.1%) Looking at these images 2D 2 (2.7%) 7 (9.4%) 42 (56.7%) 20 (27.0) 3 (4.0)


hurt my eyes. 3D 16 (25.0%) 23 (36%) 19 (29.6%) 5 (7.8%) 1 (1.5%)
When asked to describe how they felt while working with the PowerPoint images (Table 18), the largest share of both groups (43.2% for 2D and 45.3% for 3D) indicated that they were "a little confused". When asked which method they would prefer to use to learn human anatomy (Table 19), the vast majority (75.6% for 2D and 75.0% for 3D) indicated that they would prefer a combination of textbooks, PowerPoint, and actual dissection, rather than any single method. Both groups tended to find the pace of the task (Table 20) to be "just right" (45.9% for 2D and 43.7% for 3D), and finally the majority of both groups found that the PowerPoint added to their ease of learning human anatomy (50.0% for 2D and 61.0% for 3D) (see Table 21).

Table 18. Describe how you felt while working with the PowerPoint images.
Response                 2D            3D
Completely confused      3 (4.0%)      3 (4.6%)
A little confused        32 (43.2%)    29 (45.3%)
Everything made sense    31 (41.8%)    25 (39.0%)
Don't know               3 (4.1%)      0 (0%)
Other – please specify   5 (6.7%)      8 (12.5%)


93 Table 19. Which method would you prefer to use to learn human anatomy? 2D 3D Textbooks only 1 (1.3 %) 2 (3.1 %) PowerPoints only 2 (2.7 %) 2 (3.1 %) Actual Dissection 11 (14.8 %) 9 (14.0 %) Some combination of the above 56 (75.6 %) 48 (75.0 %) Other (please specify) 3 (4.0 %) 3 (4.6 %) Table 20. Compared to what you may have anticipated, this task was… 2D 3D Much slower 3 (4.0 %) 1 (1.5 %) Slow 10 (13.5 %) 11 (17.1 %) Just right 34 (45.9 %) 28 (43.7 %) Fast 21 (28.3 %) 16 (25.0 %) Much faster 6 (8.1 %) 7 (10.9 %) Table 21. Do you feel the powerpoint added to your ease of learning human anatomy? 2D 3D No 11 (14.8 %) 13 (20.3 %) Yes 37 (50.0 %) 39 (61.0 %) I’m not sure 25 (33.7 %) 12 (18.7 %)


94 Chapter Five contains a discussion of the interpretation of the research results along with implications for practice. The chapter will conclude with recommendations for future research.


95 Chapter Five Discussion Problem Statement To fully understand anatomy, students must understand the 3-dimensional (3D) spatial relationships that exist among the structures. Studying anatomy from a 2D representation, such as from a text or a PowerPoint presentation, may not adequately permit students to learn the many spatial relationships that exist within human anatomy. Purpose The purpose of this study was to determine if 3D images could assist the students in creating appropriate mental models of anatomical structures and therefore be reflected in better scores on measures of identification and spatial relationships than standard 2D images of the human skull. In addition, user perception of the 2D and 3D PowerPoints was determined via survey questions of all participants. Research Questions The three research questions developed for this study were as follows: 1. Does the use of 3D stereo images result in significantly higher scores for undergraduate students in learning the anatomy of the skull, when compared to 2D images of the same structures as measured by scores on a practical examination of identification? 2. Does the use of 3D stereo images result in significantly higher scores for


96 undergraduate students in learning the anatomy of the skull, when compared to 2D images of the same structures as measured by scores on a practical examination of spatial relationships? 3. Are the 3-dimensional digital stereo-images of human anatomy easy to use and to comprehend, and what are the students’ perceptions of them, as determined by a questionnaire in a sample of undergraduates? Sample Volunteers for the study were gathered from two different sections of the same anatomy and physiology online laboratory. The two sections had the same instructor and were offered through the College of Nursing. The study was conducted as a regular portion of the course and was offered during the time the students were studying the skeletal system. They were required to learn the online study materials on the human skull as part of their course; however, they had the option of including, or not including, their data in the study. Ultimately data from 62 groups was available for analysis. Each group consisted of a pair, with one volunteer randomly assigned to the 2D group, and the other assigned to the 3D group based upon pre-test scores. Pearson’s Correlation Coefficient (Tables 22 and 23) was utilized to determine to what degree the pre-test scores correlated with the measures of identification and relationship. The results indicate that there is little correlation between the pre-test and the identification variable (Pearson correlation coefficient = 0.191), or the pre-test and relationship variable (Pearson correlation coefficient = 0.243). This indicates that the pre-test was not a good indicator of test score outcomes for either the identification or relationship questions.
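As a simple illustration of the correlation check described above, the sketch below computes a Pearson correlation between pre-test scores and one subtest score with scipy; the score arrays are hypothetical and are not the study data.

```python
# Illustrative sketch (hypothetical scores): Pearson correlation between
# pre-test scores and practical-exam identification scores.
from scipy import stats

pretest        = [12, 15, 9, 18, 11, 14, 10, 16, 13, 8]   # made-up pre-test scores
identification = [9, 11, 8, 12, 10, 9, 11, 10, 12, 7]     # made-up subtest scores

r, p = stats.pearsonr(pretest, identification)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
```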


97 Table 22. Pearson correlation coefficient for pre-test and identification scores pre-test identification pre-test 1.000 0.1913 identification 0.1913 1.000 Table 23. Pearson correlation coefficient for pre-test and relationship scores pre-test relationship pre-test 1.000 0.243 relationship 0.243 1.000 Instrumentation All students were asked to complete a demographic survey (Appendix A) prior to the study. Those that required 3D glasses acquired them at one of the five orientation sessions offered prior to the start of the study. During the orientation sessions, the study was explained to the students via a PowerPoint presentation. The same PowerPoint presentation was then posted to the Blackboard sections for the students to review. In addition, all study materials were available online to the students within a tab labeled “Skull Materials”, for easy reference. All materials were available online for a period of one week for the students to study. By using the “Adaptive Release” feature within BlackBoard, volunteers had access to only the materials for their group, either 2D or 3D. Study materials for the respective groups included PowerPoints and AVI narrated movies of the same PowerPoint. Both groups had access to the study guide list of questions and


98 relationships, as seen in Appendix B. After one week, students were asked to return to the College of Nursing to take the laboratory practical examination that consisted of 15 identification questions and 15 relationship questions taken directly from the study guide list. The answer key is displayed in Appendix C. Upon completion of the laboratory practical exam, a user perspective survey (Appendix D) was administered online to all participating students in order to gauge their level of satisfaction with the PowerPoints and the study in general. Threats to Internal and External Validity As with all studies, there are situations that can create threats to validity of the instruments. One such threat to internal validity is the “history” of the volunteers in terms of the amount of anatomy they had previous to the study. Students indicated on the demographic survey a similar level of knowledge; however, there was no way of knowing the quality of their previous courses, or even if those previous courses included an in-depth study of the human skull. In addition, there was no way to guarantee that the students were not utilizing other resources to study and learn the material they were to be tested on. They were encouraged to utilize only those materials posted online; however, knowing that their lab scores would be a portion of their course grade could have tempted some of them to utilize additional resources. The one threat to external validity of note is population validity. The sample for this study was taken from a population of primarily 18-24 year old recent high school graduates, with one prior course in human anatomy. The majority of them indicated confidence in their computer abilities, and were seeking placement into one of many


99 allied health fields of study. Results from a sample from this population then, can not be generalized to the general public, or to one specific population of allied health students. Summary of Findings Demographics for students from both sections of the online anatomy and physiology course showed similarities regardless of which group they were assigned (Table 8 and 9). The majority of the students were of the same age range; 18-24 years of age. Students were about evenly divided in terms of whether or not they had taken a prior anatomy course. A small percentage of students, 2.85% to 7.89% had prior experience with human anatomy software, and the majority of students taking part in this study were nursing students. Most students had a previous dissection course, and they had that course in high school. Regarding computer proficiency, students indicated they were either “intermediate” or “advanced” in terms of web browsers, email, instant messaging, word processing, spreadsheets and presentation software. Results for Research Questions 1 and 2 To re-iterate; research questions one and two were: 1. Does the use of 3D stereo images result in significantly higher scores for undergraduate students in learning the anatomy of the skull, when compared to 2D images of the same structures as measured by scores on a practical examination of identification? 2. Does the use of 3D stereo images result in significantly higher scores for


100 undergraduate students in learning the anatomy of the skull, when compared to 2D images of the same structures as measured by scores on a practical examination of spatial relationships? When reviewing the doubly-multivariate repeated measures design, results reveal a statistically significant difference in group means for the main effect of the treatment groups 2D and 3D and variables of identification and relationship with the 3D group performing higher on both dependent variables, (Wilk’s Lambda (0.0479, p<.0001). The use of 3D stereo images did result in significantly higher scores for undergraduate students in learning the anatomy of the skull, when compared to 2D images of the same structures as measured by subtest scores on a practical examination of relationships. Also, the use of 3D stereo images resulted in significantly higher scores for undergraduate students in learning the anatomy of the skull, when compared to 2D images of the same structures as measured by scores on a practical examination of spatial relationships. Research question three was: 3. Are the 3-dimensional digital stereo-images of human anatomy easy to use and to comprehend, and what are the students’ perceptions of them, as determined by a questionnaire in a sample of undergraduates? Students in the 2D group found the PowerPoint images to be convenient (31.5%) and detailed (16.4%), as evidenced in the qualitative themes (Table 19) that emerged. Likewise, the 3D group also listed “convenience” (22.0%) as their top reason for liking the PowerPoint, while the next highest theme score of 20.0% was that the images were


101 “more realistic”. This finding corresponds with the theory that students in the 3D group were better able to visualize relationships within the skull than those in the 2D group. Poor image quality was a recurrent theme for both groups with regard to the PowerPoint images. When asked what they liked least about working with the PowerPoint, the theme that emerged most frequently for the 2D group was “image quality” (24.7%), while for the 3D group they least liked “the eye strain” (33.9%). It is interesting to note that the second and third highest themes to emerge for the 2D group to the question of what they liked least was “lack of depth perception”, (14.5%), and “not real enough” (11.6%), while for the 3D group, the second most common theme to emerge was “image quality” (22.5%) while “confusing” and “not real enough” both were stated with a frequency of 6.4% (Table 19). The results (Table 22) of the Likert questions within the User Perspective Survey, (Appendix D) show that within the 2D and 3D groups, the greatest percentage of agreement occurred for the statement, “ in general the images were easy to use” A slight majority of the volunteers within both groups disagreed that the “ activity was fun ”, (52.70% of 2D and 50.0% of those in the 3D group). Volunteers from both groups tended to agree that they “ could see the images clearly ”, with more of those students falling within the 2D group. The majority agreed with the statement that the “ graphics were of high quality ”, (79.73% and 57.14%) for 2D and 3D respectively. This is in contrast to the themes that emerged from the open ended questions in which volunteers from both groups indicated that “ image quality ” was one of things they disliked most about the PowerPoints, (Table 19). Volunteers generally disagreed that it was “ easy to find specific information ”, but agreed with the statements that “ they would like to use


102 similar images to study other areas of human anatomy ”, (68.92% for 2D and 59.38% for 3D), and that “ they would use this PowerPoint as a primary reference ”, (71.62% for 2D and 67.19% for 3D). Volunteers clearly did not feel that the “ PowerPoints were a waste of their time ”, with 86.49% of the 2D group and 87.50% of the 3D group disagreeing with that statement. A majority of students from both groups disagreed with the statement, “ I would rather study only images from a book ”, (70.83% for 2D and 70.31% for 3D), while an area of note is that a considerable percentage of students from both groups disagreed with the statement, “ I feel that I can learn as much from PowerPoint images as from doing a real dissection ”, (75.68% for 2D and 70.31% for 3D). Clearly, students prefer an actual dissection over the 2D or 3D PowerPoint. More volunteers disagreed than agreed with the statement, “ I was often confused as to where to go to find what I was looking for ”. Groups were divided only on their agreement responses to the statement, “ Looking at these images hurt my eyes ”, with more of the 3D group agreeing (60.94%) and 83.78% of the 2D group disagreeing with that statement. When asked to describe how they felt while working with the PowerPoint images, (Table 23), the majority of both groups (43.0%, 2D and 45.3%, 3D) agreed that they were “a little confused”. When asked in Table 23, which method they would prefer to use to learn human anatomy, the vast majority (75.0% for both groups) agreed that they would prefer a combination of textbooks, PowerPoint and actual dissection, rather than simply one method more than any other. Both groups seemed to find the pace of the task (Table 25) to be “just right”, (45.2%, 2D and 43.7%, 3D) and finally the majority of both groups found that the PowerPoint added to their ease of learning human anatomy, (50.6%, 2D and 61.0%, 3D), as seen in Table 26.


Conclusion
According to Shaffer (2004), when learning human anatomy, students must be able to visualize the 3D organization in their mind to fully understand the workings of, and relationships that exist within, the human body. This has been the historical goal of the human dissection laboratory. Mental Model Theory addresses the issue of how students learn such complex systems. Mental Model Theory, as described by Jonassen (1994) and Bayman and Mayer (1984), explains how a user processes a complex system into a conceptual representation that they can then understand. As was evidenced in Table 2, the reviewers indicated that the PowerPoints met the majority of Mayer's criteria, thereby indicating that the PowerPoints had the necessary elements, according to Mayer, to create an effective mental model for the students to learn human anatomy. The students who were assigned to the 3D group did outperform those in the 2D group on measures of identification as well as relationships. One way to explain that difference is to suggest that those in the 3D learning environment were better able to create appropriate mental models of the complex system of human anatomy as they looked at the images and studied the relationships. Mental models tend to be created by the individual in a way that works best for them. The learners in this study still created their mental models in an individual and internal way, based upon what Staggers and Norcio (1993) described as prior experience and/or instruction. Based upon the results, it also appears that the 3D materials, as designed in this study, did assist the users in creating appropriate mental models of the complex system of the human skull. Students in the 3D group could more clearly see which structures inter-digitated with the others and how the different bones and features were positioned in relation to the others, and


they learned this material prior to coming into the laboratory for the practical examination. They were able to create appropriate mental models of the structures better than those in the 2D group. As noted in the qualitative data, the greatest complaint from those in the 3D group was eye strain from the 3D images, yet they outperformed those in the 2D group on scores of both identification and relationships. Likewise, those in the 2D group commented more frequently that they could not see some of the images clearly and did not have the depth perception necessary to distinguish between structures. The results of this study are not consistent with those found by Bukowski (2002), Guy and Frisby (1992), and Jones et al. (1978). Bukowski (2002) found no significant difference between groups of physical therapy students exposed to various techniques in the gross anatomy laboratory over a period of three years. In the first year, a group of 18 students was exposed to a traditional cadaver anatomy laboratory. A different group in year two, consisting of 17 students, was exposed to a self-study "computerized noncadaver laboratory course". Finally, a third group of students (n = 20) was exposed to weekly lectures along with the "noncadaver laboratory course". No significance was found on measures of group means, class study time, or state board licensure results. In her article, however, Bukowski (2002) did not describe in detail the elements of the "computerized noncadaver laboratory course". The students in her study were directed to use this computerized lab as a self-study, and there was no indication that the students were directed to learn specific information, or how much time they were given to learn it. The fact that the current study found significance and the Bukowski (2002) study did not may be due to the fact that the current study incorporated a much larger group "n" than


did Bukowski's groups of 18, 17, and 20, respectively. Also, this study consisted of much different treatments. Likewise, in their 1992 study, Guy and Frisby found students demonstrated no significant difference in performance when assigned to videodiscs or traditional dissection techniques. Their study was not theory based, however, and was criticized for being simply a media comparison. Another study looked at the effects of incorporating multimedia with prosections into anatomy laboratories. Jones, Olafson and Sutin (1978) found that there was no difference between this multimedia program coupled with prosections and traditional dissection techniques on measures of written and practical examinations. Within this study, eye strain was a significant factor for the 3D group volunteers (Table 22). Over 60.0% of those in the 3D group agreed that "looking at the images hurt my eyes". Ware (1995) attributes this "eye strain" when viewing 3D images to the fact that the user may be trying to focus on a portion of the image that is simply not in focus for their eye structure. However, in creating 3D images that produce less eye strain, it is possible to align the stereo pair too closely, which can result in a loss of depth in the image along with color disparity (McVeigh et al., 1996). Color disparity was not a concern in this study because the specimen chosen was basically white throughout. The loss of color may become more of a concern when studying areas of human anatomy that require one to distinguish features based upon nuances in color; regions of the body that include muscle and vessels are two such areas. Clearly, more research needs to be done in the area of developing 3D digital images that retain depth of field but reduce the flattening of the color.


Regarding student perceptions, the results of this study generally agree with those found previously (Franklin, Peat, and Lewis, 2002; Khalil, Lamar, and Johnson, 2005; Snelling, Sahai, and Ellis, 2003; Waters, Van Meter, Perotti, Drogo, and Cyr, 2004). In the earlier studies, students in medical and allied health courses tended to prefer an actual dissection over alternative methods such as prosections, computer simulations, or sculpting of clay. Students in this study also indicated a preference for actual dissection over any other method (Table 22). When asked whether they agreed or disagreed with the statement, "I feel that I can learn as much from PowerPoint images as from doing a real dissection", 75.68% of the 2D group and 70.31% of the 3D group disagreed. Likewise, when asked to indicate which method they would prefer to use to learn human anatomy, 75.0% of all students indicated they would prefer a combination of textbook, PowerPoint, and actual dissection (Table 24). However, the next highest percentage for that question indicated that actual dissection (15.3% and 14.0% for 2D and 3D respectively) was preferred over textbook or PowerPoint alone.
Practical Significance
Although the effect sizes were small, as indicated in Table 12, there is practical significance to these findings. Being able to improve student performance when learning human anatomy, even to a small degree such as was found in this study, can have benefits in application beyond the classroom. For instance, a student who is better able to understand the relationships that exist within the complex system of human anatomy may be better positioned to accurately apply that information when dealing with patients. Understanding the nuances involved in the interrelationships of the structures involved in


human anatomy may help future nurses or other medical practitioners to better explain a condition or disease state to a patient. For example, they may have more confidence in their ability to understand and relay information on coronary artery disease or the effects of laparoscopic surgical techniques. These are just a few simple examples of how learning human anatomy more effectively can assist in improving patient care delivery.

Implications for Practice and Policy

It is clear from this study that 3D images incorporated into online human anatomy and physiology laboratories can be effective in helping students learn and understand important relationships that exist between and among complex structures of human anatomy. However, because of the eye strain that tends to occur with the 3D images as created for this study, it is doubtful that the 3D images will replace the standard PowerPoint and text images. It is more likely that 3D imaging should be used as a supplement to standard materials. Neotek 3D images can be viewed on a CRT monitor without the eye strain that is apparent with the self-made 3D images created with Pokescope Pro; however, CRT monitors are no longer commonly purchased by students, and the Neotek headset for viewing stereo images is costly, currently around $400.00 per set. Therefore, in order to use the Neotek 3D images, one may need to assign the task of reviewing them in a computer lab where the instructor has control over the equipment and where CRT monitors can be found. That being said, if a method could be devised that lessened the eye strain of the 3D images, it is likely that they could be used for different systems or


regions of human anatomy. Pokescope Pro (http://pokescope.com) has devised a 3D viewer called the Pokescope stereoscope. With this small, collapsible viewer, built with glass prism windows and an outer plastic construction, a user can view:

• Full-screen stereo images on their computer
• Large print stereographs
• 4"x6" stereo prints from photo processors
• Traditional stereo cards
• Stereo images on TV screens
• Projected stereo images

Images can be reproduced on cards, with the two images appearing side by side and then viewed with a Pokescope stereo viewer, or the images can be replicated onto a TV screen. The cost of a Pokescope stereo viewer is currently $40.00, and the software to create the stereo images is also currently $40.00. Students would need only to purchase the Pokescope stereo viewer to visualize the images on cards. This is one way that the images could be used in an online course, as a primary or secondary reference, without the use of advanced computer technology. Cards would cut down on eye strain because they can be viewed in a well-lit environment, and the cards could consist of labeled images on one side and unlabeled images on the opposite side for study purposes. The cards would also be convenient for students to carry with them as a study reference.

Recommendations for Further Research

It was difficult to determine how the students actually used the online materials available to them, since they studied at their own time and pace. It was also unclear how much time was spent on the various methods, i.e., PowerPoint versus the .AVI narrated


movie file, and how that time difference may have influenced scores on the measures of identification and relationship. It would be of interest to replicate this study under more controlled circumstances. For instance, although online classes were used for this study and all materials were posted online, it would be worthwhile to conduct the study with a face-to-face section of the course and to measure how much time was spent on the various methods. It would also be of value to observe student reactions to the PowerPoint, by way of a usability study, in order to gather more in-depth information regarding student perceptions. This could be done using video equipment or by having a number of observers available to record "think aloud" comments and gestures made by the students as they used the study materials. Additional future work could involve repeating the test with a graduate student population and comparing the results with the undergraduate population. It could be of interest to see whether there is any difference in how the two populations view and use the study materials. An analysis of the students' motivations to learn would add to this study. According to Kickul and Kickul (2006, p. 371), students each bring different "preferences, needs and motivations" to their learning goals, and it is worthwhile to try to understand what those are when developing e-learning courses. Kickul and Kickul found that "while learning goal orientation was a key factor in influencing learning and satisfaction, proactive personality played a pivotal role in enhancing students' learning" (2006, p. 369). It would therefore be of value to determine what needs, preferences, and motivations students bring to any course. By doing that, an instructor perhaps has a better chance of motivating, and not frustrating, the students. Also, Clark,


Nguyen, and Sweller (2006) suggest that there is a difference between novice and expert learners and that novices need more time to reconstruct an image they have just seen. This may suggest that graduate students with more experience in human anatomy would do better with the 3D material than the undergraduate population because they are not as cognitively overloaded with new information. In line with this theory of cognitive overload, which has its roots in mental model theory, the two different types of materials presented in this study, i.e., the standard PowerPoints as well as the .AVI movie files, may have had different effects on the novice students. Future research could test the different types of study materials for their cognitive effects and, ultimately, their learning effectiveness for the different student populations. In addition, it could be of value to study the use of the Pokescope stereo viewer with stereo cards. The cards could be created for different regions of human anatomy, and students could be surveyed regarding their perception of the cards and their value to their understanding of human anatomy relationships. These cards would be similar to those created during the Victorian period, when stereo cards (two images printed onto one card) of vacation spots were mass produced and viewed through a special viewer that held the card and combined the two images into one with depth. If the cards resulted in less eye strain, they could be a valuable study guide for students. The study could also be replicated with a larger "n" to see whether any differences in results occur. Because the effect sizes were small as indicated by Cohen (Table 12), a larger sample size might shed more light on nuances that occur between the groups, and these would perhaps be evidenced in the qualitative data.
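As a rough planning aid for the suggested replication with a larger "n", a prospective power analysis indicates the group sizes needed to detect a small effect. The sketch below is illustrative only and was not part of the original analysis; it assumes the effect is expressed as a standardized mean difference (Cohen's d) and uses the statsmodels power module.

# Illustrative sketch (Python): approximate per-group sample size needed to detect a
# small standardized mean difference (d = 0.2) with 80% power at alpha = .05,
# two-sided, in a two-group comparison. This is a planning aid only; it does not
# reproduce the doubly-multivariate design used in the main study.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative="two-sided")
print(round(n_per_group))  # on the order of 390-400 volunteers per group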


In conclusion, the 3D group significantly outperformed the 2D group on measures of identification as well as on measures of relationship when tested on the human skull. Students found the images confusing at times; the 3D group reported much greater eye strain than the 2D group, while the 2D group indicated that a lack of depth perception was a problem for them in identifying structures of the human skull. Having 3D images of human anatomy can be an effective way to assist students in understanding the relationships that exist within human anatomy. It is difficult to present the 3D images at a distance, primarily because of the eye strain they cause. If, however, images could be created that do not cause the eye strain evidenced with the PowerPoints used in this study, then 3D images could be incorporated readily into any online course of human anatomy and physiology. This remains an area with many important avenues for future research that may affect the practical application of online instruction in undergraduate human anatomy and physiology courses.


List of References

A.D.A.M. Online Anatomy. (2005). Retrieved December 03, 2005, from http://www.adam.com

Agur, A., & Dalley, A. (2005). Grant's Atlas of Anatomy. Baltimore: Lippincott, Williams and Wilkins.

Alberti, M.A., Marini, D., & Trapani, P. (1998). Experimenting web technologies to access an opera theatre. In T. Ottman & I. Tomak (Eds.), Proceedings of the World Conference on Educational Multimedia, Hypermedia and Telecommunications. Charlottesville, VA: AACE.

All About Stereo Photography. (2005). Retrieved December 21, 2005, from http://www.shortcourses.com/how/stereo/stereoimages.htm

American Association of Anatomists. (2005). Retrieved November 12, 2005, from http://www.anatomy.org

American Association of Colleges of Nursing. (1999). Distance technology in nursing education. Retrieved September 15, 2005, from http://www.aacn.nche.edu

American Association of Colleges of Nursing. (2000). Distance Learning is Changing and Challenging Nursing Education. Issue Bulletin. Retrieved October 14, 2005, from http://www.aacn.nche.edu

American Association of Colleges of Nursing. (2003). Faculty shortages in baccalaureate and graduate nursing programs: Scope of the problem and strategies for expanding the supply. Washington, D.C.

American Association of Colleges of Nursing. (2004). Thousands of students turned away from the nation's nursing schools despite sharp increase in enrollment. Retrieved February 9, 2004, from http://www.aacn.nche.edu/Media/NewsReleases/enrl03.htm

Association of American Medical Colleges. (1984). Physicians for the Twenty-First Century. Washington, D.C.: Association of American Medical Colleges.

Boudinot, S.G., & Martin, B.C. (2001). Retrieved November 1, 2005, from http://imej.wfu.edu/articles/2001/1/01/index.asp


Bassett Stereoscopic Atlas. (1952). Slice of Life. Retrieved January 01, 2006, from http://medlib.med.utah.edu/sol/contributors/DavidLBassett.html

Bayman, P., & Mayer, R.E. (1984). Instructional manipulation of users' mental models for electronic calculators. International Journal of Man-Machine Studies, 20, 189-199.

Burdeau, C. (2004, March 10). Tulane stops cadaver delivery after bodies used in mine tests. The Associated Press State & Local Wire. Retrieved April 28, 2004, from http://web.lexis-nexis.com/universe/printdoc

Bureau of Labor Statistics. (2005). Retrieved November 1, 2005, from http://www.stats.bls.gov/oco/ocos083.htm

Bukowski, E. (2002). Assessment outcomes: Computerized instruction in a human gross anatomy course. Journal of Allied Health, 31, 153-158.

Byrne, C., Furness, T., & Winn, W. (1995). The use of virtual reality for teaching atomic/molecular structure. American Educational Research Association, San Francisco, CA.

Card, S. K., Moran, T.P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum.

Carley, K., & Palmquist, M. (1992). Extracting, representing, and analyzing mental models. Social Forces, 70(3), 601-636.

Chan, A.C.W., Chung, S.C.S., Yim, A.P.C., Lau, J.Y.W., Ng, E.K.W., & Li, A.K.C. (1997). Comparison of two-dimensional vs three-dimensional camera systems in laparoscopic surgery. Surgical Endoscopy, 11, 438-440.

Clark, R.C., Nguyen, F., & Sweller, J. (2006). Efficiency in learning: Evidence-based guidelines to manage cognitive load. San Francisco, CA: Pfeiffer.

Cohen, J. (1992). A Power Primer. Psychological Bulletin, 112(1), 155-159.

Cockburn, A. (2004). Revisiting 2D vs 3D implications on spatial memory. Australian Computer Society, Inc., 5th Annual Australasian User Interface Conference (AUIC2004), Dunedin.

Ciuffreda, K.J., Levi, D.M., & Selenow, A. (1991). Amblyopia: Basic and Clinical Aspects. Boston: Butterworth-Heinemann.

Cody, R.P., & Smith, J.K. (1997). Applied Statistics and the SAS Programming Language (4th ed.). New York: North-Holland.


Commission on Accreditation of Allied Health Education Programs. (2005). Retrieved November 1, 2005, from http://www.caahep.org

Cosman, P.H., Hutchins, M., & Cregan, P. (2001). Letter to the editor. Art macabre: Is anatomy necessary? ANZ Journal of Surgery, 71, 779-784.

Csikszentmihalyi, M. (1990). Flow: The Psychology of Optimal Experience. New York: Harper Collins.

Dalgarno, B. (2002). The potential of 3D virtual learning environments: A Constructivist analysis. Electronic Journal of Instructional Science and Technology, 3(19), 1-19.

Dalgarno, B., Hedberg, J., & Harper, B. (2002). The contribution of 3D environments to conceptual understanding. In A. Williamson, A. Gunn, Young & T. Clear (Eds.), Winds of Change in the sea of learning: Charting the course of digital education. Proceedings of the 19th annual conference of ASCILITE. Auckland, NZ: UNITEC Institute of Technology, 149-158.

Dalgarno, B., & Harper, B. (2004). User control and task authenticity for spatial learning in 3D environments. Australian Journal of Educational Technology, 20(1), 1-17.

Dictionary.com. (2005). Retrieved September 04, 2005, from http://www.dictionary.com

Dyer, G., & Thorndike, M. (2000). Quidne mortui vivos docent? The evolving purpose of human dissection in medical education. Academic Medicine, 75(10), 969-979.

Farooq, M. U., & Dominick, W. D. (1988). A survey of formal tools and models for developing user interfaces. International Journal of Man-Machine Studies, 29, 479-496.

Franklin, S., Peat, M., & Lewis, A. (2002). Traditional versus computer-based dissections in enhancing learning in a tertiary setting: A student perspective. Journal of Biological Education, 36(3), 124-129.

Gatto, D. (1993). The use of interactive computer simulations in training. Australian Journal of Educational Technology, 9(2), 144-156.

Gregory, S.R., & Cole, T.R. (2002). The changing role of dissection in medical education. Journal of the American Medical Association, 287(3), 1180.

Gunderman, R.B., & Wilson, P.K. (2005). Exploring the human interior: The roles of


cadaver dissection and radiologic imaging in teaching anatomy. Academic Medicine, 80(8), 745-749.

Guy, J.F., & Frisby, A.J. (1992). Using interactive videodiscs to teach gross anatomy to undergraduates at the Ohio State University. Academic Medicine, 67(2), 132-133.

Harrison, J.F., Nichols, J.S., & Whitmer, A.C. (2001). Evaluating the impact of physical renovation, computerization, and use of an inquiry approach in an undergraduate, allied health human anatomy and physiology lab. Advances in Physiology Education, 25, 202-210.

Hedberg, J., & Alexander, S. (1994). Virtual reality in education: Defining researchable issues. Educational Media International, 31, 214-220.

Heylings, D.J.A. (2002). Anatomy 1999-2000: The curriculum. Who teaches it and how? Medical Education, 36, 702-710.

Hueyching, J.J., & Reeves, T.C. (1992). Mental models: A research focus of interactive learning systems. Educational Technology Research, 40, 39-53.

Hillsborough Community College. (2005). Anatomy laboratory syllabi. Retrieved November 12, from http://yborweb.hccfl.edu/cgibin/Departments/DisplayFaculty_Miletta_Info.pl?document=03_Syllabi_in_Word

Hsu, J., Pizlo, Z., Babbs, C.F., Chelberg, D.M., & Delp, E.J. (1994). Design of studies to test the effectiveness of stereo imaging truth or dare: Is stereo viewing really better? In S. Fisher, J. Merritt & M. Bolas (Eds.), Stereoscopic Displays and Virtual Reality Systems, Proceedings of SPIE, 2177, 211-222.

Jablon, R. (2004, March 10). Demand for cadaver tissue fuels illegal activity. Associated Press State & Local Wire. Retrieved April 28, 2004, from the Lexis-Nexis website, http://web.lexisnexis.com/universe/printdoc

Jonassen, D. H. (1995). Operationalizing mental models: Strategies for assessing mental models to support meaningful learning and design-supportive learning environments. The first international conference on Computer support for collaborative learning, Indiana University, Bloomington, Indiana. Mahwah, N.J.: Lawrence Erlbaum Associates, Inc. Retrieved September 23, 2004, from http://www.Ittheory.Com

Jonassen, D. H. (Ed.). (1996). Handbook of Research for Educational Communications and Technology. New York: Macmillan Library Reference USA.


Jones, N.A., Olafson, R.P., & Sutin, J. (1978). Evaluation of a gross anatomy program without dissection. Academic Medicine, 53, 198-205.

Keppell, M., & Macpherson, C. (1997). Is the Elephant Really There? Virtual Reality in Education. Retrieved January 03, 2006, from http://www.ddce.cqu.edu.au/ddce/confsem/vr/present.html

Khalil, M.K., Lamar, C.H., & Johnson, T.E. (2005). Using computer-based interactive imagery strategies for designing instructional anatomy programs. Clinical Anatomy, 18, 68-76.

Kickul, G., & Kickul, J. (2006). Closing the gap: Impact of student proactivity and learning goal orientation on e-learning outcomes. International Journal on E-learning, 5(3), 361-372.

Klestinec, C. (2004). A History of anatomy theaters in sixteenth century Padua. Journal of the History of Medicine and Allied Sciences, 59, 375-412.

MacPherson, C. (1997). Is the elephant really there? – Virtual reality in education. A seminar presentation made at Central Queensland University, October. Retrieved October 12, 2005, from http://infocom.cqu.edu.au/Units/aut99/00101/00101/RESOURCE/TUTORIAL/VR-PRES.PDF

Marks, S.C. (2000). The role of three-dimensional information in health care and medical education: The implications for anatomy and dissection. Clinical Anatomy, 13, 448-452.

Mayer, R.E. (1983). Thinking, problem solving, cognition (2d ed.). New York: Freeman.

Mayer, R.E. (1989). Models for understanding. Review of Educational Research, 59, 43-64.

Mayer, R. E. (2001). Multimedia learning. New York, NY: Cambridge University Press.

McCuskey, R., Carmichael, S., & Kirch, D.G. (2004). The importance of anatomy in health professions education and the shortage of qualified educators. Academic Medicine, 80, 349-351.

McNulty, J.A., Halama, J., & Espiritu, B. (2004). Evaluation of computer-aided instruction in the medical gross anatomy curriculum. Clinical Anatomy, 17(1), 73-78.


McVeigh, J.S., Siegel, M.W., & Jordan, A.G. (1996). Algorithm for automated eye strain reduction in real stereoscopic images and sequences. Proceedings of the SPIE International Conference on Human Vision and Electronic Imaging, San Jose, CA, 2657.

Moore, K.L., & Agur, A.M.R. (1995). Essential clinical anatomy. Baltimore: Williams & Wilkins.

Moray, N. (1987). Intelligent aids, mental models, and the theory of machines. International Journal of Man-Machine Studies, 27(5-6), 619-629.

Neotek. (2004). Retrieved September 23, 2004, from http://www.neotek.com/

National Library of Medicine. (2005). Retrieved November 1, 2005, from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=Mesh&cmd=search&term=allied+health+personnel

Newman, A. (2004, March 12). The logistics of the cadaver supply business. New York Times. Retrieved April 28, 2004, from the Lexis-Nexis website, http://web.lexis-nexis.com/universe/printdoc

Odenwald, W.F., Arnheiter, H., Dubois-Dalcq, M., & Lazzarini, R. (1986). Stereo images of vesicular stomatitis virus assembly. Journal of Virology, 3, 922-932.

Orenstein, C., & Zarembo, A. (2004, March 10). The UCLA Body parts scandal; UCLA suspends body-donor program after alleged abuses; medical school's actions follow accusation that cadavers have been sold illegally to outsiders. Los Angeles Times. Retrieved April 28, 2004, from the Lexis-Nexis website, http://web.lexis-nexis.com/universe/printdoc

PokeScope Stereoscopic Software. (2005). Retrieved September 06, from http://www.pokescope.com/software/PokeScopePro24.html

Prentice, E.D., Metcalf, W.K., Quinn, T.H., Sharp, J.G., Jensen, R.H., & Holyoke, E.A. (1977). Stereoscopic anatomy: Evaluation of a new teaching system in human gross anatomy. Academic Medicine, 52, 758-763.

Publication Manual of the American Psychological Association (5th ed.). (2001). Washington, DC: American Psychological Association.

Primal Pictures. (2004). Retrieved September 24, 2004, from http://www.primalpictures.com/Index.aspx


RehabCare, UMC to address allied health worker shortage. (October 12, 2005). St. Louis Business Journal. Retrieved November 12, 2005, from http://stlouis.bizjournals.com/stlouis/stories/2005/10/10/daily41.html

Rhodes, G. (1997). Stereo viewing. Retrieved December 20, 2005, from http://www.usf.maine.edu/~rhodes/0Help/StereoView.html

Robertson, G.G., Card, S.K., & Mackinlay, J.D. (1993). Non immersive virtual reality. Computer, 26, 81-83.

Robinson, A.G., Metten, S., Guiton, G., & Berek, J. (2004). Using fresh tissue dissection to teach human anatomy in the clinical years. Academic Medicine, 79, 711-716.

Ruzic, F. (1999). The future of learning in virtual reality environments. In M. Selinger & J. Pearson (Eds.), Telematics in education: Trends and issues. Amsterdam: Pergamon.

Sasse, M. A. (1991). How to trap user's mental models. In M.J. Tauber & D. Ackerman (Eds.), Mental models and human-computer interaction. Amsterdam: Elsevier.

Sauerland, E.K. (1999). Grant's dissector. Baltimore: Williams & Wilkins.

Shaffer, K. (2004). Teaching anatomy in the digital world. The New England Journal of Medicine, 351(13), 1279-1281.

Snelling, J., Sahai, A., & Ellis, H. (2003). Attitudes of medical and dental students to dissection. Clinical Anatomy, 16, 165-172.

Staggers, N., & Norcio, A.F. (1993). Mental models: Concepts for human-computer interaction research. International Journal of Man-Machine Studies, 38(4), 587-605.

Stevens, J. P. (2002). Applied Multivariate Statistics for the Social Sciences (4th ed.). New Jersey: Lawrence Erlbaum Associates, Inc.

Sweller, J., van Merrienboer, J.J.G., & Paas, G.W.C. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10, 251-296.

Tavanti, M., & Lind, M. (2001). 2D vs 3D, Implications on spatial memory. Proceedings of IEEE Info Vis 2001 Symposium on Information Visualization, San Diego, CA.


Trelease, R.B. (1998). The virtual anatomy practical: A stereoscopic 3D interactive multimedia computer examination program. Clinical Anatomy, 11, 89-94.

University of Arkansas for Medical Sciences. Retrieved October 25, 2005, from http://anatomy.uams.edu/anatomyhtml/grossinfo.html

University of South Florida, College of Nursing, Plan of Study. (2005). Retrieved November 12, 2005, from http://hsc.usf.edu/nocms/nursing/Programs_of_Study/ftic.html

Vichitvejpaisal, P., Sitthikongsak, S., Preechakoon, B., Kraiprasit, K., Parakkamodom, S., Manon, C., et al. (2001). Does computer-assisted instruction really help to improve the learning process? Medical Education, 35, 983-989.

Waller, D., Hunt, E., & Knapp, D. (1998). The transfer of spatial knowledge in virtual environment training. Presence, 7(2), 129-143.

Waters, J.R., Van Meter, P., Perotti, W., Drogo, S., & Cyr, R.J. (2004). Cat dissection vs. sculpting human structures in clay: An analysis of two approaches to undergraduate human anatomy laboratory education. Advances in Physiology Education, 29, 27-34.

Ware, C. (1995). Dynamic stereo displays. Proceedings of the ACM CHI'95 Conference, Denver, 311-316.

Winn, W., & Jackson, R. (1999). Fourteen propositions about educational uses of virtual reality. Educational Technology, 39, 5-14.

Winn, W., & Snyder, D. (1996). Cognitive perspectives in psychology. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 112-142). New York: Macmillan Library Reference USA.

Zarembo, A. (2004, February 28). Cutting out the cadaver; dissecting human bodies in medical school anatomy labs, long a gruesome rite of passage for doctors, is going the way of house calls. Los Angeles Times. Retrieved April 28, 2004, from the Lexis-Nexis website, http://web.lexisnexis.com/universe/printdoc

Zarembo, A. (2004, March 14). Surgeons fear effect of scandal on training. Los Angeles Times. Retrieved April 28, 2004, from the Lexis-Nexis website, http://web.lexis-nexis.com/universe/printdoc

Ziv, A., Wolpe, P., Small, S., & Glick, S. (2003). Simulation-based medical


education: An ethical imperative. Academic Medicine, 78, 783-788.

Zugar, A. (2004, March 28). The case for and against cadavers. The Toronto Star. Retrieved April 28, 2004, from the Lexis-Nexis website, http://web.lexis-nexis.com/universe/printdoc


121 Appendices


122 Appendix A: Demographic Questionnaire This brief questionnaire is designed to gather some general demographic information from you as well as information on your computer experience before you begin the study. Responses are anonymous and no information about you individually will be identified or used in any way. Thank you for participating in this study. Please indicate your age range. __ 18-24 __ 25-30 __ 31-35 __ 36-40 __ 41-45 __> 45 Have you had a Human Anatomy course prior to this one? __No __Yes If yes, how long ago was the course (s)? __ Less than 5 years ago. __ 5 years ago or more. Have you had any experience prior to this class with any human anatomy software? _No _Yes If so, which software did you use? _Primal Pictures _ADAM _Neotek _I don’t remember Please indicate your area of study. __________________ Have you had a course prior to this class in which you dissected biological materials? _No _Yes If yes, please indicate the type of course it was. You may check all that apply. Middle School Honors Program


High School General Biology Undergraduate biology Other, please specify How old is the computer you will use most of the time to access this course? Less than one year 1 to 3 years old Greater than 4 years old Please rate your level of proficiency using the following software (Beginner / Intermediate / Advanced / I don't know): Web browsers (i.e., Internet Explorer, Netscape), Email, Instant Messaging/chat, Word Processing (i.e., Microsoft Word, WordPerfect), Spreadsheets, Presentation software. Thank you for taking this questionnaire! Please click on the "Done" button to submit your replies. You will be redirected to the USF homepage!


124 Appendix B: Study Guide List of Structures and Relationships – Human Skull 1. Angle of mandible 2. Anterior clinoid process 3. Anterior cranial fossa 4. Auditory tube 5. Body of mandible 6. Body of Sphenoid 7. Carotid canal 8. Choana 9. Clivus 10. Coronoid process 11. Cribriform plate 12. Crista gali 13. Ethmoid bone 14. External acoustic meatus 15. External occipital protuberance 16. Foramen magnum 17. Foramen ovale 18. Foramen rotundum 19. Foramen spinosum 20. Frontal bone 21. Greater wing of sphenoid 22. Head of mandible 23. Hypoglossal canal 24. Inferior concha 25. Inferior meatus 26. Inferior orbital fissure 27. Infraorbital foramen 28. Infraorbital groove 29. Infratemporal fossa 30. Internal acoustic meatus 31. jugular foramen 32. Lacrimal bone 33. Lacrimal fossa 34. Lateral pterygoid plate 35. Lesser wing of sphenoid 36. Mandible 37. Mandibular fossa 38. Mastoid process


125 39. Maxillary bone 40. Medial pterygoid plate 41. Mental foramen 42. Middle concha 43. Middle cranial fossa 44. Middle meatus 45. Nasal bone 46. Nasal cavity 47. Nasal spine 48. Nasion 49. Neck of mandible 50. Occipital bone 51. Occipital condyle 52. Optic canal 53. Oral cavity 54. Orbital cavity 55. Palatine bone 56. Palatine processes of maxillary bone 57. Parietal bone 58. Perpendicular plate of ethmoid bone 59. Petrous part of temporal bone 60. Posterior clinoid process 61. Posterior cranial fossa 62. Pterion 63. Pterygoid fossa 64. Ramus 65. Sella trucica 66. Sphenoid bone 67. Spinous process 68. Squamous part of temporal bone 69. Styloid process 70. Stylomastoid foramen 71. Superior orbital fissure 72. Superior temporal line 73. Supraorbital foramen (notch) 74. Supraorbital margin 75. Temporal bone 76. Temporal fossa 77. Vomer 78. Zygomatic arch


126 79. Zygomatic bone 80. Zygomatic process of temporal bone Relationships: 1. Which bones articulate (combine) to form the pterion? 2. Identify a foramen that enters the petrous portion of the temporal bone. 3. Identify the bones that form the borders of the inferior orbital foramen. 4. Identify the specific bone features that articulate to make the temporomandibular joint (TMJ). 5. Identify the foramen that is located along a line connecting the superior orbital notch and the mental foramen. 6. Identify the bone that makes up the anterior most-portion of the lateral wall of the orbit. 7. Identify the bone that forms the floor of the anterior cranial fossa. 8. Identify the bone that makes up the posterior most portion of the nasal septum. 9. Identify the small foramen that is located in the lateral wall of the foramen magnum. 10. Identify all the bones that articulate directly with the lacrimal bone. 11. What bony feature of the skull articulates with the first cervical vertebra to form the atlanto-occipital joint? 12. The posterior most portion of the hard palate is formed by a portion of which bone? 13. The anterior clinoid processes are features of which bone. 14. Which fossa on your list best describes the location of the cribriform plate of the ethmoid bone? 15. The anterior wall of the posterior cranial fossa is formed by which bone? 16. The foramen ovale and rotundum are features of which bone?


127 17. The optic foramen is a feature of which bone? 18. What foramen is located directly behind the temporomandibular joint? 19. The inferior orbital foramen is a feature of which bone? 20. The superior orbital notch is a feature of which bone? 21. The superior concha is a feature of which bone? 22. The nasal bone articulates with which bone(s)? 23. The mastoid process is a feature of which bone? 24. The jugular foramen is located between which two bones? 25. The sella tursica is a feature of which bone? 26. The middle ear cavity is located inside of which bone? 27. Which bone forms the floor of the posterior cranial fossa? 28. The mental foramen is a feature of which bone? 29. The medial and lateral pterygoid plates are features of which bone? 30. The foramen magnum is a feature of which bone?


128 Appendix C: Answer Key for Identification and Relationship Questions Identification Questions Question # Identification Answer number Identify the structure indicated by the arrow. 1. 69 Styloid process 2. 32 or 33 Lacrimal bone or lacrimal fossa 3. 73 Supraorbital foramen 4. 10 Coronoid process 5. 14 External acoustic meatus 6. 50 or 51 Occipital bone or occipital condyle 7. 77 Vomer 8. 21 or 66 Greater wing of sphenoid bone or sphenoid bone 9. 52 Optic canal 10. 50 or 61 Occipital bone or posterior cranial fossa 11. 6 or 65 Body of sphenoid or sella trucica 12. 17 Foramen ovale 13. 55 Palatine bone 14. 64 or 36 Ramus of mandible or mandible 15. 59 or 75 Petrous part of temporal bone or temporal bone Relationship questions Question # Relationship Answer # Relationship answer terms 16. The anterior clinoid processes are features of which bones? 66 Sphenoid 17.Which fossa on your list describes the location of the cribriform plate of the ethmoid bone? 3 Anterior cranial fossa 18. The middle ear cavity is located inside of which bone? 75 or 59 Temporal bone or petrous part of temporal bone 19. The jugular foramen is located between which two bones? 50, 75 Occipital, temporal 20. Which four bones articulate (combine) to form the pterion? 20, 57, 66, 75 or 21 Frontal, parietal, sphenoid, temporal, greater wing of sphenoid 21. Name a foramen that enters the petrous portion of the temporal bone. 30 or 14 Internal or external acoustic meatus 22. Identify the bone that makes up the posterior most portion of the nasal septum 77 Vomer 23. What bony feature of the skull articulates with the first cervical vertebra to form the atlanto-occipital joint? 51 Occipital condyle 24. Identify the foramen that is located along a line connecting the supraorbital notch and the mental foramen. 27 Infraorbital foramen


129 Relationship Questions continued 25. The superior concha is a feature of which bone? 13 Ethmoid 26. The nasal bone articulates with which two bone(s)? 20, 39 Frontal, maxillary bones 27. The inferior orbital foramen is a feature of which bone? 39 Maxillary bone 28. Identify the bone that makes up the anterior most-portion of the lateral wall of the orbit. 79 Zygomatic bone 29. The foramen magnum is a feature of which bone? 50 Occipital bone 20. The superior orbital notch is a feature of which bone? 20 Frontal bone


130 Appendix D: User Perspective Questionnaire This brief questionnaire is designed to gather your perspectives on the technology employed in learning about the skull. Your responses are important. Please complete each question. All information you share is confidential. Thank you for participating in this study. Please provide your full name in the space below. Describe how you felt while working with the PowerPoint images? completely confused a little confused everything made sense I don’t know Other (please specify) Please rate your level of agreement with the following statements. Strongly agree Agree DisagreeStrongly Disagree Not Applicable In general, the images were easy to use. I think this activity was fun. I could see the images clearly. The graphics were of high quality. It was easy to find specific information. I would like to use similar images to study other areas of human anatomy. I would use this PowerPoint as a primary reference. I found the PowerPoint images to be a waste of my time. I would rather study only images from a book. I feel that I can learn as much from the PowerPoint


131 images as from doing a real dissection. I was often confused as to where to go to find what I was looking for. Looking at these images hurt my eyes. Tell us which method you would prefer to use to learn human anatomy Textbooks only PowerPoints only Actual dissection Some combination of the above Other (please specify) Compared to what you may have anticipated, this task was.... much slower slow just right fast much faster Do you feel the PowerPoint added to your ease of learning the human anatomy material? No Yes I’m not sure What did you LIKE MOST about using the PowerPoint images? What did you LIKE LEAST about using the PowerPoint images? Thank you. Please click on the "Done" button to submit your responses. Thank you for taking the time to take this questionnaire. You will be now be redirected to the USF homepage! Go Bulls!


132 Appendix E: Fall 2005 Pilot Study Results Permission was granted to pilot test this study in BSC2085, Anatomy & Physiology for Health Professionals, in the College of Nursing which is taught by Dr. Stephen Morris, a Professor of the College of Nursing. This course has four sections (lectures and labs) with a total enrollment of approximately 85 students. Access to all four sections was granted. As per Dr. Stephen Morris, “Students who chose to participate in the study will be exempt from the next lab exam and given 100% credit for that exam.” An announcement was posted to Blackboard informing students of the steps to follow if they chose to enroll in the study. Informed consent was posted and signed online. The consent form signature then led to the initial demographic questionnaire created using SurveyMonkey. Students were also given a Pre-test online via Blackboard which consisted of 30 multiple choice questions. Each question contained an image of an anatomical structure taken from the Grant’s Dissector text. Each image had a red arrow pointing directly to a structure. Students were asked to choose the structure from a list of four multiple choice answers. Two, two hour presentations were made on October 27th during which the organization and purpose of the study was described. During the latter half of each session a PowerPoint presentation, composed of labeled images of the skull bones and features, was presented by a professor of the USF Health Sciences Center department of anatomy. The narrated presentation was recorded as a movie (Camtasia), which was subsequently posted to Blackboard along with a comprehensive list of structures for which the students would ultimately be responsible for identifying during the laboratory practical examination. Students were encouraged to study the list of structures of the skull


133 and compare them against the narrated PowerPoint. A non-narrated PowerPoint was also posted, in the event that some students had difficulty accessing the Camtasia movie version. The non-narrated PowerPoint had the same labeled images as the narrated version. Ten days following the presentation, students were asked to return to the Nursing School computer laboratory to perform the final portion of the study. The students who showed up on the day of testing were separated into study groups. Based upon their pretest score, each student was assigned to one of the following three groups: (a) 2D group (PowerPoint only), (b) 3D group (computer-based stereo images of actual human skulls within a PowerPoint) or (c) hands-on study of an actual human skull. For this process, the top three scorers on the pre-test were divided into groups A, B, then C. The next highest scorers were also divided into groups A, B and C. Students were not randomly stratified to groups as they will be for the data collection in the final research study; they were assigned based upon their score only. This pattern was repeated until all students were assigned to groups. Students in Group A were each provided with a computer equipped with a headset to permit independent review of the narrated PowerPoint during their one hour study time. Group B students were provided with a PowerPoint presentation composed of ten 3D stereo-images of various views of a human skull and a set of 3D glasses. They were asked to study the images for 40 minutes. For Group C, the assigned students were divided into groups of two, and each was provided with a printout of the PowerPoint illustrations to use as a laboratory guide, a list of bones and features of the skull for which they were responsible, and an actual human skull to hold and study in


134 the classroom. A total of 61 students showed up for the laboratory; as a result, each group was composed of 20 or more students. After the 40 minutes of study time, the students were led to a separate room for the practical examination composed of identification and relationship questions. Stations were set up, with one question per station. There were a total of 10 skulls used for the practical examinations. At each station, a list of structures was provided from which the students were to choose the correct answer. This was the same list used in their laboratory study prior to the test, but not the same list as posted on Blackboard. Students moved through the stations at their own pace, not moving on until the next station was vacated. Each student took approximately 20 minutes to complete the practical examination, which was composed of a total of 30 questions. Upon completion of the exam, each student left the laboratory through a side door so as not to share their information with others who were waiting to complete the test. Pilot Data Results Demographic questionnaire A total of 79 students completed the online demographic questionnaire after signing the consent form. There were a total of 15 questions on the questionnaire. The majority of students (91.1%) indicated an age range of 18 – 24 years, and lived less than ten miles from the USF campus (60.8%). Most students indicated (79.7%) that they would be willing to come to the USF campus to work in the gross anatomy laboratory on a Saturday morning. Although this question was not pertinent for the pilot test, it will be important to keep for the actual data collection in the spring term, as Saturday may be the only day the gross anatomy laboratory will be available for use by the undergraduate


135 students. A small majority of students (55.7%) had a prior anatomy course, and most of them had a dissection course prior to this one as well (79.7%). Most who had taken an anatomy course prior to this one, had taken that course less than five years ago (95.5%) and while in High School (77.8%). This makes sense, since the vast majority of the students are recently out of high school. Nursing students made up the majority of the students in the pilot data (64.6%) group, and 77.8% of them have newer computers, less than three years old. Most students rated themselves as either advanced or intermediate on software proficiency. Please refer to Table E1 for a list of questions and frequency distribution of responses for the Demographic Questionnaire.


136 Table E1. Demographic questionnaire results Please indicate your age range 18-24 25-30 31-35 36-40 41-45 91.10% 2.50% 1.30% 1.30% 3.80% Please indicate your distance from the USF Tampa campus live on campus < 10 mi. > 10 mi. > 50 mi. Out of state 12.70% 60.80% 24.10% 2.50% 0% Would you be willing to come to the Tampa campus for one Saturday morning to work in the Gross Anatomy laboratory? No Yes 20.30% 79.70% Have you had a Human anatomy course prior to this one? No Yes 44.30% 55.70% If yes, how long ago was the course? < 5 years > 5 years 95.50% 6.80% Please indicate your area of study. Nursing Speech Disorders Wellness Pre-Med Other 64.60% 1.30% 2.50% 8.90% 22.80% Have you had a course prior to this class in which you dissected biological materials? No Yes 20.30% 79.70% If yes, please indicate the type of course it was. You may check all that apply. Middle School Honors Program High School General Biology Undergrad. Biology Other 11.10% 77.80% 27% 19% How old is the computer you will use most of the time to access this course? < one year 1-3 years old > 4 years old 30.40% 65.80% 5.10%


137 Please rate your level of proficiency using the following software: Beginner Intermediate Advanced Don't Know Web browsers 0% 37% 63% 0% Email 0% 34% 66% 0% Instant messaging/chat 4% 30% 66% 0% Word processing 3% 33% 65% 0% Spreadsheets 30% 48% 18% 4% Presentation software 28% 46% 24% 3% User Perspective Questionnaire A total of 18 students from the group of 21 completed the questionnaire after their experience with the 3D software and glasses in the College of Nursing computer laboratory. The majority (94.4%) had not used any type of human anatomy 3D imaging software prior to this experience. Of the two students who had, they could not remember the name of the product they had used. Most students who used the 3D images (56%) indicated they were a “little confused” while working with the images. However, the majority (55.6%) found the task rate to be “just right”. When asked if the 3D software added to their ease of learning human anatomy, (44.4%) indicated they were not sure, and they also indicated that they would prefer a combination (83.3%) of textbooks, actual dissection and 3D software to learn anatomy. Fourteen questions were asked of the students regarding their level of agreement with statements regarding the 3D software.


138 The levels of agreement were “strongly agree”, “agree”, “disagree” and “strongly disagree”, with an option of NA, or not applicable. Most students agreed that they found the exercise to be fun (67%) and were able to visualize the 3D (61%). They agreed that the images were professional (72%) and of high quality (72%), and most agreed they would like to use the images to study other areas of human anatomy (72%). However, there was a split of 44% in agreement and 44% in disagreement for whether or not the information was easy to find. This may have been due to the fact that the 3D images were unlabeled, and unless the student made a serious effort to study the 2D PowerPoint, they would have a limited base of information and knowledge from which to work. When asked if they would use the 3D images as a primary reference, 67% of the students disagreed. When asked if they would rather study the 2D images from a PowerPoint, 50% disagreed. Most (56%) disagreed that they could learn as much from the 3D images as from doing a real dissection. Most (61%) also agreed that they would not need assistance to properly use the 3D images. Please refer to Table E2, for a list of questions and responses to the questionnaire.


139 Table E2. User perspective questionnaire results Have you used any type of Human Anatomy 3D imaging before? No Yes 94.40% 5.60% If so, please indicate which program. I don't remember the name 100% Please rate your level of agreement with the following statements: Strongly agree Agree Disagree Strongly disagree NA I found this exercise to be fun 0.00% 67% 28% 0% 6% I was able to visualize the images in 3D 22% 61% 11% 0% 6% The look of the images was professional 28% 72% 0% 0% 0% The graphics were of high quality 22% 72% 6% 0% 0% It was easy to find specific information 0% 44% 44% 6% 6% I would like to use these images to study other areas of human anatomy 6% 72% 17% 0% 6% I would use the 3D images only secondary to other materials. 17% 67% 17% 0% 0% I would use the 3D images as a primary reference. 0% 22% 67% 6% 6% It took me awhile before I could see the images in 3D. 0% 33% 61% 6% 0% I found the 3D images to be a waste of my time. 0% 11% 67% 17% 6%


140I would rather study only Grant’s Atlas of Anatomy’ s images from the PowerPoint 17% 28% 50% 6% 0% I feel that I can learn as much from the 3D images as from doing a real dissection. 0% 28% 56% 17% 0% I was often confused as to where to go to find what I was looking for. 6% 33% 61% 0% 0% To use 3D images properly I would need assistance. 6% 28% 61% 6% 0% Descibe how you felt while working with the 3D images completely confused a little confused Everything made sense don't know Other 0% 55.60% 44.40% 0.00% 0.00% Compared to what you may have anticipated with using the 3D images, this task was… much slower slow just right Fast Much faster 11.10% 33.30% 55.60% 0.00% 0.00% Do you feel the 3D software added to your ease of learning human anatomy? No Yes I'm not sure 33.30% 22.20% 44.40% Tell us which method or combination of methods you would prefer to use to learn human anatomy. text books only 3D software only Actual dissection Some combination of the above Other 16.70% 5.60% 33.30% 83.30% 5.60% In addition, each questionnaire contained two open-ended questions. The first asked the student what they liked most about using the 3D images, and the other asked what they liked least about using the images. Responses and themes that occurred are listed in Tables E3 and E4. In general, students liked the 3D images because they were


141 different and something new. A few felt that the images were more realistic and offered more or better detail. Themes that arose regarding what users liked least about the 3D images listed eye strain and confusion as two top themes. Table E3. Open –ended question responses and themes. What did you LIKE most about using the 3D? Themes Something new/different More realistic More/better detail Convenience I don’t know It was online and I could do it at home X i don't know X it was something different X The ability to see certain bones that would have otherwise been more difficult to visualize on a regualr powerpoint. X it was something new X Trying to locate the different regions X It realistic x That it was hands on X It was different X the look X It was more realistic x


142 What did you LIKE most about using the 3D?, continued More detail, allowed you to see proximity of surrounding structures X The clarity of the images, as opposed to dissection and the confusion entailed in those situations. X It looked cool and was a different way to learn X Made it easier to picture, looked like the skulls were actually infront of you X Being able to see 3d on a pc X You could see the fossas better. X the glasses!! X


143 Table E4. Open –ended question responses and themes. What did you like LEAST about using the 3D? Themes I don’t know Eye strain Lack of labels Visualization Confusing Nothing X n/a X it was alittle hard on the eyes....I had to keep refereing to the powerpoint to see where the labels where of all the things I needed to study X X Some of the smaller areas of the skull were actually harder to find because larger bones were in the way. X viewing the images irritated my eyes X It was confusing and was'nt very helpful X some things were not as easy to find X No assistance to help show me X While looking at the printout of the powerpoint (with the structure names), it was sometimes hard to then locate the exact structure on the 3D images. X Nothing X it was just as boring as powerpoint X


144 What did you like LEAST about using the 3D?, continued The glasses hurt my eyes x Learning to use the software. X the glasses bothered my eyes...but I have sensitive eyes x glasses gave me a headach x the glasses and taking them off x I think it would have been easier if there were labels on the structures. X the blue background x Descriptive Statistics As can be seen from Table E5, and the box plots below, the means for each measure, pretest, ID and relationship, across all groups were similar. The pretest scores overall visually appeared normally distributed in the boxplots and each had one missing score, which appeared as an outlier. Absolute values for skewness were below 1.0 for all measures except for the positive skewness of 1.6 as demonstrated in the “relationship” measure in group B. Kurtosis for the “relationship” measure in group B was also high at 4.05. Likewise, the kurtosis of 2.6 for the pretest measure in group A was also high, signifying non-normality. Cronbach’s Coefficient Alpha (Table E6) for test reliability for the pre-test, identification and relationship practical examination instruments was as follows: 0.627 (raw), 0.627 (standardized); 0.676 (raw) 0.624 (standardized); 0.471(raw). There was no standardized score given in SAS for the “relationship” practical. This may have been due to the fact that there were two questions (number 17 and number 27) on


which no students scored a correct response. The low Cronbach's alpha score for the relationship practical examination may also have been due to many students not being motivated to do well on the examination. This will be addressed in the future data collection by giving actual grades for performance on the various measures.

Table E5. Descriptive statistics (pilot) for the three measures by group

Measure         | Group A (2D): Mean, SD, Skew, Kurtosis | Group B (3D): Mean, SD, Skew, Kurtosis | Group C (Actual): Mean, SD, Skew, Kurtosis
Pre-test        | 54.5, 18.7, -0.82, 2.6                 | 51.4, 22.0, -1.0, 1.37                 | 52.5, 22.0, -1.19, 1.47
Identification  | 45.5, 15.0, 0.18, -1.30                | 41.2, 20.1, -0.3, -1.4                 | 49.0, 19.0, -0.15, -0.66
Relationship    | 16.0, 10.5, 0.04, -1.25                | 14.7, 11.0, 1.6, 4.05                  | 16, 12.9, 0.9, -0.27

Table E6. Cronbach's coefficient alpha

Test            Raw     Standardized
Pre-test        0.627   0.627
Identification  0.676   0.624
Relationship    0.471   No data
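For reference, the raw reliability coefficients reported in Table E6 follow the standard form of Cronbach's alpha; the formula below is added here for clarity and is not taken from the SAS output itself. For a k-item test, with item variances sigma_i^2 and total-score variance sigma_X^2,

alpha = (k / (k - 1)) * (1 - sum(sigma_i^2) / sigma_X^2)

or, in LaTeX notation, \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right).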


Figure E1. Boxplots for pre-test by group


Figure E2. Boxplots for identification practical by group


Figure E3. Boxplots for relationship practical by group

MANOVA Evaluation

As seen in the descriptive statistics, the groups demonstrated non-normality. Independence was maintained for groups A and B; however, group C students worked in small groups of 2 or 3 students. MANOVA is robust enough to withstand the violation of the independence assumption (Stevens, 2002). Wilks' Lambda F value of


0.9692 shows there is no significant difference between groups. If a significant difference had been observed, it would make sense to move forward with ANOVAs for each test among groups, using Tukey's Studentized Range (HSD) Test with a modified Bonferroni correction and alpha set to 0.025. Another view of the data shows that the relationships between the variables for each group appear to be linear and positive, but show low correlation (Table E7). For group A, the Pearson correlation coefficient for pretest and ID is only 0.47, and the same measure for pretest and relationship is 0.55. The correlation between ID and relationship is 0.51.

Table E7. Pearson correlation coefficients by group

                        Group A    Group B    Group C
Pretest/ID              0.47       0.27       0.50
Pretest/relationship    0.55       0.26       0.23
ID/relationship         0.51       0.40       0.54

Observations

Overall, the research design worked well. Students were able to access and digitally sign the online informed consent form, and all those who signed the form were able to complete the initial online questionnaire. A total of 78 students completed and signed the online informed consent form, and all of them completed the initial questionnaire as well. Of the 78 students who completed the informed consent, initial questionnaire, and pre-test, 65 showed up at the lab; therefore, 83% completed the pilot study. Scores


150 ranged from 10 to 28 on the randomly ordered pre-test baseline test. The follow-up questionnaire on perceptions of the 3D portion of the study resulted in 18 completed questionnaires, or 81%. To increase that percentage, in the final study, the questionnaire could be administered before the students leave the testing lab. Directions must be clear and repeated often for the undergraduate population. Approximately 20 emails were received over the course of two weeks, some from the same students, who could not find the documents or tests that were posted online to their Blackboard section. All information had been described in detail in the face-to-face sessions, and posted as an Announcement in each section of the course. In addition, emails were sent to all students in all four sections with detailed steps as to how to get involved in the study. In the future, in order for all students to understand the study and to be clear on the steps involved, the sequence of events will need to be described in detail on the course websites. Another area of improvement would be the list of structures. Based upon observations during the practical examination, it would be best to display the structure list in alphabetical order, so that students can find the structure they are looking for easily. In addition the same list of structures and relationships should be used for both studying and testing purposes. In addition, students in group B and C, the 3D group and hands-on group should be given only the 3D PowerPoint images or skull to study by, rather than also giving them the 2D PowerPoint in advance. Students in all groups did not spend much time in the computer lab reviewing the content of the PowerPoints. Many students seemed to feel they already knew the information when they came to the lab, and they did not really use the 3D or actual skulls to assist them. In fact many wanted to move onto the practical examinations in 20 minutes or so. They had


151 the opportunity to study the material for two weeks prior to coming to the lab, so they may have felt they knew the content already and did not need to study with the 3D or actual skull. This can be rectified by exposing the students to the material for the first time when they have access to the PowerPoint on Blackboard, in addition to offering a grade for their efforts. In this way, the students should really use the method they are offered to learn the material. Students in the study were asked to give their names on the initial questionnaire so that their distance from campus could be determined. A filter can be applied to the data within SurveyMonkey in order to determine which students had taken an anatomy course prior to this one. Therefore, additional information about the groups can be ascertained, such as correlating test scores with prior experience. Scores on the practical exams ranged from 13% to 80% for the identification practical examination and 0% to 44% for the relationship practical examination. Overall, students had more difficulty with the relationship questions than the identification questions. This may have been due to the fact that the students did not have skulls to refer to for that portion of the test, or to the inherent difficulty of the material. Another change in the research protocol is that participants from each group, not only those in the 3D group, should have a perception questionnaire. It is important to compare student attitudes across groups, in order to see if the 3D group had any additional or different insight into the 3D material and the learning of the human material. The new sequence of events is reflected in Table E8. Text in bold, highlights changes made to the sequence due to Pilot test results.


Table E8. New sequence of procedures (instrument/procedure: when/how administered)
1. Informed consent: administered via Blackboard
2. Demographic questionnaire: administered via SurveyMonkey
3. Pre-test: administered via Blackboard
4. Volunteers assigned to groups: based upon pre-test scores and physical distance from campus
5. 2D narrated PowerPoint with list of structures to learn: administered via Blackboard
6. Group A, PowerPoint only: administered via Blackboard
7. Group B, PowerPoint and 3D stereo images: 3D stereo images and 3D PowerPoint in a 2-hour lab
8. Group C, PowerPoint and prosection: 2D PowerPoint administered via Blackboard and prosections in a 2-hour lab
9. All volunteers, identification exam: in lab
10. All volunteers, spatial relationships exam: in lab
11. Groups A, B, and C, qualitative questionnaire: administered via SurveyMonkey


Summary

In summary, the Pilot Test data showed no significant difference with the MANOVA. There could be a number of reasons for this. One reason is that the sample sizes may have been too small to detect a difference. This issue should be adequately addressed in the actual data collection, as approximately 250 students enroll in the Anatomy and Physiology I laboratory. Students also were not assigned a grade for participating in the pilot study; the undergraduates enrolled in the study did not, therefore, seem to take it seriously. Those assigned to the 3D group did not take full advantage of looking at the images. Many of them put their 3D glasses on, simply took them off again, and asked to take the final examination. The impression was that they wanted to be finished as soon as possible. In addition, the 3D images were not as crisp as they could have been, which may have led to eye strain for the students. This will be addressed in the actual data collection. Select areas of the Bassett 3D images will be manipulated so that the portion of interest is more in focus. Smaller areas of the images will be converged into a 3D stereo image, which should reduce eye strain because more of the smaller image will be in focus for the user. The 3D images will also be labeled, which the students in the pilot indicated would be of help to them.
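For illustration only, the image-convergence step described above can be approximated in code. The following is a minimal sketch, assuming a red-cyan anaglyph encoding for the desktop 3D display (the actual Pokescope and Photoshop workflow is not reproduced here); the file names and crop box are hypothetical placeholders.

    # Minimal sketch: fusing a cropped left/right stereo pair into a red-cyan
    # anaglyph, assuming that is the encoding viewed with the study's 3D glasses.
    # File names and crop box are illustrative only, and the two source images
    # are assumed to share the same dimensions.
    import numpy as np
    from PIL import Image

    def make_anaglyph(left_path, right_path, crop_box=None):
        left = Image.open(left_path).convert("RGB")
        right = Image.open(right_path).convert("RGB")
        if crop_box:                      # converge on a smaller region of interest,
            left = left.crop(crop_box)    # as described above, so more of the
            right = right.crop(crop_box)  # displayed area stays in focus
        l, r = np.asarray(left), np.asarray(right)
        out = np.zeros_like(l)
        out[..., 0] = l[..., 0]           # red channel from the left-eye image
        out[..., 1:] = r[..., 1:]         # green and blue channels from the right-eye image
        return Image.fromarray(out)

    # Example call (hypothetical file names):
    # make_anaglyph("skull_left.jpg", "skull_right.jpg", crop_box=(200, 150, 900, 700)).save("skull_3d.png")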


Appendix F: Spring 2006 Pilot Study Results

A second pilot study was conducted in the spring of 2006 in an online undergraduate anatomy and physiology laboratory, HSC2933.318S06. The purpose of conducting a second pilot study was to address five issues. The first issue concerned the list of structures the students used to study the anatomy; it was thought that the list may have been too long and would need to be condensed. The second issue concerned using two groups rather than three in the treatments, and having identical narration and labeling for the PowerPoint movie files. It was determined that the hands-on group was unnecessary for this study, as this approach is not used in any actual pre-nursing anatomy and physiology course. Likewise, it was felt that the narration should be identical, if possible, for the two groups to control for extraneous variables. It was also decided that the user perspective questionnaire (Appendix D) should be modified so that one questionnaire, rather than two different questionnaires, could be delivered to both groups. All assessments needed to be re-assessed and piloted to ensure a range of responses. Finally, it was determined that the PowerPoint AVI movie files were to be reviewed by multiple experts in the fields of anatomy and instructional technology for correspondence to Mayer's seven criteria for effective presentations (Mayer, 1989). This course had a total enrollment of 160 students. A total of 86 students signed the online consent form in order to volunteer to participate in the study. Students were told that participation in the study could net them up to an additional 8 points on their final grade: the proportion they achieved on the practical examination was multiplied by 8, so those achieving 100% on the practical examination could receive 8 points added to their final course grade. This opportunity did not conflict,


in terms of time or content, with any other assignments for the course. The informed consent form was provided to the students online, and they were then provided a link from their Blackboard site to the signature form. Once a signature was received, they were led to the online demographic questionnaire (Appendix A). Students were also directed to an online Pre-Test that was made available on the Blackboard site. Five questions were eliminated from the first pilot of this instrument, based upon the following criterion: questions that had a score of 97% or better for all students were removed. This resulted in a Pre-Test consisting of 25 multiple-choice questions. Four of the 89 students had to be reminded to take the Pre-Test. Once the Pre-Test scores were obtained, students were randomly assigned to either group A, the 2D group, or group B, the 3D group. This was done by stratifying the scores from highest to lowest. The two highest scores were randomly assigned an A or B by pulling either an "A" or "B" marble blindly from a dish. This process was continued until all students were assigned to a group, which increased power and maintained random assignment of students to groups. Once the students were assigned to their groups, an area within Blackboard was created for each of the 2D and 3D groups. This ensured that only those in the 2D group could see the 2D PowerPoint and AVI file, and those in the 3D group could see only the 3D PowerPoint and AVI file. The 2D group had 42 students and the 3D group had 44 assigned to it.
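For illustration, the stratified pairing and random assignment described above can be expressed as a short procedure. This is a minimal sketch, not the procedure actually used in the study (which relied on a physical marble draw); the function name, data layout, and scores are hypothetical.

    # Minimal sketch of the stratified random assignment described above:
    # sort students by pre-test score, then within each successive pair of
    # adjacent scores randomly give one member to group A (2D) and one to
    # group B (3D). Names and data layout are illustrative only.
    import random

    def assign_pairs(scores, seed=None):
        """scores: dict mapping student id -> pre-test score."""
        rng = random.Random(seed)
        ranked = sorted(scores, key=scores.get, reverse=True)  # highest to lowest
        groups = {}
        for i in range(0, len(ranked) - 1, 2):
            first, second = ranked[i], ranked[i + 1]
            if rng.random() < 0.5:                 # stands in for the "marble draw"
                groups[first], groups[second] = "A", "B"
            else:
                groups[first], groups[second] = "B", "A"
        if len(ranked) % 2:                        # odd student out, assigned at random
            groups[ranked[-1]] = rng.choice(["A", "B"])
        return groups

    # Example with hypothetical scores:
    # assign_pairs({"s01": 24, "s02": 22, "s03": 21, "s04": 18}, seed=1)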


Delivering the 3D glasses

Students assigned to the 3D group were notified by email, as well as via an announcement on their Blackboard site, that they must obtain 3D glasses to appropriately view the 3D PowerPoint. Glasses were made available to students over a six-hour period in the main campus library lobby coffee shop. Times were two hours on a Tuesday morning, two hours that same evening, and finally two hours the following morning. Of the 44 students assigned to the 3D group, 50% came to get their glasses during the initial times allocated. Of the remaining 22 students in the 3D group, 19 made other arrangements to retrieve them, primarily by coming to campus at various times throughout the week. A total of 41 students had 3D glasses in hand for the pilot test. A total of 42 students had been randomly assigned to the 2D group.

Creating the PowerPoints

PowerPoints for the two groups were prepared using identical 2D images taken from Grant's Atlas of Anatomy, the Bassett Collection, or custom-created images. Custom images were created with Pokescope software when a necessary orientation was not available from the two other sources, or when a 3D image was blurry. For instance, custom images of the nasal cavity and orbit were created because the Bassett 2D images did not correspond well enough to create a sharply focused 3D image. Also, a custom image of the infratemporal fossa was created because the orientation within the Bassett Collection was not the same as in the Grant's Atlas of Anatomy image. Each slide of a Grant's Atlas of Anatomy image in the PowerPoint was then followed by an image of a real skull in the same orientation. Images were labeled identically from slide to slide and PowerPoint to PowerPoint. Once the appropriate sequence was determined for the slides, a lecture for the PowerPoint was recorded using TechSmith's Camtasia. This recording, along with the PowerPoint images and any notations made on the slides, was packaged


into an AVI file format. The same recording was utilized for each of the PowerPoints, and the same notations were made to the slides. A full professor of anatomy with over 20 years of teaching experience in human anatomy recorded the lectures and created the annotations on the slides. In addition to uploading the 2D and 3D AVIs to the corresponding Blackboard group site, a standard PowerPoint was also uploaded without audio or notations. This was done to accommodate any student who could not download the AVI file, as well as to provide images only, without narration and annotation, for studying purposes. A list of Mayer's (1989) seven criteria was created. Two graduate students in Instructional Technology were asked to review the 3D PowerPoint and to indicate which of the seven criteria they felt the PowerPoint met. One indicated that all seven criteria were met and the other indicated that six of the seven were met. See Table F1.


Table F1. Mayer's criteria, as rated by two reviewers
(a) Complete (contains all the objects, states, and actions of the system): Reviewer A, Yes; Reviewer B, Yes (unsure on "action" component)
(b) Concise (contains just enough detail): Reviewer A, Yes; Reviewer B, Yes (detail for nomenclature)
(c) Coherent (makes "intuitive sense"): Reviewer A, Yes; Reviewer B, Yes (labeled appropriately with corresponding 2D/3D images)
(d) Concrete (presented at an appropriate level of familiarity): Reviewer A, Yes; Reviewer B, No (not for me, but perhaps for students)
(e) Conceptual (potentially meaningful): Reviewer A, Yes; Reviewer B, Yes
(f) Correct (the objects and relations in it correspond to actual objects and events): Reviewer A, Yes; Reviewer B, Yes (nomenclature corresponds to bones/features of the skull)
(g) Considerate (uses appropriate vocabulary and organization): Reviewer A, Yes; Reviewer B, Yes (seems well organized and appropriately labeled)


Study Guide

In addition to the PowerPoints and AVI file, a study guide (Appendix B) containing a list of structures and relationships the volunteers were asked to study was uploaded to each group area within Blackboard. Each group was given one week to learn the material by utilizing the PowerPoint, AVI file, and study guide. The study guide was based upon items in the Grant's Atlas of Anatomy textbook for the skull. The original list of 84 items was shortened to 80, based upon the deletion of duplicates and erroneous terms. Thirty relationship questions were also incorporated into the study guide. This list of questions was created by a professor of anatomy with over 20 years of teaching experience in human anatomy. It was felt that the list of relationship questions would assist the students in creating appropriate mental models of the intricacies that exist between common features and bones of the skull as they studied the images. The list of 80 structures was then shown to two instructors who teach undergraduate nursing and biology students, in order to determine if the list of structures was too in-depth for the students in this undergraduate course. The two instructors have taught at six different institutions throughout Florida and Georgia, including the College of Nursing at USF. They both felt that the list was not too extensive and that it was in essence the same list they used in their undergraduate courses. The lab manual they used for their courses was by the same author, Elaine Marieb, but it differed from the lab manual used in the undergraduate course for this pilot test, which was the Study Guide for Memmler's The Human Body in Health and Disease, by Barbara Cohen and Dena Wood, 10th edition, 2005, Lippincott Williams and Wilkins.


Results

Of the original 86 students who volunteered, 63 completed the pilot for extra credit points. Testing was conducted one week after the 2D and 3D PowerPoints and study guides were uploaded to Blackboard. Students were given three available times to take the practical examination of skull structure identification and relationships. All times were one hour apart and all on the same day; this was done to ensure sample independence. The students in each group were brought into the histology laboratory at the Health Sciences Center and were given the same instructions regarding the examination. They were each asked to find a place in front of a specimen and to indicate on their answer sheet, with a circle, the answer they were beginning with. They were then given one minute to correctly identify the structure(s) and/or feature of the skull specimen in front of them. After one minute, a timer would go off and volunteers were asked to move to the next specimen. The practical examination consisted of 15 identification questions and 15 relationship questions taken directly from the study guide. For a key to the structures that were to be identified by the volunteers, please refer to Appendix 8. It took 30 minutes for each student to complete the examination. All examinations were administered within a three-hour time frame. A User Perspective Questionnaire (Appendix D) was then posted to the Blackboard site, and students were asked to complete it. The questionnaire was redesigned to include questions for both the 2D and 3D students on one questionnaire. Students were asked for their name on the questionnaire in order to determine which group they belonged to. Of the 63 students who completed the lab practical examinations, 59 also completed the user perspective questionnaire.


Descriptive Statistics

Descriptive statistics (Table F2) for each group provided the following information. For the overall score on the laboratory practical, the mean was 55.44 with a standard deviation of 24.95. Values by group for overall scores are listed in Table F2. The 2D group had a mean of 50.63, with a standard deviation of 27.48. The 3D group had a mean of 60.73, with a standard deviation of 21.04. The means appeared to be different, but statistically were not: the p-value for F was 0.1490, which is greater than the pre-established alpha of 0.05. Therefore, we fail to reject the null hypothesis that the means are equal.

Table F2. Descriptive statistics for overall scores
Overall: mean 55.44, standard deviation 24.95
2D group: mean 50.63, standard deviation 27.48, skewness -0.315, kurtosis -1.57
3D group: mean 60.73, standard deviation 21.04, skewness -0.60, kurtosis -0.009

For the ID scores (Table F3), the overall mean was 59.26 with a standard deviation of 26.39. For the 2D group, the ID test had a mean of 55.3 and a standard deviation of 30.25, with a negative skewness of -0.402, a negative kurtosis of -1.43, and no outliers. For the 3D group, the ID test had a mean of 63.63 and a standard deviation of 21.03, with a negative skewness of -0.58, a positive kurtosis of 0.45, and no outliers. From visual inspection the sample means are not equal, as seen in the univariate plot below (Figure F1). However, the p-value for F is 0.0511, which is greater than the alpha of 0.05, so we fail to reject the null hypothesis that the means are equal. Looking at


the pooled method for the t-test gives a p-value of 0.213, which is larger than the alpha of 0.05, again causing us to fail to reject the null hypothesis that the means are equal.

Table F3. Descriptive statistics for identification (ID) scores
Overall: mean 59.26, standard deviation 26.39
2D group: mean 55.3, standard deviation 30.25, skewness -0.402, kurtosis -1.43
3D group: mean 63.63, standard deviation 21.03, skewness -0.58, kurtosis 0.45

Scores for the relationship portion of the practical examination are listed below in Table F4. Again, the values look normally distributed, with slight negative skewness. The means appear to differ, but the p-value for F is 0.8, causing us to fail to reject the null hypothesis that the means are equal. The t-test for the pooled method with equal variances shows a p-value of 0.191, larger than the alpha of 0.05. This again causes us to fail to reject the null hypothesis that the means are equal.

Table F4. Descriptive statistics for relationship scores
Overall: mean 50.06, standard deviation 24.8
2D group: mean 46.15, standard deviation 25.19, skewness -0.10, kurtosis -1.52
3D group: mean 54.36, standard deviation 24.05, skewness -0.312, kurtosis -1.027
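For readers who wish to reproduce this style of summary, the group descriptive statistics and the pooled two-sample t-test can be computed as follows. This is an illustrative sketch using SciPy with placeholder score arrays, not the statistical package output reported above.

    # Illustrative sketch (not the study's original analysis code): computing the
    # group descriptive statistics and the pooled two-sample t-test reported above.
    # The score arrays are hypothetical placeholders for the 2D and 3D groups.
    import numpy as np
    from scipy import stats

    scores_2d = np.array([55, 30, 80, 47, 66, 20, 73, 61])   # placeholder data
    scores_3d = np.array([63, 58, 71, 49, 88, 60, 52, 67])   # placeholder data

    for name, x in [("2D", scores_2d), ("3D", scores_3d)]:
        print(name,
              "mean=%.2f" % x.mean(),
              "sd=%.2f" % x.std(ddof=1),              # sample standard deviation
              "skew=%.2f" % stats.skew(x),
              "kurtosis=%.2f" % stats.kurtosis(x))    # excess (Fisher) kurtosis

    # Pooled (equal-variance) two-sample t-test comparing the group means
    t, p = stats.ttest_ind(scores_2d, scores_3d, equal_var=True)
    print("pooled t = %.3f, p = %.3f" % (t, p))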


Figure F1. Plots for identification scores by group (boxplots, scores 0-100, group 0 = 2D and group 1 = 3D).


MANOVA

When running the MANOVA between groups for each test, there was no significance at the current sample sizes of 30 and 33. As seen in the descriptive statistics, the groups demonstrated normality, and independence was maintained for groups A and B. The Wilks' lambda F value of 0.40 shows there is no significant difference between groups. If a significant difference had been observed, it would have made sense to move forward with ANOVAs for each test among groups using Tukey's Studentized Range (HSD) test with a modified Bonferroni correction and alpha set to 0.025. Another view of the pilot data shows that the relationships between the variables for each group appear to be linear and positive and demonstrate medium to high correlation. For the 2D group, the Pearson correlation coefficient for ID and relationship scores is 0.90; the same measure for relationship and overall scores would be expected to be high, and it was, at 0.976. The correlation between ID and overall scores was also high, at 0.90. For the 3D group, the Pearson correlation coefficient for ID and relationship scores is lower, at 0.54, while the correlations between ID and overall scores and between relationship and overall scores were 0.75 and 0.84, respectively.
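As an illustration of the analyses described in this section, a one-way MANOVA on the two outcomes and the within-group Pearson correlations can be sketched as follows. The data frame, column names, and values are hypothetical placeholders, and this is not the analysis code used for the pilot.

    # Illustrative sketch (not the study's original code) of the between-group
    # MANOVA and the Pearson correlations described above, using a pandas
    # DataFrame with hypothetical columns: group ("2D"/"3D"), id_score, rel_score.
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    df = pd.DataFrame({
        "group":     ["2D"] * 4 + ["3D"] * 4,                 # placeholder rows
        "id_score":  [55, 73, 40, 67, 63, 80, 47, 71],
        "rel_score": [46, 60, 33, 58, 54, 70, 41, 66],
    })

    # One-way MANOVA: do the (id_score, rel_score) mean vectors differ by group?
    fit = MANOVA.from_formula("id_score + rel_score ~ group", data=df)
    print(fit.mv_test())          # reports Wilks' lambda, Pillai's trace, etc.

    # Pearson correlation between the two outcomes, computed within each group
    for name, sub in df.groupby("group"):
        print(name, sub["id_score"].corr(sub["rel_score"]))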


Qualitative Themes

A number of themes emerged from the qualitative open-ended questions on the User Perspective Questionnaire (Appendix D). A total of 59 students completed the online User Perspective Questionnaire. Students were told they could not receive their extra credit grade until the questionnaire had been completed. Values in these tables have not yet been stratified by group, but they do indicate the overall themes that emerged. Additional information derived from the questionnaire is found in Tables F5 through F10 below.

Table F5. Themes from qualitative questions
What did you like MOST about using the PowerPoints? Something new/different; more realistic; more/better detail; convenience; ease; high quality; well organized; it was 3D
What did you like LEAST about using the PowerPoints? Eye strain; labeling; visualization; confusing; I don't know

Table F6. Additional information from questionnaire
Describe how you felt while working with the PowerPoint images.
Completely confused: 1.7%
A little confused: 15.3%
Everything made sense: 74.6%
Don't know: 1.7%
Other (please specify): 6.8%


Table F7. Additional information from questionnaire
Please rate your level of agreement with the following statements. (Strongly agree / Agree / Disagree / Strongly disagree / Not applicable; response average)
In general the images were easy to use: 31% (18) / 61% (36) / 8% (5) / 0% (0) / 0% (0); average 1.78
I think this activity was fun: 22% (13) / 51% (30) / 20% (12) / 2% (1) / 5% (3); average 2.17
I could see the images clearly: 25% (15) / 59% (35) / 15% (9) / 0% (0) / 0% (0); average 1.90
The graphics were of high quality: 24% (14) / 61% (36) / 14% (8) / 0% (0) / 2% (1); average 1.95
It was easy to find specific information: 15% (9) / 54% (32) / 24% (14) / 5% (3) / 2% (1); average 2.24
I would like to use similar images to study other areas of human anatomy: 32% (19) / 49% (29) / 15% (9) / 3% (2) / 0% (0); average 1.90
I would use this PowerPoint as a primary reference: 25% (15) / 56% (33) / 15% (9) / 3% (2) / 0% (0); average 1.97
I found the PowerPoint images to be a waste of my time: 2% (1) / 5% (3) / 47% (28) / 46% (27) / 0% (0); average 3.37
I would rather study only images from a book: 2% (1) / 12% (7) / 44% (26) / 41% (24) / 2% (1); average 3.29
I feel that I can learn as much from PowerPoint images as from doing a real dissection: 11% (6) / 35% (20) / 40% (23) / 12% (7) / 2% (1); average 2.60
I was often confused as to where to go to find what I was looking for: 3% (2) / 15% (9) / 56% (33) / 22% (13) / 3% (2); average 3.07
Looking at these images hurt my eyes: 3% (2) / 14% (8) / 51% (30) / 29% (17) / 3% (2); average 3.15


Table F8. Preferred method to learn human anatomy
Tell us which method you would prefer to use to learn human anatomy.
Textbooks only: 3.3%
PowerPoints only: 1.7%
Actual dissection: 10%
Some combination of the above: 85%
Other (please specify): 0%

Table F9. Task rate
Compared to what you may have anticipated, this task was...
Much slower: 0%
Slow: 3.3%
Just right: 65%
Fast: 23.3%
Much faster: 8.3%

Table F10. Did the PowerPoint add to ease of learning human anatomy?
Do you feel the PowerPoint added to your ease of learning the human anatomy material?
No: 8.3%
Yes: 76.7%
I'm not sure: 15%


Summary

In summary, the Pilot Test data showed no significant difference between groups on tests of identification or relationships at the current sample sizes of 30 and 33 students. Students did appear to take this practical examination seriously, since they were able to increase their overall course grade by participating. Both the 2D and 3D images were available to the students in each group for one week. Few students (two) contacted me during that time to express concerns about not seeing or finding the PowerPoint or AVI file. One of the greatest areas of concern during this pilot was getting the 3D glasses into the hands of the students. This issue will be addressed during the summer term by requiring all students to come to the USF campus for an orientation session regarding their course, at which time 3D glasses will be given to all students. It is anticipated that additional 3D samples will be provided to the students during the summer term, in order to make the 3D glasses a necessity for all students, after the time of data collection for this study. The crispness of the 3D images was increased for this set of PowerPoints. This was done by blackening the background in Photoshop, creating new images when the Bassett images were not crisp or did not have the correct orientation, and decreasing the actual area to be viewed in 3D. This helped in reducing the blurry edges of the images by focusing the user's attention on the area of interest. The 3D images were also labeled identically to the 2D images, which the students in the first pilot indicated would be of help to them.


Appendix G: Spring 2006 Pilot Addendum

Data collected in the spring 2006 Anatomy and Physiology laboratory, HSC2933.318S06, was run again using a Doubly MANOVA rather than a standard MANOVA. This was done to increase power, to construct doubly multivariate contrasts between the 2D and 3D groups, and to compare each group on the outcome variables, identification and relationship. Of the original 86 students who signed the online consent and completed the pre-test, only 19 pairs of students, or 38 students, were eligible to have their data included in the analysis. Once the pre-test scores were obtained, students were randomly assigned to either group A, the 2D group, or group B, the 3D group. Once the stratification of volunteers was completed, there were originally 42 pairings. Results for a large number of volunteers were removed from the data analysis because, when one member of a pairing did not show up for the practical examination, the results for the second member of the pairing also had to be removed. This resulted in the original 42 pairings dwindling to 19 pairs. Pre-test scores ranged from a high of 24 to a low of 6. In comparing the students who did take the practical examination with those who did not take the exam (and consequently had their group mate's score removed from the data analysis), it is not clear that there was any pattern relating no-shows to pre-test score. For instance, of the pairings that fell within the range of 6-15 on the pre-test scores, eleven had to be removed from the data analysis; of the pairings that fell within the range of 16-24 for the pre-test, twelve were removed. Also, similar numbers from each group were dropped from the study, i.e., 11 from group A and 13 from group B. Those students who did not attend the practical


examination appear to be evenly split between top-scoring and low-scoring students, and between groups (see Table G1).

Table G1. Information on volunteers deleted from study
Pre-test score / group / age / prior HA experience / study area / prior dissection experience
22 / A / 18-24 / Yes / nursing / Yes
21 / A / 18-24 / Yes / P.T. / Yes
21 / A / 25-30 / Yes / nursing / Yes
20 / B / 18-24 / Yes / pre-med / Yes
19 / B / 18-24 / Yes / nursing / Yes
17 / A / 18-24 / Yes / nursing / Yes
17 / B / 18-24 / Yes / nursing / No
16 / B / 18-24 / No / nursing / Yes
16 / B / 18-24 / Yes / nursing / Yes
16 / A / 25-30 / Yes / exer. sci. / Yes
16 / A / 18-24 / Yes / pre-med / Yes
16 / A / 18-24 / Yes / nursing / Yes
15 / B / 18-24 / Yes / nursing / Yes
15 / A / 18-24 / Yes / nursing / No
14 / B / 18-24 / Yes / wellness / Yes
13 / A / 18-24 / Yes / nursing / Yes
13 / B / 18-24 / Yes / nursing / Yes
13 / B / 18-24 / Yes / nursing / Yes
12 / B / 18-24 / Yes / P.T. / Yes
12 / A / 18-24 / Yes / wellness / No


12 / B / 18-24 / Yes / psych / Yes
11 / A / 18-24 / Yes / nursing / Yes
10 / B / 18-24 / Yes / wellness / Yes
9 / B / 18-24 / Yes / nursing / Yes

Descriptive Statistics

Descriptive statistics for the outcome variable "identification" gave us the following information. For the ID2D score on the laboratory practical, the mean was 66.36 with a standard deviation of 25.79 (Table G2). The ID3D group had a lower mean of 61.84, with a standard deviation of 23.61. The distribution of the ID2D scores was negatively skewed (-1.07) with four outliers, scores of 33, 20, 20, and 13 at the lower end of the distribution. The scores ranged from a low of 13 to a high of 100, with an IQR of 14.0, meaning that 50% of all scores were between 66 and 86. The distribution of the ID3D scores was also negatively skewed (-0.42), but with no outliers. The sample mean was slightly lower than the ID2D mean, at 61.84. The range was wider, from a low of 7 to a high of 100, with an IQR of 33, meaning that 50% of all scores were between 40 and 83. See the boxplots in Figure G1. Descriptive statistics for the outcome variable "relationship" gave us the following information. The Rel2D score on the laboratory practical had a mean of 56.94 with a standard deviation of 23.51. The distribution of the Rel2D group was negatively skewed (-0.89) with no outliers. The scores ranged from a low of 10 to 86, with an IQR


of 29, meaning that 50% of all scores were between 46 and 78. The distribution of the Rel3D scores on the laboratory practical examination was negatively skewed (-0.20) with no outliers. The mean was slightly lower, at 53.0, with a standard deviation of 25.61. Scores ranged from a low of 10 to a high of 92, with an IQR of 47, meaning that 50% of all scores were between 30 and 77. See Table G3 below. All distributions were slightly negatively skewed and each covered a large range, although both "relationship" outcomes had smaller ranges. In general, the 2D groups had higher means than the 3D groups. See the boxplots in Figures G1 and G2.

Table G2. Descriptive statistics for identification scores
ID2D: mean 66.36, standard deviation 25.79, skewness -1.07, kurtosis 0.134
ID3D: mean 61.84, standard deviation 23.61, skewness -0.42, kurtosis 0.22

Table G3. Descriptive statistics for relationship scores
Rel2D: mean 56.94, standard deviation 23.51, skewness -0.89, kurtosis -0.41
Rel3D: mean 53.0, standard deviation 25.61, skewness -0.20, kurtosis -1.06
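The interquartile-range statements above (for example, "50% of all scores were between 46 and 78") follow from the quartiles of each score distribution. A minimal sketch with hypothetical scores, not the study's data:

    # Illustrative sketch: the five-number summary and IQR used above, plus the
    # common 1.5*IQR boxplot outlier rule. The score array is a placeholder.
    import numpy as np

    rel_3d = np.array([10, 30, 41, 53, 58, 66, 77, 84, 92])   # placeholder scores

    q1, median, q3 = np.percentile(rel_3d, [25, 50, 75])
    iqr = q3 - q1
    print("min=%s Q1=%s median=%s Q3=%s max=%s IQR=%s"
          % (rel_3d.min(), q1, median, q3, rel_3d.max(), iqr))

    # Points beyond Q1 - 1.5*IQR or Q3 + 1.5*IQR would be flagged as outliers
    outliers = rel_3d[(rel_3d < q1 - 1.5 * iqr) | (rel_3d > q3 + 1.5 * iqr)]
    print("outliers:", outliers)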


Figure G1. Boxplots for 2D and 3D identification scores (ID2D and ID3D, scores 0-100).


Figure G2. Boxplots for 2D and 3D relationship scores (Rel2D and Rel3D, scores 0-100).

Doubly MANOVA Repeated Measures

A doubly-multivariate (Doubly MANOVA) repeated measures analysis was run on the data for the variables ID2D and ID3D (scores on the identification examination) and Rel2D and Rel3D (scores on the relationship portion of the examination), along with Differential Item Functioning (DIF), to see if and where differences existed. When reviewing the Doubly MANOVA results, there is a significant difference for the main effect of the treatment groups 2D and 3D (Wilks' lambda = 0.07998712, p < .0001). However, there is no significant treatment*outcome


effect (Wilks' lambda = 0.9758619, p = .8125). This suggests that the difference between the 2D and 3D treatment groups does not vary across the dependent variable measures of identification and relationship. When graphed, there is a visible between-treatment difference; however, there is no significant difference when comparing treatments across the identification and relationship outcomes. Please refer to Figure G3. The correlation analysis for the variables demonstrates a stronger correlation between ID2D and Rel2D (0.89195) than between ID3D and Rel3D (0.57260). This suggests that the two outcome measures correlate with each other more strongly under the 2D treatment than under the 3D treatment.
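The doubly-multivariate test reported above was run in a standard statistics package. As a simplified, univariate analogue of its two hypotheses, the within-pair contrasts can be formed and tested directly; the sketch below uses hypothetical paired scores and is not the dissertation's analysis.

    # Simplified, illustrative analogue (not the original SAS-style analysis) of the
    # doubly-multivariate contrasts: within each pair, form a treatment contrast and
    # a treatment-by-outcome contrast, then test each against zero.
    import numpy as np
    from scipy import stats

    # One row per matched pair: [ID2D, Rel2D, ID3D, Rel3D] (hypothetical values)
    pairs = np.array([
        [66, 57, 62, 53],
        [80, 70, 75, 66],
        [53, 46, 60, 58],
        [90, 78, 83, 70],
        [40, 33, 47, 41],
    ], dtype=float)

    id2d, rel2d, id3d, rel3d = pairs.T

    # Treatment main effect: average 2D score minus average 3D score within each pair
    treatment_contrast = (id2d + rel2d) / 2 - (id3d + rel3d) / 2
    print(stats.ttest_1samp(treatment_contrast, 0.0))

    # Treatment-by-outcome interaction: does the 2D-minus-3D difference change
    # between the identification and relationship outcomes?
    interaction_contrast = (id2d - id3d) - (rel2d - rel3d)
    print(stats.ttest_1samp(interaction_contrast, 0.0))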


Figure G3. Graph of group differences (mean identification and relationship scores for the 2D and 3D groups), spring pilot.


Appendix H: Summer 2006 Pilot Demographic Survey

A third pilot study was conducted during the summer of 2006. This course had a total enrollment of 87 students at the beginning of the six-week term; the final student number was 80 at the end of the term. Participation in the study was worth 20% of the final course grade. Students were told that they must complete the assignment of learning the anatomical structures of the human skull, but that they could decide whether or not they wanted their data to be included. This opportunity did not conflict, in terms of time or content, with any other assignments for the course. Students were introduced to the study by way of a mandatory orientation session that took place the first Saturday after classes began. Half of the class of 87 students came to the orientation session. The informed consent form was provided to the students online, and they were then provided a link from their Blackboard site to the signature form. A total of 86 students signed the online consent form in order to volunteer to have their data included in the study. Once a signature was received, they were then led to the online demographic questionnaire (Appendix A). Students were also directed to an online Pre-Test that was made available on the Blackboard site. Once the Pre-Test scores were obtained, students were randomly assigned to either group A, the 2D group, or group B, the 3D group. This was done by stratifying the scores from highest to lowest. The two highest scores were randomly assigned an A or B by


pulling either an "A" or "B" marble blindly from a dish. This process was continued until all students were assigned to a group, which increased power and maintained random assignment of students to groups. Once the students were assigned to their groups, an area within Blackboard was created for each of the 2D and 3D groups. This ensured that only those in the 2D group could see the 2D PowerPoint and AVI file, and those in the 3D group could see only the 3D PowerPoint and AVI file. The 2D group had 21 students and the 3D group had 21 assigned to it. A total of 66 students completed the online demographic survey after signing the consent form. The majority of the students (95.5%) indicated an age range of 18-24 years (see Table H1). A majority of students (89.4%) had taken a prior anatomy course, and most had also taken a dissection course prior to this one (80%). Most who had taken an anatomy course prior to this one had taken that course less than five years ago (98.4%) and while in high school (92.5%). Nursing students made up 41.5% of the students in the summer session. Pre-med and wellness students accounted for 15.4% and 13.8% of the students, respectively. A large percentage (29.2%) indicated "other" for area of study; this was broken down into physical therapy, athletic training and exercise science, medical technology, pharmacy, nutrition, biomedical sciences, and occupational therapy.


Table H1. Demographic survey results, summer (question: majority response, response percent, response total)
Please indicate your age range: 18-24, 95.5%, 63
Have you had a human anatomy course prior to this one? Yes, 89.4%, 59
If yes, how long ago was the course? Less than 5 years, 98.4%, 63
Please indicate your area of study: Nursing, 41.5%, 27
Have you had a course prior to this class in which you dissected biological materials? Yes, 80.0%, 52
If yes, please indicate the type of course it was (you may check all that apply): H.S. general biology, 92.5%, 49
How old is the computer you will use most of the time to access this course? 1-3 years old, 53.0%, 35
Please rate your level of proficiency using the following software:
  Web browsers: Advanced, 61%, 40
  Email: Advanced, 67%, 44
  Instant messaging/chat: Advanced, 63%, 41
  Word processing: Advanced, 52%, 34
  Spreadsheets: Intermediate, 44%, 29
  Presentation software: Intermediate, 50%, 33


Of the 66 students who originally signed the consent form to be included in the study, data from only 21 pairs, or a total of 42 students, could be included. There were a number of reasons why much of the data either could not be used or was not collected. Nearly half of all students in the course did not show up for the laboratory practical examination. There was a great deal of confusion as to what their responsibilities were. This may have been due to the fact that only half of all students attended the mandatory orientation session. The 3D glasses were distributed to all students who attended the orientation session, with the thought that this would be the easiest way to distribute the 3D glasses effectively to an online group of students. Students were asked to then watch the Announcement board, as well as to check their email, in order to learn which group they were assigned to and thus to know whether or not they would require the 3D glasses. Many students (approximately 20%), however, did not get their 3D glasses. Repeated efforts were made to contact these students via email and announcements within Blackboard to make arrangements for them to get their 3D glasses. Glasses were even left in an instructor's mailbox for students to pick up, but only two students came to campus to get them. Therefore, those students who did not have glasses, or who did not show up for the laboratory practical, had their data removed from the analysis. In addition, as students showed up for the practical examination, they were asked whether or not they had used their glasses; those who stated "no" had their data removed. Others were removed from the study when they were seen studying materials other than what was provided. For example, one student had a different anatomy book


with her and was studying it before she entered the examination room. Table H2 lists those volunteers who had been assigned to groups but then had to have their data, and consequently their group mates' data, removed from the study.

Table H2. Information on volunteers deleted from study
Pre-test score / reason for deletion / group / age / prior HA experience / study area / prior dissection experience
24 / no glasses / B / 18-24 / Yes / pre-med / Yes
18 / ns / A / 18-24 / No / exercise sci. / Yes
17 / used book / A / 18-24 / Yes / pre-med / Yes
16 / no glasses / B
13 / ns / B / 18-24 / Yes / nursing / Yes
10 / ns / B / 18-24 / Yes / nursing / Yes
10 / no glasses / B / 18-24 / Yes / nursing / Yes
10 / no glasses / B / 18-24 / Yes / wellness / No
9 / no glasses / B / 18-24 / No / wellness / Yes
Note: ns = no show.

There were no issues relating to accessing the PowerPoints or AVI movies for this pilot. Students did not indicate that they could not find, see, or download them. The same is true for the study guide list of questions and relationships: no one indicated that they could not find or see the list. Many students complained vigorously, after the examination, that there was too much material to learn in a short time. This may have been due to the fact that the summer term for the anatomy and physiology II lab was


only six weeks long, and there was, in general, much material to be covered during that time. The study itself, partly because it was different from their other materials and partly because they had to go out of their way to get the 3D glasses, may also have added to their frustration.

Results

Descriptive statistics (Table H3) for each group provided the following information. For the 2D group, the ID test had a mean of 56.28 and a standard deviation of 31.4, with a negative skewness of -0.362, a negative kurtosis of -1.02, and no outliers. For the 3D group, the ID test had a mean of 62.23 and a standard deviation of 25.09, with a negative skewness of -0.70, a negative kurtosis of -0.78, and no outliers.

Table H3. Descriptive statistics for identification scores
2D group: mean 56.28, standard deviation 31.40, skewness -0.362, kurtosis -1.02
3D group: mean 62.23, standard deviation 25.09, skewness -0.70, kurtosis -0.78

Scores for the relationship portion of the practical examination are listed below in Table H4. Again, the values look roughly normally distributed, with slight skewness. The means appear to differ.


Table H4. Descriptive statistics for relationship scores
2D group: mean 50.9, standard deviation 28.94, skewness 0.28, kurtosis -1.37
3D group: mean 54.38, standard deviation 24.36, skewness -0.09, kurtosis -1.95

Doubly MANOVA Repeated Measures, Summer

A doubly-multivariate (Doubly MANOVA) repeated measures analysis was run on the data for the variables ID2D and ID3D (scores on the identification examination) and Rel2D and Rel3D (scores on the relationship portion of the examination), along with Differential Item Functioning (DIF), to see if and where differences exist. When reviewing the Doubly MANOVA results, there is a significant difference for the main effect of the treatment groups 2D and 3D (Wilks' lambda = 0.11153384, p < .0001). However, there is no significant treatment*outcome effect (Wilks' lambda = 0.96793731, p = .7338). This suggests that the difference between the 2D and 3D treatment groups does not vary across the dependent variable measures of identification and relationship. When graphed, there is a visible between-treatment difference; however, there is no significant difference when comparing treatments across the identification and relationship outcomes. Please refer to Figure H1. The correlation analysis for the variables demonstrates a similar correlation between ID2D and Rel2D (0.82320) and between ID3D and Rel3D (0.82163). This suggests that


the outcome measures correlate similarly under the 2D and 3D treatments.

Figure H1. Graph of group differences (mean identification and relationship scores for the 2D and 3D groups), summer pilot.

Qualitative Themes

Responses to the qualitative questions on the user perspective survey were reviewed for themes. Table H5 below lists the most common themes to occur for the two questions, "What did you like most about using the PowerPoints?" and "What did you like least about using the PowerPoints?" Answers to the questions were divided into responses made by those from the 2D group and those from the 3D group.


Table H5. Themes from qualitative questions
What did you like MOST about using the PowerPoints?
  2D group: clarity of images; pictures and graphics; narration; labeling
  3D group: clarity of images; easy to learn from; color; something; better depth perception; more information
What did you like LEAST about using the PowerPoints?
  2D group: difficult to see depth; too much information; hurt eyes; images blurry/not clear; confused by orientation
  3D group: hurt eyes; difficult to find position/orientation; images blurry/not clear

Table H6. Additional information from questionnaire
Describe how you felt while working with the PowerPoint images.
Completely confused: 0%
A little confused: 38.3%
Everything made sense: 46.7%
Don't know: 5.0%
Other (please specify): 10.0%


Table H7. Additional information from questionnaire
Please rate your level of agreement with the following statements. (Strongly agree / Agree / Disagree / Strongly disagree / Not applicable; response average)
In general the images were easy to use: 20% (12) / 65% (39) / 12% (7) / 2% (1) / 2% (1); average 2.00
I think this activity was fun: 8% (5) / 52% (31) / 27% (16) / 5% (3) / 8% (5); average 2.53
I could see the images clearly: 12% (7) / 58% (35) / 23% (14) / 3% (2) / 3% (2); average 2.28
The graphics were of high quality: 13% (8) / 67% (40) / 15% (9) / 3% (2) / 2% (1); average 2.13
It was easy to find specific information: 12% (7) / 50% (30) / 32% (19) / 5% (3) / 2% (1); average 2.35
I would like to use similar images to study other areas of human anatomy: 22% (13) / 55% (33) / 15% (9) / 7% (4) / 2% (1); average 2.12
I would use this PowerPoint as a primary reference: 17% (10) / 58% (35) / 18% (11) / 5% (3) / 2% (1); average 2.17
I found the PowerPoint images to be a waste of my time: 0% (0) / 12% (7) / 52% (31) / 35% (21) / 2% (1); average 3.27


I would rather study only images from a book: 3% (2) / 13% (8) / 62% (37) / 17% (10) / 5% (3); average 3.07
I feel that I can learn as much from PowerPoint images as from doing a real dissection: 0% (0) / 28% (17) / 62% (37) / 8% (5) / 2% (1); average 2.83
I was often confused as to where to go to find what I was looking for: 2% (1) / 38% (23) / 47% (28) / 12% (7) / 2% (1); average 2.73
Looking at these images hurt my eyes: 10% (6) / 23% (14) / 47% (28) / 15% (9) / 5% (3); average 2.82
Total respondents: 60

Table H8. Preferred method to learn human anatomy
Tell us which method you would prefer to use to learn human anatomy.
Textbooks only: 5.0%
PowerPoints only: 11.7%
Actual dissection: 16.7%


Some combination of the above: 65%
Other (please specify): 1.7%

Table H9. Task rate
Compared to what you may have anticipated, this task was...
Much slower: 3.3%
Slow: 15.0%
Just right: 51.7%
Fast: 20.0%
Much faster: 10.0%

Table H10. Did the PowerPoint add to ease of learning human anatomy?
Do you feel the PowerPoint added to your ease of learning the human anatomy material?
No: 15.0%
Yes: 58.3%
I'm not sure: 26.7%


Appendix I: Informed Consent Form for IRB

Space below reserved for IRB stamp. Please leave blank.

Informed Consent for an Adult
Social and Behavioral Sciences
University of South Florida
Information for People Who Take Part in Research Studies

Researchers at the University of South Florida (USF) study many topics. To do this, we need the help of people who agree to take part in a research study.

Title of research study: The Effectiveness and User Perception of 3-Dimensional Digital Human Anatomy in an Online Undergraduate Anatomy Laboratory.
Person in charge of study: Amy J. Hilbelink, M.S.
Where the study will be done: Gross Anatomy Laboratory, USF Health Sciences Center

Should you take part in this study?
This form tells you about this research study. You can decide if you want to take part in it. You do not have to take part. Reading this form can help you decide. As a participant of this study, I would like to provide you with the following informed consent information. Since a large portion of this study will be done electronically, the return of this consent form will also be electronic. So, after reading this message and being sure you understand it, simply type your name in the space at the end of this form and continue with the survey as a way to provide me with your consent. The name you enter here will not be attached to any data, but will be used only to record that you have given consent. Thank you!

Before you decide:
- Read this form.
- Talk about this study with the person in charge of the study or the person explaining the study. You can have someone with you when you talk about the study.
- Find out what the study is about.

You can ask questions:
- You may have questions this form does not answer. If you do, ask the person in charge of the study or study staff as you go along.
- You don't have to guess at things you don't understand. Ask the people doing the study to explain things in a way you can understand.


After you read this form, you can:
- Take your time to think about it.
- Have a friend or family member read it.
- Talk it over with someone you trust.

It's up to you. If you choose to be in the study, then you can sign the form. If you do not want to take part in this study, do not sign the form.

Why is this research being done?
The purpose of this study is to find out how effective 3-dimensional digital human anatomy software is in an online undergraduate anatomy class.

Why are you being asked to take part?
We are asking you to take part in this study because the researcher would like to determine if test scores are affected in an online version of human anatomy when 3D images are employed.

How long will you be asked to stay in the study?
You will be asked to spend about one week in this study learning the online materials. This will give you time to learn some anatomy online. You will also be required to come to the gross anatomy laboratory to take a practical examination.

How often will you need to come for study visits?
A study visit is one you have with the person in charge of the study or study staff. You will need to come for one study visit in all. You will be asked to come to the gross anatomy laboratory for one morning for one hour. The date and time will be determined based upon your course schedule. The one study visit, which consists of the lab practical examination, will take one hour. Prior to the visit, the person in charge of the study or staff will:
- Give you online access to all study materials you will need to learn the anatomy of the skull. The information will consist of a study guide list of anatomical structures to learn.

Everyone who volunteers for the study will be asked to come to the Health Sciences Center in order to take a practical examination of anatomical structures of the skull. An online survey will be administered to everyone participating in the study. This survey can be taken on your own time and should take only approximately 10 minutes.

What other choices do you have if you decide not to take part?
If you decide not to take part in this study, that is okay. Your grade in this course will not be adversely affected.

How do you get started?
If you decide to take part in this study, you will need to sign this consent form. You will be asked to take a demographic survey; it is a brief survey administered online. In addition, before we assign you to a study group (2D or 3D), you will be given a simple pre-test that will consist of 25 questions on basic anatomy.

What will happen during this study?


The primary purpose of this study will be to determine the effectiveness of 3-dimensional (3D) human anatomy stereo images in an online undergraduate anatomy/physiology laboratory in encouraging appropriate understanding of anatomical facts and relationships as compared to an actual dissection. A secondary goal will be to measure the level of student satisfaction with the images as well as how user-friendly the 3D images are to use.

Plan of Study
Students enrolled in the anatomy/physiology undergraduate distance course will be randomly assigned to one of two treatment groups based upon pre-test scores and proximity to campus. All student volunteers will be given a brief demographic survey and will also be given access to either a 2D or 3D PowerPoint on Blackboard that will include 2-dimensional or 3-dimensional labeled images and a narration that leads them through the PowerPoint. The students will also all be given an identical list of anatomical structures to study and learn. Students assigned to either group will be required to come to the Health Sciences Center to take a laboratory practical examination that will consist of students identifying labeled structures. All participants will be given an online survey to take on their own that will assess the satisfaction level of students who worked with the images.

Here is what you will need to do during this study.
Student volunteers will be required to study the PowerPoints that will be posted online. This will take a few hours. You will be given up to a week to study. In addition, you will be required to come to the Health Sciences Center lab for one hour on a specified day of the week to complete the practical examination.

Will you be paid for taking part in this study?
We will not pay you for the time you volunteer in this study.

What will it cost you to take part in this study?
It will not cost you anything to take part in the study. We will provide you with all the materials you will need.

What are the potential benefits if you take part in this study?
All participants will have the opportunity to visit the human anatomy laboratory to take an identification test on a dissected human skull. In addition, those students who are assigned to the 3-dimensional group of the study will have the opportunity to work in detail with 3-dimensional digital human anatomy software.

What are the risks if you take part in this study?
There are no known risks to those who take part in this study.

What if you get sick or hurt while you are in the study?
If you are harmed because you took part in the study:


We will pay your medical costs if you were harmed because our staff did something they should not have done. Florida law limits how much USF is able to pay. USF cannot pay for lost wages, disability, or discomfort. Read Florida Statute 768.28 to find out how much USF is able to pay. You can get a copy of the law by calling USF Research Compliance at (813) 974-5638. Call the USF Self-Insurance Programs (SIP) at (813) 974-8008 and ask them to look into what happened.

What will we do to keep your study records private?
Federal law requires us to keep your study records private. However, certain people may need to see your study records. By law, anyone who looks at your records must keep them confidential. The only people who will be allowed to see these records are:
- The study staff.
- People who make sure that we are doing the study in the right way and that we protect your rights and safety: the USF Institutional Review Board (IRB) and the United States Department of Health and Human Services (DHHS).
We may publish what we find out from this study. If we do, we will not use your name or anything else that would let people know who you are.

What happens if you decide not to take part in this study?
You should only take part in this study if you want to take part. If you decide not to take part:
- You won't be in trouble or lose any rights you normally have.
- You will still get the same services you would normally have.

What if you join the study and then later decide you want to stop?
If you decide you want to stop taking part in the study, tell the study staff as soon as you can.

Are there reasons we might take you out of the study later on?
Even if you want to stay in the study, there may be reasons we will need to take you out of it. You may be taken out of this study if you are not coming for your study visits when scheduled.

You can get the answers to your questions.
If you have any questions about this study, call Amy J. Hilbelink at 974-3471. If you have questions about your rights as a person who is taking part in a study, call USF Research Compliance at (813) 974-5638.

Consent to Take Part in this Research Study
It's up to you. You can decide if you want to take part in this study.


I freely give my consent to take part in this study. I understand that this is research. I have received a copy of this consent form.

Statement of Person Obtaining Informed Consent
I have carefully explained to the person taking part in the study what he or she can expect. The person who is giving consent to take part in this study:
- Understands the language that is used.
- Reads well enough to understand this form, or is able to hear and understand when the form is read to him or her.
- Does not have any problems that could make it hard to understand what it means to take part in this study.
- Is not taking drugs that make it hard to understand what is being explained.

To the best of my knowledge, when this person signs this form, he or she understands:
- What the study is about.
- What needs to be done.
- What the potential benefits might be.
- What the known risks might be.
- That taking part in the study is voluntary.

If you agree to participate in this study, please continue on to the online consent form: http://www.surveymonkey.com/s.asp?u=885492533954


About the Author

Ms. Amy JoAnne Slifko-Hilbelink received her Bachelor of Science degree in Biological Sciences from Marshall University, Huntington, West Virginia, and her Master of Science degree in Natural Sciences from the University of South Florida, Tampa. Ms. Hilbelink has had an interest in the biological sciences since high school. Her work in the USF Health Sciences Center sparked her interest in medical and health education. She chose the field of Instructional Technology for her doctorate in order to investigate how health and biology courses could be successfully administered online. Ms. Hilbelink has served as Interim Director for the Virtual Instructional Team for the Advancement of Learning (VITAL) for the College of Education, and has co-taught graduate courses at USF on Web Design and Distance Learning Theory. This dissertation marks the culmination of her studies for a Doctor of Philosophy degree in Instructional Technology, within the Department of Secondary Education.

