
Three studies of problem solving in collaborative software development


Material Information

Title:
Three studies of problem solving in collaborative software development
Physical Description:
Book
Language:
English
Creator:
Domino, Madeline Ann
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla
Publication Date:

Subjects

Subjects / Keywords:
Systems
Distributed work
Cognition
Agile methods
Conflict
Dyads
Dissertations, Academic -- Business Administration -- Doctoral -- USF
Genre:
bibliography (marcgt)
theses (marcgt)
non-fiction (marcgt)

Notes

Abstract:
ABSTRACT: A potential solution to producing quality software in an acceptable time frame may be found by using the newer, innovative methods, such as collaborative software development. The purpose of this dissertation is to examine the individual developer characteristics, developmental settings, collaborative methods and the processes during development that impact collaborative programming performance and satisfaction outcomes. Understanding individual differences in performance in the collaborative development setting is important, since it may help us understand how the collaborative setting may raise the lowest level of performance to much higher levels, as well as how to select individuals for collaborative development. Exploring the impact of the virtual setting on collaborative development processes is important as it may help us improve performance outcomes in different work settings. Investigating how adaptations of pair programming impact collaborative processes may assist in implementing changes to the method that enhance quality and individual satisfaction. A multi-phase methodology is used, consisting of an intensive process study (Study 1) and two laboratory experiments (Studies 2 and 3). Study 1 illustrates that collaborative programming (pair programming) outcomes are moderated by both individual developer differences and the processes used during development. While cognitive ability and years of IT experience are important factors in performance, the impacts of conflict and the faithful appropriation of the method are highlighted. Distributed cognition is used as a theoretical foundation for explaining higher performance. Study 2 findings suggest that while collaborative programming is possible in a virtual setting, performance is negatively impacted. Face-to-face programmers have significantly higher levels of task performance, as well as satisfaction with the method, when compared to virtual programmers. Study 3 results suggest that the use of structured problem solving (preparing test cases before writing code) may be a key factor in producing higher quality code, while collaboration may be conducive to higher levels of developer satisfaction. By understanding how, why and when collaborative programming techniques produce better performance outcomes and what factors contribute to that success, we add to the body of knowledge on methodologies in the MIS domain.
Thesis:
Dissertation (Ph.D.)--University of South Florida, 2004.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Madeline Ann Domino.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 223 pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001787672
oclc - 124082402
usfldc doi - E14-SFE0001428
usfldc handle - e14.1428
System ID:
SFS0025748:00001




Full Text

Three Studies of Problem Solving in Collaborative Software Development

by

Madeline Ann Domino

A dissertation submitted in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
Department of Information Systems and Decision Science
College of Business Administration
University of South Florida

Co-Major Professor: Alan R. Hevner, Ph.D.
Co-Major Professor: Rosann Webb Collins, Ph.D.
Cynthia F. Cohen, Ph.D.
Donald J. Berndt, Ph.D.

Date of Approval: December 15, 2004

Keywords: systems, distributed work, cognition, agile methods, conflict, dyads

Copyright 2006, Madeline Ann Domino

Dedication

To Mother, In Loving Memory

Acknowledgements

I am very appreciative of the many individuals who offered support during my program of study. I would like to thank my father Sam J. Domino, CWO USMC, retired, for his love and continual encouragement. He has instilled in his children the value of education, a strong work ethic and love of learning. My brothers were also there for me during this time. In particular, my brother, Dr. Joseph V. Domino, has provided the most valuable guidance and wisdom during my time in the doctoral program.

A special thank you to my committee for helping me to navigate through the dissertation process, each of whom I will remember for their individual talents: Dr. Alan R. Hevner, who first introduced me to my topic in the MIS Technical Seminar and who has helped me to focus on the big picture; Dr. Rosann Webb Collins, for her tireless efforts in providing feedback during every phase of the program and who so graciously has shared her knowledge with me; Dr. Cynthia Cohen, for her quick study, encouragement and enthusiasm; and Dr. Donald Berndt, for his creativity, keen insights and ability to make me laugh.

This dissertation would not have been possible without the assistance of many other individuals, all of whom assisted me in a variety of capacities. These include: Dr. J. Ellis Blanton, Dr. Richard Will, Dr. Harold Webb, Dr. Joni Jones, Dr. Gary Holstrum, Dr. James Hunton, Dr. Stephanie Bryant, Dr. Robert West, Martin Springer, Gary Poe, Richard Callaby, Marc Coleman, Akin Hill, Tennille Roberts, Keith Garrison, Lori Wilder, Judy Oates, Tim Klaus, Murray Cohen, Cynthia Cano, Dr. John Fisher, Dr. Steven Wingreen, Sandra Newton, Steven Garrett, Francisco, George Steigner, Wesly Austin, Jay Mulki, Fernando Jarmarillo, Marsha Hynes and Eddie Giles. A special thanks to Dr. Robert L. Anderson and Dr. Paul Cheney who encouraged me to enter the doctoral program of study. And last but not least, I am grateful for Hunter, whose high spirits and affection kept me going and reminded me of the truly important things in life.

Table of Contents

List of Tables  v
List of Figures  ix
Abstract  xi

Chapter One  Introduction  1
    Problem and Its Importance  1
    Purpose of the Study  2
    Research Questions  3
    Results  3
    Contributions  4
    Overview of the Dissertation  4

Chapter Two  Literature Review  5
    Background on Software Development  5
    Software Development Methodologies  6
    The Business Case for Innovative Development  7
    Collaborative Programming  7
    Prior Research on Software Development Methods  9
    Theoretical Basis for Research  10
    Collaborative Work  10
    Collaboration  10
    Brainstorming  11
    Cognitive Ability  12
    Cognition and Performance  13
    Task Domain Experience and Performance  13
    Performance  14
    Conflict  15
    Conflict in ISD  17
    Task Design  18
    Virtual Development Setting  20
    Media Richness and Communication Modalities  21
    Faithfulness to Method  21
    Distributed Cognition  22
    Satisfaction  24

Chapter Three  Research Design and Methodology  25

    Research Approach  25
    High Level Research Model  25
    Research Methodology  26
    Study 1  26
    Study 2  27
    Study 3  28
    Measurement  28
    Demographics  28
    Cognitive Ability  28
    Conflict Handling Style  29
    Years of IT Experience  29
    Collaborative Method  29
    Faithfulness to Method  30
    Task Conflict  30
    Pair Task Performance  30
    Individual Task Performance  31
    Individual Satisfaction with the Method  31
    Experimental Tasks  31

Chapter Four  Study 1  32
    Overview  32
    High Level Research Model  32
    Study 1 Research Models  33
    High Level Research Question  34
    Study 1 Phase 1  35
    Research Question  35
    Research Design  35
    Data Collection  35
    Subject Demographics  36
    Measures  36
    Data Analysis  38
    Conclusions  43
    Limitations  43
    Study 1 Phase 2  44
    Research Question and Hypotheses  44
    Research Design  45
    Data Collection  46
    Subject Demographics  46
    Measurement  46
    Data Analysis  48
    Conclusions  52
    Limitations  53

Chapter Five  Study 2  54
    Overview  54
    High Level Research Model  54

    Study 2 Research Models  55
    Research Question and Hypothesis  56
    Research Design  59
    Data Collection  60
    Measures  62
    Subject Demographics  64
    Data Analysis  67
    Dependent Variables  69
    Pair Task Performance (Test Cases)  70
    Pair Task Performance (Code)  71
    Pair Performance (Average Satisfaction with Method)  71
    Statistical Test of Main Effects  74
    Test of Hypotheses 1-2  74
    Test of Covariates (Hypotheses 3-7)  81
    Results of Study 2  99
    Limitations  101

Chapter Six  Study 3  102
    Overview  102
    High Level Research Model  102
    Study 3 Research Models  103
    Research Question and Hypothesis  104
    Research Design  107
    Data Collection  108
    Subject Demographics  110
    Measures  112
    Data Analysis  113
    Dependent Variables  115
    Statistical Test of Main Effects  117
    Test of Hypotheses 1-3  120
    Test of Hypothesis 4  122
    Results of Study 3  146
    Limitations  147

Chapter Seven  Discussion  148
    Significant Findings  148
    Study 1 Findings  148
    Study 2 Findings  150
    Study 3 Findings  150
    Overall Study Findings  151
    Contributions  155
    Contributions to Researchers  155
    Contributions to Practitioners  157
    Limitations to Study  157
    Opportunities for Future Research  158

References  159

Appendices  172
    Appendix A  Informed Research Consent Form  173
    Appendix B  Task I: Compute Mowing Time  176
    Appendix C  Task II: Discount Invoice Module  178
    Appendix D  Task III: Sales Report Module  179
    Appendix E  Questionnaire Study 1, Roch Interpersonal Conflict Inventory  180
    Appendix F  Questionnaire Study 1, Satisfaction Scale  185
    Appendix G  Study 1 Phase 1 Template  186
    Appendix H  Initial Questionnaire Study 2  188
    Appendix I  Questionnaire II and Final Questionnaire Study 2  195
    Appendix J  Final Questionnaire Study 2  205
    Appendix K  Questionnaire Study 3  209

About the Author  End Page

List of Tables

Table 4.1  Correlations between Performance and Cognitive Ability, Episodes of Task Conflict and Faithfulness to the Methodology  40
Table 4.2  Performance by Pair, Aggregate Score All Tasks  41
Table 4.3  Performance by Pair, Aggregate Score Task I  41
Table 4.4  Performance by Pair, Aggregate Score Task II  41
Table 4.5  Performance by Pair, Aggregate Score Task III  42
Table 4.6  Data Analysis Results by Selected Pairs  43
Table 4.7  Phase 1 Summary of Research Questions and Findings  44
Table 4.8  Percentage of Inter-coder Agreement Distributed Cognition  49
Table 4.9  Performance Ranks by Pair  50
Table 4.10  Performance Outcomes by Pair Task III  50
Table 4.11  Correlations between Test Case Performance and Code Performance  50
Table 4.12  Data Analysis Results by Selected Pairs  52
Table 4.13  Phase 2 Summary of Research Questions and Findings  53
Table 5.1  Episodes of Task Conflict by Task  66
Table 5.2  Task II Episodes of Conflict  66
Table 5.3  Task III Episodes of Conflict  66
Table 5.4  Factor Analysis of Faithfulness to Method and Individual Satisfaction with Method  68
Table 5.5  Summary of Pair Task Performance by Task (Test Cases)  69
Table 5.6  Summary of Pair Task Performance by Group (Test Cases)  69
Table 5.7  Summary of Pair Task Performance Outcomes (Code)  70
Table 5.8  Summary of Pair Task Performance by Group (Code)  70
Table 5.9  Summary of Average Pair Satisfaction with Method  71
Table 5.10  Summary of Average Pair Satisfaction with Method by Group  71
Table 5.11  Shapiro-Wilk Test for Normality for the Dependent Variables (Test Cases)  72
Table 5.12  Shapiro-Wilk Test for Normality for the Dependent Variables (Code)  72
Table 5.13  Shapiro-Wilk Test for Normality for the Dependent Variables (Average Satisfaction with the Method)  72
Table 5.14  Kruskal-Wallis Median Rank for Pair Task Performance  74
Table 5.15  Kruskal-Wallis Test Statistics for Pair Task Performance (Test Cases)  74
Table 5.16  Median Test Frequencies by Individual Task Performance (Test Cases)  75
Table 5.17  Test Statistics for Median Test for Pair Task Performance (Test Cases)  75

Table 5.18  Kruskal-Wallis Median Rank for Pair Task Performance (Code)  76
Table 5.19  Kruskal-Wallis Test Statistics for Pair Task Performance (Code)  76
Table 5.20  Median Test Frequencies by Individual Task Performance (Code)  76
Table 5.21  Test Statistics for Median Test for Pair Task Performance (Code)  77
Table 5.22  One Way ANOVA Average Individual Satisfaction with the Method  78
Table 5.23  Comparison of Means Average Individual Satisfaction with the Method  78
Table 5.24  Kruskal-Wallis Medians Rank for Individual Average Satisfaction with the Method  79
Table 5.25  Kruskal-Wallis Test Statistics for Individual Average Satisfaction with the Method  79
Table 5.26  Median Test Frequencies for Individual Average Satisfaction with the Method  80
Table 5.27  Median Test Frequencies for Individual Average Satisfaction with the Method  80
Table 5.28  Pearson Correlation Matrix Test Cases Task II  82
Table 5.29  Pearson Correlation Matrix Test Cases Task III  83
Table 5.30  Pearson Correlation Matrix Code Task II  84
Table 5.31  Pearson Correlation Matrix Code Task III  85
Table 5.32  Kruskal-Wallis Medians Rank for Hypothesis 3 (Test Cases)  86
Table 5.33  Kruskal-Wallis Test Statistics for Hypothesis 3 (Test Cases)  87
Table 5.34  Median Test Frequencies for Hypothesis 3 (Test Cases)  87
Table 5.35  Test Statistics for Median Test for Hypothesis 3 (Test Cases)  87
Table 5.36  Kruskal-Wallis Medians Rank for Hypothesis 3 (Test Cases)  88
Table 5.37  Kruskal-Wallis Test Statistics for Hypothesis 3 (Test Cases)  88
Table 5.38  Median Test Frequencies for Hypothesis 3 (Test Cases)  88
Table 5.39  Test Statistics for Median Test for Hypothesis 3 (Test Cases)  88
Table 5.40  Kruskal-Wallis Medians Rank for Hypothesis 3 (Code)  89
Table 5.41  Kruskal-Wallis Test Statistics for Hypothesis 3 (Code)  89
Table 5.42  Kruskal-Wallis Medians Rank for Hypothesis 5 (Test Cases)  90
Table 5.43  Kruskal-Wallis Test Statistics for Hypothesis 5 (Test Cases)  90
Table 5.44  Median Test Frequencies for Hypothesis 5 (Test Cases)  90
Table 5.45  Test Statistics for Median Test for Hypothesis 5 (Test Cases)  91
Table 5.46  Kruskal-Wallis Medians Rank for Hypothesis 5 (Code)  91
Table 5.47  Kruskal-Wallis Test Statistics for Hypothesis 5 (Code)  92
Table 5.48  Median Test Frequencies for Hypothesis 5 (Code)  92
Table 5.49  Test Statistics for Median Test for Hypothesis 5 (Code)  92
Table 5.50  Kruskal-Wallis Medians Rank for Hypothesis 6 (Test Cases)  93
Table 5.51  Kruskal-Wallis Test Statistics for Hypothesis 6 (Test Cases)  93
Table 5.52  Median Test Frequencies for Hypothesis 6 (Test Cases)  94
Table 5.53  Test Statistics for Median Test for Hypothesis 6 (Test Cases)  94
Table 5.54  Kruskal-Wallis Medians Rank for Hypothesis 6 (Code)  95
Table 5.55  Kruskal-Wallis Test Statistics for Hypothesis 6 (Code)  95
Table 5.56  Median Test Frequencies for Hypothesis 6 (Code)  95
Table 5.57  Test Statistics for Median Test for Hypothesis 6 (Code)  96

Table 5.58  Kruskal-Wallis Medians Rank for Hypothesis 7 (Test Cases)  96
Table 5.59  Kruskal-Wallis Test Statistics for Hypothesis 7 (Test Cases)  97
Table 5.60  Median Test Frequencies for Hypothesis 7 (Test Cases)  97
Table 5.61  Test Statistics for Median Test for Hypothesis 7 (Test Cases)  97
Table 5.62  Kruskal-Wallis Medians Rank for Hypothesis 7 (Code)  98
Table 5.63  Kruskal-Wallis Test Statistics for Hypothesis 7 (Code)  98
Table 5.64  Median Test Frequencies for Hypothesis 7 (Code)  98
Table 5.65  Test Statistics for Median Test for Hypothesis 7 (Code)  99
Table 5.66  Summary of Study 2 Hypotheses and Results  100
Table 6.1  Factor Analysis of Faithfulness to Method and Individual Satisfaction with Method  115
Table 6.2  Shapiro-Wilk Test for Normality for Dependent Variables  117
Table 6.3  Kruskal-Wallis Median Rank for Individual Task Performance  119
Table 6.4  Kruskal-Wallis Test Statistics for Individual Task Performance  119
Table 6.5  Median Test Frequencies by Individual Task Performance  119
Table 6.6  Test Statistics for Median Test for Individual Task Performance  120
Table 6.7  Median Test Frequencies for Hypothesis 1  121
Table 6.8  Test Statistic for Median Test for Hypothesis 1  121
Table 6.9  Median Test Frequencies Hypothesis 2  121
Table 6.10  Test Statistics for Median Test for Hypothesis 2  121
Table 6.11  Median Test Frequencies for Hypothesis 3  122
Table 6.12  Test Statistics for Median Test for Hypothesis 3  122
Table 6.13  Shapiro-Wilk Test for Normality Individual Satisfaction with the Method  123
Table 6.14  Kruskal-Wallis Median Rank for Hypothesis 4  124
Table 6.15  Kruskal-Wallis Test Statistics for Hypothesis 4  124
Table 6.16  Test Statistics for Median Test for Hypothesis 4  124
Table 6.17  Test Statistics for Median Test for Hypothesis 4  125
Table 6.18  Kruskal-Wallis Median Rank for Hypothesis 4  125
Table 6.19  Kruskal-Wallis Test Statistics for Hypothesis 4  126
Table 6.20  Test Statistics for Median Test for Hypothesis 4  126
Table 6.21  Test Statistics for Median Test for Hypothesis 4  126
Table 6.22  Pearson Correlation Matrix for Task II  127
Table 6.23  Pearson Correlation Matrix for Task III  128
Table 6.24  Kruskal-Wallis Median Rank for Hypothesis 5 (Task II)  129
Table 6.25  Kruskal-Wallis Test Statistics for Hypothesis 5 (Task II)  129
Table 6.26  Median Test for Hypothesis 5 (Task II)  129
Table 6.27  Median Test Statistic for Hypothesis 5 (Task II)  130
Table 6.28  Kruskal-Wallis Median Rank for Hypothesis 5 (Task III)  130
Table 6.29  Kruskal-Wallis Median Rank for Hypothesis 5 (Task III)  130
Table 6.30  Median Test Frequencies for Hypothesis 5 (Task III)  131
Table 6.31  Test Statistic for Median Test for Hypothesis 5 (Task III)  131
Table 6.32  Kruskal-Wallis Median Rank for Hypothesis 6 (Task II)  132
Table 6.33  Kruskal-Wallis Median Rank for Hypothesis 6 (Task II)  132
Table 6.34  Median Test Frequencies for Hypothesis 6 (Task II)  132
Table 6.35  Test Statistics for Median Test for Hypothesis 6 (Task II)  133

Table 6.36  Kruskal-Wallis Median Rank for Hypothesis 6 (Task III)  133
Table 6.37  Kruskal-Wallis Test Statistic for Hypothesis 6 (Task III)  133
Table 6.38  Median Test Frequencies for Hypothesis 6 (Task III)  133
Table 6.39  Test Statistic for Median Test for Hypothesis 6 (Task III)  134
Table 6.40  Kruskal-Wallis Median Rank for Hypothesis 7 (Task II)  134
Table 6.41  Kruskal-Wallis Test Statistic for Hypothesis 7 (Task II)  134
Table 6.42  Median Test Frequencies for Hypothesis 7 (Task II)  135
Table 6.43  Test Statistics Median Test for Hypothesis 7 (Task II)  135
Table 6.44  Kruskal-Wallis Median Rank for Hypothesis 7 (Task III)  135
Table 6.45  Kruskal-Wallis Test Statistic for Hypothesis 7 (Task III)  136
Table 6.46  Median Test Frequencies for Hypothesis 7 (Task III)  136
Table 6.47  Test Statistic for Median Test for Hypothesis 7 (Task III)  136
Table 6.48  Kruskal-Wallis Median Rank for Hypothesis 8  137
Table 6.49  Kruskal-Wallis Test Statistics for Hypothesis 8  137
Table 6.50  Median Test Frequencies for Hypothesis 8  138
Table 6.51  Test Statistics for Median Test for Hypothesis 8  138
Table 6.52  Kruskal-Wallis Median Rank for Hypothesis 9  139
Table 6.53  Kruskal-Wallis Test Statistics for Hypothesis 9  139
Table 6.54  Median Test Frequencies for Hypothesis 8  139
Table 6.55  Test Statistics for Median Test for Hypothesis 9  140
Table 6.56  Kruskal-Wallis Median Rank for Hypothesis 10  140
Table 6.57  Kruskal-Wallis Test Statistics for Hypothesis 10  141
Table 6.58  Median Test Frequencies for Hypothesis 10  141
Table 6.59  Test Statistics for Median Test for Hypothesis 10  141
Table 6.60  Kruskal-Wallis Median Rank for Hypothesis 11  142
Table 6.61  Kruskal-Wallis Test Statistics for Hypothesis 11  142
Table 6.62  Median Test Frequencies for Hypothesis 11  142
Table 6.63  Test Statistics for Median Test for Hypothesis 11  143
Table 6.64  Kruskal-Wallis Median Rank for Hypothesis 12  143
Table 6.65  Kruskal-Wallis Test Statistics for Hypothesis 12  144
Table 6.66  Median Test Frequencies for Hypothesis 12  144
Table 6.67  Test Statistics for Median Test for Hypothesis 12  144
Table 6.68  Kruskal-Wallis Median Rank for Hypothesis 13  145
Table 6.69  Kruskal-Wallis Test Statistics for Hypothesis 13  145
Table 6.70  Median Test Frequencies for Hypothesis 13  145
Table 6.71  Test Statistics for Median Test for Hypothesis 13  146
Table 6.72  Summary of Hypotheses and Results  147
Table 7.1  Summary of Code Performance All Studies  152
Table 7.2  Summary of Code Performance by Study, Method and Setting  153
Table 7.3  Pearson Correlation Matrix All Studies  155

List of Figures

Figure 2.1  Heuristic Cycle of Human Problem Solving  11
Figure 2.2  The Most Important Individual Difference Predictors of Job Performance  14
Figure 2.3  Conflict Handling Styles  16
Figure 2.4  The Task/Job Role Dynamics Networks  19
Figure 2.5  The Complete Integrated Model  20
Figure 3.1  High Level Research Model  25
Figure 4.1  High Level Research Model  32
Figure 4.2  Study 1 Research Model: Phase 1  34
Figure 4.3  Study 1 Research Model: Phase 2  34
Figure 4.4  Subject Demographics  36
Figure 4.5  Descriptive Statistics Individual Differences  37
Figure 4.6  Descriptive Statistics Process Analysis  38
Figure 4.7  Descriptive Statistics Performance Outcomes  38
Figure 4.8  Subject Demographics  46
Figure 4.9  Descriptive Statistics Individual Differences  46
Figure 4.10  Descriptive Statistics Process Analysis  47
Figure 4.11  Coding Scheme  47
Figure 5.1  High Level Research Model  54
Figure 5.2  Study 2 Research Model: Main Effects  55
Figure 5.3  Study 2 Research Model: Mediating & Moderating Effects  56
Figure 5.4  Experimental Design  60
Figure 5.5  Number of Pairs and Tasks in Each Experimental Group  63
Figure 5.6  Subject Demographics  63
Figure 5.7  Frequency Tables for Selected Demographic Variables  63
Figure 5.8  Descriptive Statistics Selected Variables (Individual Developer Characteristics)  64
Figure 5.9  Frequency Tables for Selected Variables  64
Figure 5.10  Standardized Cronbach's Alpha for Measures  67
Figure 5.11  Summary of Study 2 Pairs by Dependent Variables and by Group  69
Figure 6.1  High Level Research Model  102
Figure 6.2  Study 3 Research Model: Main Effects  103
Figure 6.3  Study 3 Research Model: Mediating & Moderating Effects  104
Figure 6.4  Experimental Design  109
Figure 6.5  Number of Subjects and Tasks in Each Experimental Group  111
Figure 6.6  Subject Demographics  111
Figure 6.7  Frequency Tables for Selected Demographic Variables  111
Figure 6.8  Descriptive Statistics Selected Variables  112

Figure 6.9  Frequency Tables for Selected Variables  112
Figure 6.10  Standardized Cronbach's Alpha for Measures  114
Figure 6.11  Summary of Individual Performance Outcomes  116
Figure 6.12  Descriptive Statistics Median Test  120
Figure 7.1  Box Plots of Findings by Method, Study & Setting, Task II Code  154

Three Studies of Problem Solving in Collaborative Software Development

Madeline Ann Domino

ABSTRACT

A potential solution to producing quality software in an acceptable time frame may be found by using the newer, innovative methods such as collaborative software development. The purpose of this dissertation is to examine the individual developer characteristics, developmental settings, collaborative methods and the processes during development that impact collaborative programming performance and satisfaction outcomes.

Understanding individual differences in performance in the collaborative development setting is important, since it may help us understand how the collaborative setting may raise the lowest level of performance to much higher levels, as well as how to select individuals for collaborative development. Exploring the impact of the virtual setting on collaborative development processes is important as it may help us improve performance outcomes in different work settings. Investigating how adaptations of pair programming impact collaborative processes may assist in implementing changes to the method that enhance quality and individual satisfaction.

A multi-phase methodology is used, consisting of an intensive process study (Study 1) and two laboratory experiments (Studies 2 and 3). Study 1 illustrates that collaborative programming (pair programming) outcomes are moderated by both individual developer differences and the processes used during development. While cognitive ability and years of IT experience are important factors in performance, the impacts of conflict and the faithful appropriation of the method are highlighted. Distributed cognition is used as a theoretical foundation for explaining higher performance.

Study 2 findings suggest that while collaborative programming is possible in a virtual setting, performance is negatively impacted. Face-to-face programmers have significantly higher levels of task performance, as well as satisfaction with the method, when compared to virtual programmers.

Study 3 results suggest that the use of structured problem solving (preparing test cases before writing code) may be a key factor in producing higher quality code, while collaboration may be conducive to higher levels of developer satisfaction.

By understanding how, why and when collaborative programming techniques produce better performance outcomes and what factors contribute to that success, we add to the body of knowledge on methodologies in the MIS domain.

Chapter One
Introduction

"Method goes far to prevent trouble in business: For it makes the task easy, hinders confusion, saves abundance of time and instructs those that have business depending, both what to do and what to hope."
William Penn (1644-1718)

The failure rate in software development continues to remain high. A recent U.S. Department of Commerce study concludes that software bugs, or errors, cost the U.S. economy an estimated $59.5 billion annually. Although not all software errors are likely to be removed (Glass 2003), more than a third of these costs could be eliminated by an improved testing infrastructure that enables earlier and more effective identification of software defects (Trembly 2002). It is widely recognized that the early detection of software errors in development enhances quality, since it reduces the risks and costs associated with development processes (McConnell 1996).

Problem and Its Importance

Producing quality software, in an acceptable time frame, is not a new challenge. Since the early 1980s, it has been estimated that the information technology (IT) industry has an 85% failure rate in the development of large-scale, mission-critical software (Ambler 2000). Despite efforts of the industry to remedy these shortcomings, the problem persists.

The quest for quality in software development has been underscored by the Software Engineering Institute's (SEI) ongoing efforts to assist organizations and individuals in improving their software engineering management practices. Specific to the goals of the SEI are higher code quality, greater productivity of developers, faster delivery of code, lower costs of development and better morale among employees. Capability Maturity Models (CMMs) assist organizations in maturing people, process and technology assets towards improving long-term business performance (SEI 2002).

Views of why there is such a high failure rate are varied. Some maintain that the traditional code-and-fix models are inadequate to handle the complexities of large-scale software development (Ghezzi et al. 1991) common in today's turbulent business environment. Others contend that software development is a human endeavor and that traditional methods do not place enough emphasis on associated personnel issues (Cockburn 2000, Jordan et al. 1994).

According to Fowler (2000), traditional processes are often viewed as rigid and change-resistant. As such, these methods may not always be the most appropriate for today's business climate and chaordic organizational structures. As a result, newer software development methodologies, such as collaborative programming, have emerged.

A potential solution to the problems of producing higher quality software, in reduced time, may be found by using the newer, innovative development methods. While collaboration during development has always been used, these techniques emphasize high levels of interpersonal collaboration during the entire development process (Fowler 2000). For example, an instance of collaborative programming which is gaining interest is pair programming (Beck 2000, Cockburn 2000, Williams et al. 2000). Anecdotally it is suggested that these development methods produce better quality software in reduced time, with higher levels of developer satisfaction (Beck 2000, Cockburn 2000).

The limited empirical work to date on collaborative programming (pair programming) shows mixed results. Nosek (1998) and Williams et al. (2000) found a positive relationship between the use of pair programming and performance outcomes, i.e. software quality and developer satisfaction. However, Nawrocki and Wojciechowski's (2001) research does not show these same positive results. Additionally, little explanation has been offered to explain collaborative programming outcomes (Domino et al. 2003).

As companies strive to produce better quality software, more practitioners are beginning to experiment with and use the newer innovative development methods (Biggs 2000). Current practices suggest that some managers are using variations of pure pair programming. These practitioners contend that adaptations of the method produce equally good or better performance outcomes, with greater efficiency (Manzo 2002).

While there continues to be growing interest in and use of collaborative programming, many questions remain to be answered. Does collaborative programming produce higher performance outcomes? If so, what are the underlying factors that contribute to this success? What is the impact of individual developer differences on collaborative programming success? What is the impact of the developmental setting on performance results? What impact, if any, does the collaborative method have on successful performance outcomes? How do the processes used during development contribute to success? Given the continuing need to produce higher quality software, today's current development climate offers an unprecedented opportunity to examine collaborative methods.

Purpose of the Study

The purpose of this study is to examine the individual developer characteristics, developmental settings, collaborative methods and processes during development that impact collaborative programming performance outcomes, i.e. task performance and satisfaction. The underlying premise of this study is that successful collaborative outcomes, especially fewer defects, are driven by these factors.

Understanding differences in performance and productivity between individual programmers is important, as it may help us understand how we may raise the lowest level of performance to much higher levels, as well as aid in the selection of individuals for collaborative development. The current work environment often calls for virtual software development, in which pairs may not be in the same place at the same time. Therefore, exploring the impact of the development setting on collaborative development processes is important, as it may help us improve performance outcomes in different work settings. Investigating how adaptations of the pair programming method impact collaborative processes may assist in implementing changes to the method that enhance productivity, efficiency and individual satisfaction.

Research Questions

A multi-phase methodology is used, consisting of an intensive process study and two laboratory experiments. The results of these studies facilitate our understanding of collaborative software development practices, with an eye towards improving these methods and related performance outcomes. Gaining an increased understanding of this innovative software development method is of importance to researchers and practitioners alike. The major research questions are:

1. Within the context of collaborative programming, how do individual developer characteristics and the processes used during collaborative programming impact performance outcomes?

2. Within the context of collaborative programming, does the developmental setting impact related performance outcomes and the processes used during collaborative programming?

3. Within the context of collaborative programming, do variations in the developmental method impact related performance outcomes and the processes used during collaborative programming?

Results

The results of the three studies are now briefly presented. An analysis of the results of Study 1, which is a process study, provides evidence that collaborative programming (pair programming) outcomes are moderated by both individual developer differences and the processes used during development. The qualitative analysis shows that while cognitive ability and years of IT experience are important factors in performance, the impact of conflict and the faithful appropriation of the method are important as well. Distributed cognition is used as a theoretical foundation for explaining higher performance when developers collaborate.

Study 2 focuses on developmental setting. The results show that while collaborative programming (pair programming) is possible in a virtual setting, performance is negatively impacted. Face-to-face programmers have significantly higher levels of task performance, as well as satisfaction with the method, when compared to virtual programmers.

Study 3 focuses on variations, or adaptations, of the collaborative method (pair programming). The findings suggest that the use of structured problem solving (test cases) before writing code may be a key factor in producing higher quality code. The study also suggests that collaboration results in higher levels of developer satisfaction.

Contributions

This dissertation addresses the need for more research on the newer software development methodologies. By understanding how, why and when collaborative programming techniques produce better performance outcomes, it is hoped that IT (information technology) professionals may better address the quality issues that are prevalent in the industry today. Additionally, the study extends our knowledge of important organizational issues related to collaborative programming methods, personnel selection, and training. And finally, the research adds to the body of MIS (management information systems) knowledge, as researchers continue to examine the newer, innovative software development methodologies.

Overview of the Dissertation

The remainder of the dissertation is organized as follows: Chapter Two contains a review of the literature on software development, innovative methods and related materials from a multi-theoretical perspective and a variety of domains. Chapter Three discusses the high-level research model and research questions. Chapter Four describes the results of an intensive process study on collaborative programming (pair programming). Chapters Five and Six describe two laboratory experiments on collaborative programming. Chapter Seven presents research contributions and future research directions.

Chapter Two
Literature Review

In order to perform this research, an extensive literature review was conducted. The literature review included in this chapter draws from a number of research domains including information technology, computer science, psychology, organizational psychology and management. This chapter is organized as follows: first, the research context is more clearly defined by giving a brief historical account of software development and related methodologies. Then an overview of the newer, innovative collaborative methods is discussed. This is followed by a presentation of the theoretical foundations for this research, with a focus on the constructs and variables of interest used in the study.

Background on Software Development

According to Pressman (1992), an early definition of software engineering was proposed by Fritz Bauer at the first major conference dedicated to the subject. The definition included the application and use of sound engineering principles in order to ensure that software could be developed economically, reliably and efficiently, in a machine-like manner. Although many more comprehensive definitions have since been offered, all enforce the requirement for engineering discipline in software development (Pressman 1992). Central to this theme is the idea that the more disciplined the software methodology rules and practices, the better one's ability to create software with consistent quality and predictable results (Fowler 2000).

Software has been defined as computer code, or programs. Formally defined as information, software has the following three characteristics: it is structured with logical and functional properties; it is created and maintained in various forms and representations during the software system's development life cycle; and it is tailored for machine processing in its fully developed state (Donaldson and Siegel 2001).

While classical definitions of software development incorporate the necessary functional components, little light is shed on defining successful software development practices. Donaldson and Siegel (2001) define successful software development as the ability to produce good quality software on a consistent basis. As such, successful development calls for an organizational way, or mechanisms, of developing processes that promote effective communication and continually reduce associated risk. This organizational way calls for well-defined business practices, yet must allow for adaptation. It is congruent with the general quality movement, which continues to take place within the software development industry.

The quest for quality in software development has been underscored by the Capability Maturity Models (CMMs) used to assess a group's capability to develop software in a disciplined, measured way that supports continuous process improvements. These standards attempt to incorporate a process-people-technology triad needed to perform the discipline effectively. In this triad, process is defined as a set of practices performed to act in a given purpose. As such, it may also be considered the unifying glue that holds together all the other components needed to perform that discipline (SEI 2002). For example, a process related to collaborative programming is the protocol of producing test cases first, before writing the associated code.

Why focus on process? Process provides a constructive, high-leverage focus, as opposed to focusing on people or technology. The underlying premise, according to Deming and Humphrey, is that the quality of a product is largely determined by the quality of the process that is used to develop and maintain it (SEI 2002).

Traditional management practices of software development view the development process as something that is planned and controlled, in order to achieve reliability of the planned results. The underlying premise is that if the process can be controlled, then it is beneficial to the outcome of the process (Riehle 2000). Central to this theme is the idea that the more disciplined the software methodology rules and practices, the better one's ability to create software with consistent quality and predictable results (Fowler 2000).

Software Development Methodologies

Software development methodologies are defined as how an organization chooses to organize people and resources to create and maintain applications (McBreen 2003). Most of the systems methodologies used until the mid 1990s have their origins in a set of concepts that came to prominence in the 10-year period between 1967 and 1977. In this context, various life cycle models have been offered.

Early developers, typically scientific researchers with mathematical or engineering backgrounds, developed their own programs to meet their particular area of interest. Thus early programmers operated in an environment which Friedman (1989) characterizes by very loose responsibility and autonomy and with very little management control or focus. This paradigm for development changed with the advent of the systems development life cycle (SDLC).

The classic life cycle for software engineering and development, called the waterfall model, demands a systematic and sequential approach to software development that begins at the system level and progresses through analysis, design, coding, testing and maintenance. The move to the systems development life cycle (SDLC) represented a shift towards tighter control in the development process. While there were benefits associated with these methods, such as high levels of documentation, it became apparent that they often lack the flexibility needed to make changes quickly. As a result, innovative techniques and processes were developed, the scope of which focuses on flexibility and responsiveness in meeting rapidly changing business needs.

The late 1980s and 1990s saw the emergence of the object-oriented methodologies and rapid development techniques (Fitzgerald 1997). Hough (1993) proposed a rapid delivery approach to software development, with the main thrust of producing frequent tangible results every few months, as progressive levels of functional capability are delivered. Prototyping, the spiral model and fourth generation techniques combined the best features of the classic approach and their predecessors (Pressman 1992).

Rapid Application Development (RAD) emerged in the early 1990s. This development method promises shorter software delivery cycles. In pushing for speed, RAD methodology promotes a collaborative environment that thrives upon dealing with uncertainty, iterative learning, working with customers and synchronizing concurrent development methods (Highsmith III 2000). The need for flexibility and adaptation continued to grow, as did the continuing quest for quality. As a result, the Adaptive Development Model (ADM) and other resulting lightweight processes emerged in the late 1990s (Fowler 2000).

These new lightweight processes have few rules and a modest number of practices and place high levels of focus on people and collaboration. Additionally these methods help to create a clean, concise development environment, with an emphasis on meeting changing business needs quickly (Fowler 2000).

The Business Case for Innovative Development

A significant failure rate exists in software development projects (Ambler 2000) and hoped-for improvements in quality continue to disappoint. Producing quality code in acceptable time frames is of increasingly greater importance as the competitive business environment continues to intensify.

Many of the methodologies in use today are derived from practices and concepts relevant to a very different organizational and business environment. Accordingly there is a need to reconsider their role in view of newer organizational forms, work practices and the ever increasing complexity of applications (Fitzgerald 1997). The demand for higher quality software production in shorter time periods continues. Traditional methodologies that worked well in the past may not always be a viable solution to today's business problems. Therefore, organizations must be open to implementing new development techniques.

Collaborative Programming

Agile methods are a set of development methods, derived from good practices and organized in an innovative process structure. The most immediate differences in these new agile methods are that they are adaptive and they are people-oriented (Fowler 2000). As such, these methodologies are designed to enable rapid response to change while producing quality code in less time.

One example of an agile method is collaborative programming; a variation that is rapidly gaining interest is pair programming. Developer collaboration has long been acknowledged by practitioners as a good development approach (Brooks 1985). The pair programming method involves two developers working together in intense collaboration, producing one artifact. One developer takes the role of the driver, writing the code and using the keyboard, while the other functions as the navigator, monitoring results, looking for specific details and strategic defects. Periodically each partner switches roles, resulting in a highly interactive development process (Beck 2000).

Another distinctive and differentiating feature of collaborative programming is that the developers write tests first and then write the associated code. This somewhat unconventional sequence of formulating test cases before writing code is believed to be the reason for the reduced defect rate in code developed with pair programming (McBreen 2003).

The concept of jelling is thought to be an important element of pair programming. Humphrey (2000) speaks to the concept of jelling in his work on the team software development process. A jelled team seems to perform beyond itself, since the members support one another and intuitively know when and how to help each other. Membership is equally rewarding, as people remember the joy of meeting a tough challenge and a job well done. DeMarco and Lister (1999) also believe that jelling makes people more productive and goal-directed.

It is clear that agile methods are gaining great interest as the software industry strives for better production quality. Albeit on a limited basis, Ford Motor Company, Caterpillar Corporation (Biggs 2000) and John Hancock Corporation (2002) have reported using the collaborative programming technique.

Of particular interest is how implementation is taking place. Variations, or adaptations, of the standard collaborative method appear to be gaining particular favor. For example, a recent issue of Crosstalk (Manzo 2002) cited several instances of developers brainstorming together and then writing code alone, claiming that the technique produces better quality code. Many practitioners view these variations in the collaborative programming method as more cost effective than the standard collaborative technique, which is criticized as being too resource intensive.

It has further been suggested that the method is best suited for more difficult and complex programming assignments. John Hancock Corporation (2002) uses a variation of pair programming for technically complex development projects (personal communication 2003). Their work design incorporates brainstorming at a white board, while programmers and lead testers develop user stories. Then developers program in pairs utilizing a variation of the standard collaborative protocol, in that the test first, code later sequence is not followed. This process is followed for only the most difficult or complex modules. Hence, collaborative programming encompasses approximately twenty percent of total development time. For the remaining portions, developers code alone. Periodic brainstorming also takes place on an as-needed basis.
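To make the test-first sequence concrete, the sketch below writes a small set of test cases before the module they specify, loosely in the spirit of the Compute Mowing Time task used as experimental material in this dissertation (Appendix B). It is an illustrative sketch only; the function name, signature and expected values are assumptions made for exposition, not the actual task definition.

    import unittest

    def mowing_time_minutes(lawn_area_sqft, mowing_rate_sqft_per_min):
        """Hypothetical module under development: minutes needed to mow a lawn."""
        if mowing_rate_sqft_per_min <= 0:
            raise ValueError("mowing rate must be positive")
        return lawn_area_sqft / mowing_rate_sqft_per_min

    class MowingTimeTests(unittest.TestCase):
        """Under the test-first protocol, these cases are formulated
        before the code above is written."""

        def test_typical_lawn(self):
            self.assertAlmostEqual(mowing_time_minutes(6000, 200), 30.0)

        def test_zero_area(self):
            self.assertEqual(mowing_time_minutes(0, 200), 0.0)

        def test_rejects_nonpositive_rate(self):
            with self.assertRaises(ValueError):
                mowing_time_minutes(6000, 0)

    if __name__ == "__main__":
        unittest.main()

In a pair programming session the driver would type the failing tests and then the code, while the navigator reviews the cases for missing boundary conditions; the reduced defect rates attributed to the method are claimed to follow from exactly this kind of early, explicit specification.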

Prior Research on Software Development Methods

In response to continued development problems and failures, a significant body of research in management information systems (MIS) on software development methodologies has grown. Most prior research has centered on traditional methodologies, such as the waterfall model. Except for some research on object-oriented methods, minimal work in MIS has focused on the newer innovative software development models.

The research to date on collaborative programming is limited and the findings are mixed. Flor and Hutchins (1991) study two developers working on a software maintenance task. They theorize that distributed cognition explains enhanced performance outputs. Distributed cognition refers to the knowledge representation both inside the heads of the individuals and in the world, and the propagation of knowledge between individuals and artifacts. In an intensive process study, the researchers video and audio taped the two programmers, coding their utterances and non-verbal behaviors to various themes of distributed cognition. These dimensions include sharing goals, sharing memories, more efficient communication and expansion of search alternatives, i.e. based on different prior experiences and understanding of different relationships and tasks. This study illustrates the numerous dimensions of distributed cognition as an explanation of higher levels of performance.

In a laboratory experiment, Nosek (1998) compares performance outcomes for five experienced programmers working alone on a programming task to those of five pairs of experienced programmers. According to his findings, the collaborative pairs produce higher quality code and complete the task forty percent more quickly. However, only one 45-minute task was included in this research.

Williams et al. (2000) conduct an experiment in which computer science students utilize pair programming to complete three experimental tasks. They find that after an initial jelling period, paired programmers produce higher quality code (15% fewer defects) in shorter time (completed in half the time), with a reported 95% rate of higher developer satisfaction. These study results are limited, however, by the lack of control in the experiment. For example, subjects did not perform the experimental tasks in a laboratory setting and time on task was self-reported.

The results of another experimental evaluation of pair programming, utilizing student subjects (Nawrocki and Wojciechowski 2001), call into question the findings of Nosek (1998) and Williams et al. (2000). Their results show pair programming to be less efficient than claims made by earlier researchers, with no real difference in quality occurring.

Additionally, little empirical evidence has been offered (Nosek 1998, Williams et al. 2000) to explain the exact sources of the gains in quality that have been reported. Flor and Hutchins' (1991) thesis on distributed cognition offers some glimpse as to why performance outcomes are improved. Given these inconclusive findings on collaborative programming and the limitations in the research designs employed, the call for more research on the topic is warranted.

Theoretical Basis for the Research

The essential elements that collaborative programming embraces are people working together in a highly interactive mode. As such, collaborative programming relies heavily on the interpersonal interactions of those who work together. These interactions may have impacts on both the processes used during development and related performance outcomes. The constructs and variables that make up the research questions are now discussed. They are selected based on their importance to the collaborative programming methods, prior research and those of interest to the researcher.

Collaborative Work

Collaboration is widely used today in organizational settings and is an essential part of the programming method being studied. In fact, it is estimated that nearly all the Fortune 500 companies use some form of collaborative work for problem solving and conducting business (Dumaine 1994, Lawler and Cohen 1992). According to McGourty and Meuse (2001), the collaborative approaches to solving business problems add a powerful dimension to the workplace and are more than likely to continue to be prevalent in the business arena.

The continued pressure to respond to increased global competition has stimulated the search for new ways to work more effectively and efficiently. A prominent aspect of effectiveness in meeting customer demands often requires enhanced product development and innovation (Lawler 1994, Hammer and Champy 1993). Additionally, because competitive pressures have continued to drive corporate restructuring, smaller and flatter organizations require that employees take a greater role in deciding how work gets done, thus being more self-directed (Manz and Sims 1993, Wellins et al. 1991). And finally, the increasing complexity of many tasks and projects makes it increasingly difficult for individuals to perform them alone (McGourty and Meuse 2001). Theories of collaborative work are now presented.

Collaboration

Researchers have defined collaboration in multiple ways. The usual focus is on individuals acting jointly in the interests of solving some well-formed problem. Rochelle and Teasley (1994) define collaboration in the context of mutual engagements of individuals working together in a coordinated effort, in which problems are solved together. It is often said, however, that people are collaborating even if it is not so clear that they are actually solving a problem. This modification of the definition allows for the notion that collaborative engagements refer to coordinated efforts to build common knowledge.

Straus (2002) uses a number of terms to define collaboration, such as collaborative action and collaborative problem solving, when referring to the process people use when working together in a group or organization to plan, create, solve problems and make decisions. A problem is defined as "a situation someone wants to change" (p. 19). Problem solving involves a changing situation, which encompasses decision-making and planning. All kinds of creative activities are also involved, such as designing, exploring new opportunities, engaging in appreciative inquiry, visioning, learning and communicating. Problem solving and, specifically, collaborative problem solving is a process that is largely independent of content (Straus 2002).

A number of researchers have explored various frameworks relative to cognition and problem solving, particularly as it relates to computer programming. Newell and Simon (1972) suggest that human problem solving is an educated trial and error process, or heuristic problem solving (Figure 2.1). This theory is oriented towards explaining behaviors seen in protocols or transcriptions of verbal behavior as subjects talk aloud while performing programming tasks. Neisser (1967) maintains that it is possible to describe how you are solving a problem and that it is helpful to do so.

[Figure 2.1: Heuristic Cycle of Human Problem Solving (Newell and Simon 1972). Its elements are: problem, inventory of heuristic problem solving strategies, strategy selection, implementation and evaluation.]

Straus (2002) elaborated on Newell and Simon's model by reviewing the works of researchers in different disciplines. According to Straus, although differences exist in terminology, problem-solving methodologies could be applied to many different contexts. One common method is brainstorming.
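Read as an algorithm, the cycle in Figure 2.1 amounts to a loop over strategy selection, implementation and evaluation. The sketch below is one illustrative rendering in Python; the function names and the toy stopping rule are assumptions made for exposition, not part of Newell and Simon's model.

    import random

    def heuristic_solve(initial_state, strategies, is_solved, max_cycles=100):
        """Educated trial and error: select a strategy from the inventory,
        apply it, evaluate the result, and repeat."""
        state = initial_state
        for _ in range(max_cycles):
            if is_solved(state):                   # evaluation
                return state
            strategy = random.choice(strategies)   # strategy selection
            state = strategy(state)                # implementation
        return None  # effort budget exhausted without a solution

    # Toy usage: reach at least 100 starting from 1 with two candidate strategies.
    result = heuristic_solve(
        1,
        strategies=[lambda x: x + 7, lambda x: x * 2],
        is_solved=lambda x: x >= 100,
    )

A real problem solver selects strategies on the basis of experience rather than at random; the point of the sketch is only the cyclical structure of selection, implementation and evaluation.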

Brainstorming

Brainstorming involves a spontaneous expression of all ideas, with all individuals encouraged to rework or elaborate upon the initial results. Despite a general belief in the efficacy of group brainstorming, the research literature is mixed at best: much of it has found group brainstorming to be less effective than solo brainstorming (working alone), including electronic brainstorming (Pinsonneault et al. 1999, Dennis and Garfield 2003). Conceptually, however, in the context of the generation of novel ideas, brainstorming may hold some promise in that the unique sharing of the ideas of others should generate novel approaches not thought of alone.

The concept of brainstorming as an interactive session, specifically targeted on creative insight, is offered as an effective method for collaborative activities (DeMarco and Lister 1999). The practice is a common way of generating new ideas. As such, it involves multiple steps and multiple heuristics, with distinctive steps such as expressing ideas aloud, listing or recording ideas, and deferring evaluation. Thus, the brainstorming technique can be viewed as a group of smaller components recombined in the heuristic process, or some other problem solving method.

Software developers are knowledge workers whose work involves problem solving. Whenever two individuals work together on complex tasks, as in the case of collaborative programming, individual differences such as cognitive ability, experience and conflict handling style can impact performance outcomes. The individual characteristics used in this study are now discussed.

Cognitive Ability

Simply defined, cognitive ability is synonymous with problem solving ability. Although researchers have offered numerous definitions, a common theme is that cognitive ability refers to an individual's capacity to process and comprehend information (Murphy 1989, Walden and Spangler 1989). As such, cognition is of particular relevance to the study of the intellectual activities associated with software development (Kemerer 1997).

A fundamental goal of cognitive science is to develop a theoretical system that specifies how people function. The term theoretical system describes a model used to explain how information processing works. Theoretical systems involve specifications of basic cognitive operations that transform input information into new and more useful forms. Cognitive mechanisms are concerned with the parameters of the information processing system which limit its efficiency in dealing with large amounts of information. They determine the speed with which an individual can encode and manipulate information (Davis and Anderson 1999). The processing of information can be conceptualized as occurring within theoretical systems called cognitive architectures. Newell et al. (1989) define cognitive architectures in terms of a fixed system of mechanisms that underlies and produces cognitive behavior. These architectures define the nature and organization of memory, primitive (easily performed) cognitive operations and a control structure that sequences information processing (Dunnette and Hough 1991).

Numerous theories have been offered to explain cognitive activity and related outputs. Atkinson and Shiffrin (1968) and Simon and Kaplan (1989) develop a standard cognitive architecture, which consists of very short-term visual and auditory sensory stores, a limited capacity short-term memory and a long-term memory with limitless capacity. This architecture emphasizes symbolic process, which allows people to represent the world in terms of an internal mental model, applied to rules to form inferences (Johnson-Laird 1989).
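As a rough illustration, the standard architecture just described can be caricatured as a data structure: a rapidly overwritten sensory store, a capacity-limited short-term memory and an effectively unbounded long-term store. The class below is a toy model for exposition only; the capacity constant and method names are assumptions, not part of the cited theories.

    from collections import deque

    class StandardCognitiveArchitecture:
        """Toy model of the short-term / long-term memory architecture described above."""

        STM_CAPACITY = 7  # illustrative "limited capacity"; the exact bound is not specified here

        def __init__(self):
            self.sensory_store = []  # very short-term; overwritten by each new perception
            self.short_term = deque(maxlen=self.STM_CAPACITY)  # oldest items are displaced
            self.long_term = set()   # effectively limitless capacity

        def perceive(self, stimuli):
            """New visual or auditory input replaces the current sensory store."""
            self.sensory_store = list(stimuli)

        def attend(self):
            """Attended sensory items move into short-term memory, displacing the oldest."""
            for item in self.sensory_store:
                self.short_term.append(item)
            self.sensory_store = []

        def rehearse(self):
            """Rehearsal encodes the current short-term contents into long-term memory."""
            self.long_term.update(self.short_term)

The deque with a fixed maxlen mirrors the limited-capacity short-term memory: appending an eighth item silently displaces the first.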


Cognition and Performance

By far one of the most studied individual differences with linkages to job performance has been cognitive ability. The findings of Schmidt et al. (1986) posit that job knowledge is the most immediate link between cognitive ability and performance. They found that individuals with higher cognitive ability tend to develop greater understandings of job duties, as compared to their counterparts with lower cognitive ability.

General cognitive ability is a significant predictor of job performance in a variety of settings, with meta-analyses of this relationship reported at .30 (Bobko et al. 1999) and .29 (Schmitt et al. 1997). Individuals with higher cognitive ability are typically better at problem solving, information processing (Schmitt et al. 1997) and learning and adapting to new situations (Hunter 1986). LePine et al. (2000) found that cognitive ability explains most of the variance in job performance on intellective tasks, with that effect even stronger when the task is changed. General cognitive ability is a better predictor of performance in jobs that have a high level of complexity as compared to jobs with lower complexity (Hunter 1986). Wood (2000) finds that the strong relationship between general cognitive ability and job performance holds in the specific context of system developers, with ability a stronger correlate of performance than experience.

As previously mentioned, computer programming is of particular interest to the study of problem solving. It is viewed as a complex task, one that is strongly influenced by high mental demands and information processing. Metacognition is concerned with the methods or strategies that different individuals apply to tasks; following the analogy between the human mind and a computer, the metacognitive level refers to the available software. When a task is reasonably complex, as intellectual tasks tend to be, there is considerable scope for subjects to choose different strategies, not all of which are equally efficient (Davis and Anderson 1999).

Task Domain Experience and Performance

Experience is another individual difference that has been examined by researchers as a link to performance outcomes. Like cognitive ability, empirical evidence suggests that experience has a strong positive linkage to performance (Jex 2002). McDaniel et al. (1988) and Schmidt and Hunter (1988) find that the relationship between experience and job performance is mediated by job knowledge.

Experience has been found to be a better predictor of performance in low rather than highly complex jobs. The importance of experience in explaining performance outcomes also appears to diminish over the time of job incumbency. McDaniel et al. (1988) found that the correlation between experience and performance was strongest in samples where the average level of experience was less than three years; however, the correlation is considerably weaker in samples where the average level of experience is higher.


Most studies of the impact of experience on performance measure years within an organization. Quinones et al. (1995) suggest that this relationship must be viewed both in terms of quantity and quality. Tesluk and Jacobs (1998) propose that experience should also be viewed in terms of density (the amount of exposure to developmental experiences), as well as the timing of developmental job experiences. Given these factors, research using this variable must be viewed with a critical eye. Since producing quality code, as is promised by collaborative programming, requires an understanding of both programming languages and general IT business knowledge, this study utilizes years of IT experience as a covariate.

Performance

At the most general level, job performance may be defined as encompassing all of the behaviors employees engage in at work. This imprecise definition includes many behaviors that may not be related to organizational goals or the task; however, task performance alone may exclude other related behaviors that impact performance outcomes (Jex 2002). Campbell (1990, 1994) proposes a job performance model which incorporates the interaction of a number of variables, including declarative knowledge, procedural knowledge (skill) and motivation. According to Campbell, declarative knowledge is the knowledge one possesses about tasks and things, and may be attributable to numerous factors including ability, personality, training, experience and interaction with others. Once a high level of declarative knowledge is obtained, a high level of procedural knowledge or skill can be attained, which means that one is capable of high levels of job performance. Motivation determines whether actual results are obtained.

Figure 2.2 summarizes the most important individual differences related to job performance (Jex 2002). Conscientiousness refers to a personality trait (Barrick et al. 1993) which has been linked to higher levels of productivity.

Figure 2.2 The Most Important Individual Difference Predictors of Job Performance (Jex 2002): general cognitive ability, job experience, job knowledge, goal setting, conscientiousness and job complexity as predictors of job performance


The empirical research to date is somewhat unclear. Although often linked directly to productivity in research studies (Downs et al. 1988, Belanger and Collins 1997), performance typically can be measured by the quality of the outputs. In the case of collaborative programming, the quality of performance has typically been measured by fewer code errors, as well as by reductions in total time on task. Anecdotally, many practitioners have predicted or reported an increase in performance quality when utilizing collaborative programming.

Organizational psychologists have studied a variety of levels of analysis as they relate to workplace structure. Among these are the individual, the dyad and the group (Triandis et al. 1994). The classic definition has considered a team as a group of three or more individuals working together for a common goal. In this context, collaborative programming represents a small group or team.

A number of researchers have studied performance at the group level. McGourty and Meuse (2001) develop a model of team performance that highlights the impact of both internal and external (including task) factors on performance outcomes. Although related to teams (defined generally as three or more individuals), this model has relevance to the dyad structure used in collaborative programming. McGourty and Meuse (2001) identify four behaviors as key elements of performance outcomes: communication, decision-making, collaboration and self-management. Other researchers have elaborated on this framework. Campion et al. (1993) find that good communication involves a free exchange among members. They present a model of team performance that includes both internal factors (such as cohesiveness, communications, decision making) and external factors. Thompson (2000) finds that decision-making is done best by the team.

When working in groups, the highest performance is found when the average cognitive ability of the group is higher, although in some cases higher-cognitive-ability members of the group compensate for a low-cognitive-ability member (Barrick et al. 1998, Taggar et al. 1999). The strong and consistent importance of cognitive ability in job performance, with increased impact when tasks are novel, argues for including cognitive ability in studies of newer, less familiar software development methods. The findings from psychology suggest that in groups, cognitive ability may have both additive (group average) and compensatory (higher-ability group members help lower-ability group members) impacts on performance, making it particularly relevant to collaborative development environments.

Conflict

Whenever two individuals work together, as in the case of collaborative programming, inevitably conflict arises. Conflict begins when one individual perceives that his or her goals, attitudes, values or beliefs are incongruent with those of another individual, and this incongruity produces interference between the individuals. Capozzoli (1999) finds that the presence of conflict does not always produce negative results and, in some instances, can enrich outcomes.


Most apparent are the negative consequences of conflict, which can include low work efficiency resulting from negative interpersonal interactions. Possible positive consequences include enhanced creativity and innovation, higher quality decision-making and improved mutual understanding (Rahim 1983b).

Lewin (1948) identifies two distinct forms of conflict: one related to tasks, or goals, and the other to relationships, also termed affective conflict. Task or goal conflict occurs when the preferred outcomes between two parties appear to be incompatible, while interpersonal or affective conflict arises from feelings or emotions that are incompatible. Past research finds that disagreement about a task is the most beneficial type of conflict, with low to moderate levels of task conflict alone leading to the highest performance. Task conflict can assist those involved by clarifying how work should be done and by the process of jointly determining how to proceed. On the other hand, affective conflict by definition involves emotional content between the parties that can undermine rational problem resolution, reduce communication, and strain relationships. Jehn (1995) found that relationship conflict is always detrimental, regardless of task. Pelled (1996) confirmed the findings that emotional conflict and performance are negatively related. In later studies, Milliken et al. (1996) and Jehn (1997) confirm that low to moderate levels of task conflict can be constructive and can positively impact outcomes, but interpersonal conflict causes negative, less desirable outcomes.

One factor that has an important impact on conflict is the style that an individual uses to handle conflicts. Blake and Mouton (1964) present a conceptual schema for classifying modes of handling conflict into five distinctive modes or styles. Later, Rahim (1983b) and Rahim and Bonoma (1979) differentiate each mode based upon two basic dimensions: concern for self and concern for others (illustrated in Figure 2.3).

Figure 2.3 Conflict Handling Styles (Rahim and Bonoma 1979): styles arranged by concern for self and concern for others; integrating (high self / high others), obliging (low self / high others), dominating (high self / low others), avoiding (low self / low others) and compromising (intermediate on both)


The five conflict handling styles are:

Integrating: This mode involves high concern for self, as well as for the other party, and has been described as problem solving, collaboration, cooperation, solution oriented, or a win-win style.

Obliging: This mode, which involves low concern for self and high concern for the other party, has also been called accommodation, non-confrontational, yielding, or a lose-win style.

Dominating: This mode involves high concern for self and low concern for the other party. It has also been called competing, controlling, contending, and a win-lose orientation, characterized by forcing behaviors in order to win one's position.

Avoiding: This mode involves low concern for self as well as for the other party and is also called inaction, withdrawal, ignoring, buck-passing, or a sidestepping style.

Compromising: This mode involves concern for self, as well as for the other party, and is also called a mixed motive style, since it involves give and take, or sharing, where both parties give up something (Rahim 1983a, 1983b; Rahim and Bonoma 1979).

The literature indicates that the more cooperative conflict management styles (in which a meaningful amount of concern is shown for the other party), and in particular the integrating style, are likely to produce positive individual and organizational outcomes, while less cooperative styles (in which little concern is shown for the other party) frequently result in the escalation of conflict and negative outcomes (Burke 1970, Korbanik et al. 1993, Rahim 1983b). This occurs because the integrating style attends to both the outcome as well as the effect of the conflict process on the relationship between the parties in conflict. The lack of concern that accompanies less cooperative styles often leaves the parties with a lack of trust and little basis for a future relationship.

Conflict in ISD

A number of researchers have examined the impact of conflict on the information systems development (ISD) process. Cohen et al. (2004) find significant task and affective conflict between software developers and testers. Newman and Robey (1992) state that the generation and resolution of conflict are of central theoretical interest to information systems development. They examine patterns or episodes of conflict (viewed as a set of events judged as critical to the interaction between developers and users) and find that conflict affects performance outcomes. Therefore, understanding the nature and effect of these conflict episodes on work processes is essential to achieving ISD success. Barki and Hartwick (2001) also find that interpersonal conflict consistently and negatively affects IS development outcomes. These findings on the impact of task and interpersonal conflict on ISD are also validated by Trimmer et al. (2000).

Because conflict has the potential to interfere with desired performance outcomes, and conflict is more likely to occur in intensely collaborative work settings, it is important to understand how conflict impacts the collaborative programming setting.


Also, the manner in which pairs handle conflict may be an important variable in success or failure.

Task Design

The importance of the study of an individual's tasks in an organizational setting is underscored by the long-term interest it has garnered from organizational scientists and researchers alike (Taylor 1911, Walker and Guest 1952, Herzberg et al. 1959, Hackman and Oldham 1976, 1980, Griffin 1987). Frequently labeled as task design or job design, theory and research in this domain attempt to describe successful strategies to enhance such organizationally relevant criterion variables as performance, motivation and satisfaction (Griffin 1987).

Task design is clearly an important topic for research. First, and perhaps foremost, is the fact that an individual's task represents one of his or her most basic and fundamental points of contact with the organization. Second, by its very nature, task design has a potential for various change interventions, which could enhance organizational outcomes. Third, task design relates to employee well-being, and has been identified as a key part of most quality of work life programs (Griffin 1987).

Ang and Slaughter (2001) find that job design emerges as an important factor for both permanent and contract information systems workers, influencing work attitudes, behaviors and performance. Their study breaks out job design by a number of constructs related to task, including variety, identity (the opportunity to complete an entire piece of work), significance (to other tasks in the organization), autonomy (freedom, independence, discretion) and feedback about effectiveness and performance. Their results imply that organizations should carefully design and balance the job's tasks in order to improve workplace attitudes, behaviors and performance.

Numerous theoretical perspectives and models have been developed which offer various perspectives on task design. A fundamental step in establishing a framework for task design is shown in Hackman and Lawler's (1971) Task / Job / Role Dynamics Network (Figure 2.4). In this model, antecedent factors such as task objective, task setting, individual characteristics and social setting influence perceived task and job dynamics.


Figure 2.4 The Task / Job / Role Dynamics Network (Hackman and Lawler 1971): antecedent factors (objective task properties, physical setting (job / role context), individual attributes and characteristics, social setting (job / role context)) feeding perceived task dynamics and perceived job / role dynamics

Hackman and Lawler (1971) argue that a task can be described in terms of certain attributes, which, in turn, influence employee motivation, satisfaction and performance, and that individual differences moderate these relationships. According to job characteristics theory, the specific attributes of the job that are presumed to affect these outcomes include autonomy, identity, variety and feedback. As the framework is more fully refined, Hackman and Oldham (1976, 1980) add significance to the list of attributes included in their fully integrated model (Figure 2.5). Mediating internal stable states and external expressed states are included as factors which impact performance outcomes. In this research study, we explore the impact of individual developer characteristics, physical setting (development setting), and task on collaborative programming outcomes.


Figure 2.5 The Complete Integrated Model (Hackman and Oldham 1976, 1980): the antecedent factors and task / job / role network of Figure 2.4, extended with mediating factors (other workplace characteristics, task / job / role instrumentalities, social comparison / evaluation processes, societal / cultural dimensions), internal / stable states (cognitive impressions of the task / job / role, specific and general satisfactions, behavioral propensities) and external / expressed states (emotive expressions of network evaluations and perceptions, affective expressions of feelings toward task / job / role elements, actual behaviors relative to the task / job / role, stimuli for assessment)

Virtual Developmental Setting

Cockburn (1999, 2000) posits that people are a critical success element of the newer, innovative software development practices, with communication standing out as the most significant factor. A common theme of the new technologies is face-to-face, collaborative communication, as when two or more developers work together at the same workstation. However, today's dynamic business environment does not always support face-to-face, same-location collaboration. The virtual team is defined as a group of geographically and organizationally dispersed knowledge workers brought together across time and space through information and communication technologies on an as-needed basis, in response to specific customer needs or to complete unique projects (DeSanctis and Poole 1997, Jarvenpaa and Leidner 1998, Lipnack and Stamps 1997). Such teams are fast becoming an ever-increasing facet of the business landscape, and given the current business climate, one can venture to predict that their use will only continue to increase at a rapid rate. Research interest in virtual teams has grown accordingly in recent years.


Media Richness and Communication Modalities

Media richness theory (Daft and Lengel 1984, Daft et al. 1987) describes organizational communication channels as possessing a set of objective characteristics that determine each channel's capacity to carry rich information, with rich information being more capable than lean information of reducing equivocality in a message receiver. All communication channels (telephone, conventional mail, email) possess attributes that lead to distinct, objective richness capacities. Media richness then refers to channels' relative abilities to convey messages that communicate rich information (Carlson and Zmud 1990).

Media richness theory has generally been supported when tested on so-called traditional media, such as face-to-face communication, telephone, letters and memos (Lengel and Daft 1988). However, inconsistent empirical results have been obtained for so-called new media such as electronic mail and voice mail (Markus 1988, Rice and Shook 1989, Trevino et al. 1990, Webster and Trevino 1995). Plowman (1995) and Sillince (1996) show that communication effectiveness drops as modalities and timing are removed. Many software design techniques (CRC card modeling, role-playing, designing on a whiteboard) take advantage of a person talking, moving, and acting while thinking (termed kinesthetic or multi-sensory thinking). Cockburn (2000) reports that practitioners repeatedly cite these collaborative design practices as very effective; however, little research has been done on this topic relative to software development.

Based on these theories of communication effectiveness and the findings of Jarvenpaa et al. (1998), Jarvenpaa and Leidner (1999) and Lipnack and Stamps (1997) relative to the lack of face-to-face interaction in virtual teams, obstacles to effective coordination, collaboration and communication may be more salient. In short, in the virtual team setting it is anticipated that collaborative programming outcomes will suffer.

Faithfulness to Method

As previously mentioned, the collaborative programming methodology requires developers to follow a prescribed set of structures or processes while performing their task. For example, pair programming involves two individuals working together in distinct roles, writing test cases before writing code. As such, how faithful the pair is to this set of structures or methods is believed to impact the collaborative process and resulting outcomes.

In articulating Adaptive Structuration Theory (AST), Poole and DeSanctis (1989, 1990) point out that group outcomes, rather than resulting directly from the effects of variables such as technology and task, reflect the manner in which groups appropriate the structures of the technology and the context of its use. Appropriation refers to the manner in which structures are adapted by a group for its own use through a process called structuration, wherein structures are continuously produced and reproduced (or confirmed) as the group's interactive process occurs (Gopal et al. 1992-3).


AST posits that the mode in which structures are appropriated is determined by three dimensions: the faithfulness of the appropriation, the group's attitude towards the structures and the group's level of consensus on appropriation. Appropriation refers to the manner in which structures are adapted. Structures are the rules and resources used to generate and support the system. Faithfulness refers to the extent to which a group uses the process or system in keeping with the spirit in which it was meant to be used. A faithful appropriation, therefore, involves adhering to the spirit of the method, while an ironic appropriation entails violations (Gopal et al. 1992-3).

Thus, in an AST context, collaborative programming can be depicted in an input-process-output framework similar to that used by Poole and DeSanctis (1989, 1990) in studying group support systems. The input variables include many of the group work dimensions described by McGrath's (1984) typology: individual differences, group size and the type of task. The process can be characterized by the modes of appropriation defined in AST: faithfulness of appropriation, attitudes towards the collaborative programming method and the level of consensus on appropriation (Gopal et al. 1992-3).

Distributed Cognition

Past research on collaborative programming (Flor and Hutchins 1991) uses the theory of distributed cognition as a way of describing why performance outcomes may be enhanced. The traditional view of cognition maintains that problem solving is exclusively an internal phenomenon (Salomon 1993) that is best explained in terms of information processing at the individual level (Rogers 1997). One alternative view of cognition that has been gaining interest over the last decade is distributed cognition. Originally conceptualized by Flor and Hutchins (1991), distributed cognition represents a new paradigm for rethinking all domains of cognition (Greenberg and Dickleman 2000).

According to Greenberg and Dickleman (2000), distributed cognition refers to the representation of knowledge both inside the heads of individuals and in the world, and the propagation of knowledge between individuals and artifacts. Flor and Hutchins (1991) propose that cognition should be looked at as a distributed phenomenon: how knowledge is represented both internally (inside one's head) and in the world (environment, culture, social interactions); the transmission of knowledge between different individuals and artifacts; and the transformations through which external structures go when acted upon by individuals and artifacts. By studying cognition in this way, it is hoped that an understanding is gained as to how intelligence is exhibited at the systems level, rather than at the individual cognitive level.

The primary emphasis of distributed cognition is on understanding the coordination of thinking among individuals and artifacts. In this context, Flor and Hutchins (1991) study how two programmers coordinate the task of software maintenance between them, utilizing distributed cognition to explain their behavior.


Nardi (1996) notes that distributed cognition is concerned with representations both inside and outside the mind. Because of this focus on both internal and external representations, much attention is paid to studying these representations. Past studies look at finely detailed analyses of particular artifacts (Norman 1988, Hutchins 1995) or at finding stable design principles that are widely accepted across design problems (Norman 1988, 1991).

Hutchins (1995) studies distributed cognition in the context of his observations of the communication between U.S. sailors as they use the necessary tools to navigate a ship. By documenting and describing the sailors' use of tools, as well as their social interactions, a number of principles emerge: cognition is mediated by tools; the critical role of tool mediation in cognition means that cognition is rooted in the artificial; and cognition is a social affair that involves delicate variations and shades of communication, learning and interpersonal interactions. Nardi (1998), drawing upon Hutchins's work, stresses the importance of functional systems, or systems that are made up of a person's or group's interaction with the tool. Tools may have a variety of meanings: computer simulations, counting on one's fingers, or closing one's eyes when trying to remember something. Thus the social system becomes an important unit of analysis (Flor 1994).

According to Hutchins and Holland (1999), if the fundamentals of distributed cognition are applied to observations of human activity in its natural state (such as how individuals do their jobs on a daily basis), at least three kinds of cognitive processes become clear. First, cognitive processes may be distributed among the members of a social group. Second, cognitive processes may involve coordination between internal and external (environmental and / or material) structure. And third, cognitive processes may be distributed through time in such a way that the end results of earlier events can change the nature of the events that come later.

Greenberg and Dickleman (2000) believe that distributed cognition enhances or enables performance. In their view, if one believes that cognition is distributed, one would agree that the individual, tools and artifacts, values and rules, social and communication interactions and even the work environment constitute a complex, interacting system. Salomon (1993) believes that the goals of cultivating both a partnership, as well as individual capability, suggest a performance environment designed to foster a community of performers. Expertise becomes distributed in ways that provide an impetus for mutual appropriation (Brown et al. 1993). By creating a knowledge community, knowledge sharing, training and performance support are enhanced (Greenberg and Dickleman 2000).

According to Rogers and Ellis (1994), four areas require analysis for knowledge transitions within the system under examination: the work environment structures and work practices; the changes within the representational media; the interactions of the individuals with each other; and the interactions of the individuals with the system artifacts.


As such, collaborative programming has much promise for the application of distributed cognition theory, as it may enhance our understanding of how individuals work together at the same time, in the same environment, to solve common problems (Hewitt and Scardamalia 1996). Of particular interest are the study of how cognitive processes may be distributed through time in a way that the end results of earlier events can change the nature of the events that occur later (better quality code) and organizational learning.

Satisfaction

Organizational research often focuses on satisfaction. Satisfaction has been defined broadly as an individual's general attitude toward his or her job. A narrower definition offered by Robbins (1998, p. 25) is the difference between the amount of rewards workers receive and the amount they believe they should receive. This definition takes into account a variety of key elements that impact satisfaction, among them mentally challenging work, a supportive work environment and theories related to personality-job fit. According to Holland's (1987) theory of personality fit, a high degree of fit between an individual's personality and occupation results in higher satisfaction. Persons with personality types congruent to their chosen vocations are more likely to be successful and have a greater probability of high satisfaction in their work.

The importance of individual satisfaction as it relates to job performance is somewhat questionable, however (Vroom et al. 1985). Satisfaction has consistently been related to absenteeism, i.e., moderately correlated at +0.40 (Locke 1984, Hackett and Guion 1985, Hackett et al. 1988, Petty et al. 1984). In an organizational context this construct is of importance to this study, as organizations strive to reduce developer turnover.


Chapter Three

Research Design and Methodology

This chapter describes the research design and methodology used in the study. It is organized as follows: the first section describes the overall research approach; the second section describes the research model; the third section describes the research methodology; and the fourth section discusses measurement.

Research Approach

A multi-phase research design is used in this dissertation. Three studies are conducted to explore the individual developer differences, developmental setting variations, collaborative methods and process differences that impact collaborative programming performance outcomes. The study results further our understanding of collaborative programming methods and of the other factors that influence performance outcomes.

High Level Research Model

The high-level research model (including the related constructs) used in this dissertation and in each of the three studies is shown in Figure 3.1.

Figure 3.1 High Level Research Model: individual characteristics (cognitive ability, conflict handling style), developmental setting (face-to-face, virtual), collaborative method (pure pair programming, variations of pair programming) and processes during development (faithfulness to method, task conflict, distributed cognition), linked to performance outcomes (pair task performance, individual task performance, individual satisfaction with method)

The model is based on a number of sources, including Jex's summary of the most important individual characteristics that impact performance outcomes and Hackman and Oldham's complete integrated job characteristics model (presented in Chapter 2).


A summary of the variables used in each of the three studies is now presented. Study 1 (presented in Chapter Four) includes the following individual characteristics (covariates): cognitive ability, conflict handling style and years of IT experience. The face-to-face developmental mode and the collaborative method of pure pair programming are used. Processes during development are directly observed in Study 1: faithfulness to method, task conflict and distributed cognition. Performance outcomes are pair task performance and individual developer satisfaction with the method.

The variables in Study 2 (detailed in Chapter Five) are now presented. The individual characteristics included are cognitive ability, conflict handling style and years of IT experience. In this study, the developmental modes are manipulated and include face-to-face and virtual work settings. The collaborative method used is pure pair programming. The process during development is faithfulness to method, which is self-reported by subjects. Performance outcomes are pair task performance and individual developer satisfaction with the method.

In Study 3 (presented in Chapter Six), the individual characteristics included are cognitive ability and years of IT experience. In this study, the impact of variations on the collaborative method (pure pair programming) on performance outcomes is explored. The variations to the method are structured problem solving (the use of test cases) versus unstructured problem solving (brainstorming), and collaborative versus non-collaborative development. The process during development is faithfulness to method, which is self-reported by subjects. Performance outcomes are individual task performance and individual developer satisfaction with the method.

Research Methodology

An overview of the three research studies is now presented. Studies 1, 2 and 3 are discussed in depth in Chapters Four, Five and Six, respectively.

Study 1

Study 1 is an in-depth process analysis with twelve pairs of developers programming collaboratively (pair programming). This qualitative research focuses on how individual developer differences and processes during development impact collaborative programming performance outcomes. Specifically, Study 1 is composed of two distinct investigations. First, we explore how task conflict impacts the collaborative software development process and performance outcomes, i.e., dyadic task performance and individual satisfaction with the method. Second, we analyze how distributed cognition impacts collaborative software development (pair programming) performance outcomes.

Subjects completed a series of instruments designed to measure individual differences. All subjects received training in the collaborative programming technique. Subjects were assigned to pairs and asked to complete three experimental tasks. Three tasks were used to give the pairs time to become accustomed to the pair programming setting, to jell with their partners, and to vary the difficulty of the tasks.


Pseudocode was used in each task (to deal with unknown differences within pairs on specific programming languages), and participants were asked to follow the test-first, code-later sequence in completing the experimental programming tasks. After completing the study tasks, subjects completed a series of instruments designed to measure process differences and satisfaction. Subjects were also audio- and videotaped while performing the experimental programming exercises.

In order to investigate the impact of task conflict on collaborative programming (pair programming), the researchers viewed the audio and videotapes of the developers as they worked together on each programming task. The process results are based on independent analyses of the developer interactions, scored using a pre-established rating form which measured faithfulness to the method and the type and amount of conflict. Performance on task was based on the correctness of the test cases and code for the experimental tasks. Two raters evaluated all experimental tasks. A detailed description of the experiment is presented in Chapter Four.

In order to investigate the impact of distributed cognition on collaborative programming (pair programming), audio tapes were transcribed and a coding scheme was developed by the researcher in order to measure this construct. The first step in the coding process was to identify episodes of distributed cognition. Each episode was then coded at two levels: one that describes what the pair was doing during that episode, and one that identifies the nature of the distributed cognition. Two raters were utilized to evaluate the impact of distributed cognition. Performance on task was based on the correctness of the test cases and code for the experimental tasks. Two raters evaluated all experimental tasks. A detailed description of the experiment is presented in Chapter Four.

Study 2

Forty-two pairs participated in a laboratory study of collaborative software development (pair programming). This research is an initial attempt to investigate the impact of developmental setting on collaborative programming results. It also represents a continued exploration of the impact of individual developer differences and process differences on collaborative programming outcomes, i.e., task performance and developer satisfaction.

The researchers randomly assigned classes of students to one of two treatment groups: face-to-face or virtual. Subjects completed a series of instruments designed to measure individual differences and received appropriate training in the collaborative programming method. Pairs who were assigned to the virtual treatment group received the additional training needed to work in this developmental setting. Within each treatment group, the researcher randomly assigned participants to work together in pairs on three experimental programming tasks. Three tasks were used to give the pairs time to become accustomed to the pair programming setting, to jell with their partners, and to vary the difficulty of the tasks.


Pseudocode was used in each task (to deal with unknown differences within pairs on specific programming languages), and participants were asked to follow the test-first, code-later sequence in completing the experimental programming tasks. After completing experimental Tasks II and III, subjects completed a series of instruments designed to measure faithfulness to the process during development and individual satisfaction. Performance on task is based on both the number of correct test cases and the correct code produced (content and sequence) for each programming task: the greater the number of correct test cases and code, the higher the level of performance of the pair. Two raters were used to evaluate task performance. A detailed description of the experiment is presented in Chapter Five.

Study 3

One hundred and twenty (120) subjects participated in a laboratory study of collaborative software development. The primary focus of Study 3 is to investigate how variations, or adaptations, in collaborative programming (pair programming) impact performance outcomes. Specifically, we explore the impact of structured problem solving (test cases) and unstructured problem solving (brainstorming) development methods on performance outcomes, as well as the impact of collaboration on performance outcomes. We also investigate how these variations in the developmental method impact the processes used during development. A detailed description of the experiment is presented in Chapter Six.

Measurement

Measurement of the constructs and the variables of interest included in the model is now summarized. First the covariates and independent variables are discussed, followed by a discussion of the process and performance measures. Finally, the experimental tasks are presented.

Demographics

To assess the demographic variables, subjects are asked to provide the following information on the initial questionnaire: age, gender, IT programming experience, known programming languages and IT positions held.

Cognitive Ability

Individual cognitive ability is measured utilizing the Wonderlic Personnel Test (WPT). The WPT comprises 50 questions administered in a timed 12-minute period. Raw scores are adjusted for age. This test was chosen because it has demonstrated reliability (test-retest reliabilities range from .82 to .94) and validity, and because it is widely used by business and governmental organizations to evaluate job applicants for employment and occupational training programs (Wonderlic 1999).


Empirical evidence suggests that cognitive ability has a strong positive linkage to performance. This covariate is measured in all of the studies included in this research.

Conflict Handling Style

Individual conflict handling style is measured utilizing the Rahim Organizational Conflict Inventory (ROCI-II). The ROCI-II is comprised of 35 items distributed across five subscales that measure the integrating, obliging, dominating, avoiding and compromising styles of managing conflict. Each item has a 5-point scale (where 1 = strongly disagree; 5 = strongly agree), and the responses to the items within each scale are averaged. Higher scores on each subscale indicate a greater use of that style of managing conflict. The ROCI-II instrument thus enables measurement of an individual's mix of styles as well as his or her primary style. The test was chosen because it has demonstrated reliability (test-retest reliabilities range from .60 to .83) (Rahim 1983a) and is widely used in academic research on conflict. Prior research has shown that integrative conflict-handling styles have a positive link to problem solving and to performance outcomes.

In Studies 1 and 2, participants are also asked to self-assess their conflict handling style. Conflict handling style is not included in Study 3, since not all participants performed the experimental tasks collaboratively in this experiment.

Years of IT Experience

Years of IT job experience is measured by the number of years indicated by each subject in the initial questionnaire. This covariate is chosen since empirical evidence suggests that experience has a linkage to performance. This covariate is measured in all of the studies included in this research.

Collaborative Method

In Studies 1 and 2, the collaborative method is pure pair programming (all work done in pairs, preparing test cases first and then writing code, with developers in the defined roles of navigator and driver). In Study 3, the collaborative method is varied as follows: 1) developers work collaboratively utilizing a structured problem solving method (test cases) and then write code alone; 2) developers work alone utilizing structured problem solving (test cases) and then write code alone; and 3) developers work collaboratively utilizing an unstructured problem solving method (brainstorming) and then write code alone. Theories on collaborative work suggest a positive linkage to task performance outcomes, as well as to individual satisfaction. Brainstorming is also used for collaborative work; prior research has shown mixed results related to task performance, although higher levels of satisfaction have been reported when utilizing brainstorming for collaboration.
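Before turning to the process measures, a brief illustration of the subscale scoring used for the ROCI-II above may be helpful. The sketch below is a minimal illustration only: the item-to-subscale assignments shown are hypothetical and are not the actual ROCI-II scoring key; only the general pattern (1-5 item ratings averaged within each subscale, highest mean taken as the primary style) follows the description above.

    # Minimal sketch of subscale scoring of the kind described above.
    # The item-to-subscale mapping is hypothetical, not the ROCI-II key.
    responses = {"q1": 4, "q2": 5, "q3": 2, "q4": 3, "q5": 4, "q6": 2}
    subscales = {  # hypothetical assignment of items to styles
        "integrating": ["q1", "q2"],
        "obliging": ["q3", "q4"],
        "dominating": ["q5", "q6"],
    }
    # Each subscale score is the mean of its item responses (1-5 scale).
    scores = {style: sum(responses[q] for q in items) / len(items)
              for style, items in subscales.items()}
    primary_style = max(scores, key=scores.get)  # highest mean = primary style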


Faithfulness to Method

Faithfulness to method is based on the type and amount of interaction between partners and is measured in two ways. In Study 1, faithfulness to the method was measured by observation: utilizing a three-question, 5-point scale developed by the researchers (where 1 = not faithful; 5 = very faithful), participants were evaluated as to their faithfulness to the method during the development process. In Study 2, this scale was included in the questionnaires completed by the participants following the completion of the last two experimental tasks. Further adaptation of the scale was made as appropriate for Study 3. Additionally, the researchers adapted the Likert scale developed by Salisbury and Chin (2002) in their studies of consensus on appropriation for use in Study 3. Prior research has suggested that how methods are appropriated impacts performance outcomes.

Task Conflict

Task conflict during the collaborative programming process is measured based on the number of conflict episodes in each task. Prior research has demonstrated that low to moderate amounts of task conflict have favorable impacts on performance. Task conflict was measured in Study 1 by observation: the researchers recorded the number of times task conflict occurred during the programming task. In Study 2, task conflict was measured with a three-item questionnaire, utilizing a 5-point Likert scale developed by the researchers (where 1 = no conflict; 5 = more than 5 episodes of conflict). Participants reported little or no task conflict during collaborative processes. Due to the lack of findings in Study 2 and the short duration of the experiment in Study 3, task conflict is not included in Study 3.

Pair Task Performance

Pair task performance is measured in all studies utilizing a template developed by the researchers. Study 1 was conducted in two phases. In Phase 1 of Study 1, pair performance on task is based on the number of test case and code errors in each programming exercise: the more errors noted, the lower the performance of the pair. Two raters were used to evaluate pair task performance in Phase 1.

In Phase 2 of Study 1, pair task performance was based on the completed correct test cases and pseudocode produced (content and sequence) for each programming task by each pair: the greater the number of correct test cases and code, the higher the level of performance. Two raters were used to evaluate pair task performance.

In Study 2, pair task performance was likewise based on the completed correct test cases and pseudocode produced (content and sequence) for each programming task by each pair, with two raters used to evaluate pair task performance. Prior research has evaluated the quality of collaborative programming outcomes by the number of code errors, i.e., more errors, poorer quality; more correct code, higher quality.


Individual Task Performance

Individual task performance is measured in Study 3, based on the completed correct pseudocode produced (content and sequence) for each programming task by each subject: the greater the amount of correct code, the higher the level of performance. Two raters were used to evaluate individual task performance. Prior research has evaluated the quality of programming outcomes by the number of code errors, i.e., more errors, poorer quality; more correct code, higher quality.

Individual Satisfaction with the Method

Individual satisfaction with the method is measured in all studies. In Studies 1 and 2, individual developer satisfaction with the method is measured utilizing a 7-point Likert scale adapted from Venkatesh and Vitalari (1992) and Watson-Fritz et al. (1996). In Study 3, individual satisfaction with the method is measured utilizing a 7-point Likert scale adapted from McGrath (1988). Prior research has shown that developers working collaboratively on programming tasks have higher levels of satisfaction than developers working alone. Satisfaction has also been linked to staff retention.

Experimental Tasks

An overview of the experimental tasks is now presented. These tasks have been used in prior research. Three tasks were used in Studies 1 and 2 (Tasks I, II, and III) to give the pairs time to become accustomed to the collaborative programming setting and to jell with their partners, as well as to vary the difficulty of the tasks. Jelling is not part of Study 3, which includes two experimental tasks (Tasks II and III). Pseudocode was used in each task (to deal with unknown differences within pairs on specific programming languages), and participants were asked to follow the test-first, code-later sequence in completing all programming exercises.

Task I was designed to be a warm-up task. For Task I, subjects were given the pseudocode and test data sets and asked to check the module for accuracy. This required completion of the test data and additional coding.

Task II is a program module in which two discounts are computed for an invoice. Subjects were given the program specifications and asked to create the test data sets and write pseudocode. Task complexity derives from the interaction of the two discounts.

For Task III, subjects were asked to create a sales report. They were given the specifications and asked to create the test data sets and write the pseudocode. Task complexity derives from the need to sort and calculate data prior to output.
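To illustrate the test-first, code-later sequence the tasks required, the sketch below imagines a Task II-like module. It is a minimal sketch under stated assumptions: the discount rules, rates and thresholds are invented for illustration and are not the actual task specifications, and the studies themselves used pseudocode rather than a specific programming language.

    # Hypothetical Task II-style module: the test cases are prepared first,
    # then the code is written to satisfy them. Discount rules are invented.
    def invoice_total(amount, quantity):
        total = amount
        if quantity >= 10:      # illustrative volume discount: 5% off
            total *= 0.95
        if amount >= 1000:      # illustrative large-order discount: 10% off
            total *= 0.90
        return round(total, 2)

    # Test cases written before the implementation (test-first sequence);
    # note the last case exercises the interaction of the two discounts,
    # which is where the task's complexity is said to come from.
    assert invoice_total(500, 5) == 500.00     # no discount applies
    assert invoice_total(500, 10) == 475.00    # volume discount only
    assert invoice_total(1000, 5) == 900.00    # large-order discount only
    assert invoice_total(1000, 10) == 855.00   # both discounts interact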


Chapter Four

Study 1

Chapter Four describes an intensive process study that focuses on how the processes during development and individual developer differences impact collaborative programming (pair programming) performance outcomes. An overview of the study is presented, followed by a discussion of the research models, research hypotheses, data collection, data analysis and study results.

Overview

The primary focus of Study 1 is to investigate how the processes during development impact collaborative programming (pair programming) performance outcomes, utilizing a qualitative approach for data analysis. We also investigate the impact of individual developer characteristics on performance.

High Level Research Model

The high level research model used in this dissertation is shown in Figure 4.1.

Figure 4.1 High Level Research Model: individual characteristics (cognitive ability, conflict handling style), developmental setting (face-to-face, virtual), collaborative method (pair programming, variations of pair programming) and processes during development (faithfulness to method, task conflict, distributed cognition), linked to performance outcomes (pair task performance, individual task performance, individual satisfaction with method)

The underlying premise of Study 1 is that successful outcomes in collaborative software development (pair programming) are driven by a number of factors, including the processes used during development and the individual characteristics of the developers. Study 1 represents an initial attempt to gain an in-depth understanding of the collaborative programming (pair programming) process.


Study 1 is composed of two distinct investigations or phases. In Phase 1, we explore how individual developer differences and the processes used during development (faithfulness to the method and task conflict) impact the collaborative software development process and related performance outcomes. In Phase 2, we analyze how distributed cognition impacts collaborative software development (pair programming) pair task performance.

A laboratory experiment is conducted, in which developers are audio- and videotaped. This method is utilized to give the researchers a window through which to view the collaborative programming (pair programming) process. In Phase 1, pair task performance is measured as follows: correctness of the test cases produced by the programming dyad and correctness of the code produced by the programming dyad. Additionally, individual satisfaction with the method is measured. We also explore a number of other factors that are believed to impact successful programming outcomes in collaborative software development. These include faithfulness to the method and task conflict during development, as well as individual developer characteristics (cognitive ability, conflict handling style).

In Phase 2, we analyze how the process of distributed cognition impacts collaborative programming (pair programming) pair task performance. Pair task performance is measured as follows: correctness of the test cases produced by the programming dyad and correctness of the code produced by the programming dyad. The reasoning behind the selection of these constructs and variables, as well as details on these measures, is provided in Chapter Three.

Study 1 Research Models

Each study contained in this dissertation focuses on a different part of the high-level research model shown in Figure 4.1. The research model utilized in Phase 1 of Study 1 is shown in Figure 4.2. In Phase 1, we explore the impact of faithfulness to the method, task conflict and individual developer differences (cognitive ability and conflict handling style) on performance outcomes.


Figure 4.2 Study 1 Research Model: Phase 1 (individual characteristics: cognitive ability; processes during development: task conflict, faithfulness to method; developmental setting (controlled): face-to-face; collaborative method (controlled); performance outcomes: pair task performance, satisfaction with the method)

In Phase 2 of Study 1, we investigate how distributed cognition within the dyad during development impacts task performance outcomes. The research model utilized in Phase 2 of Study 1 is shown in Figure 4.3.

Figure 4.3 Study 1 Research Model: Phase 2 (individual characteristics: cognitive ability; processes during development: distributed cognition; developmental setting (controlled): face-to-face; collaborative method (controlled); performance outcomes: pair task performance)

High Level Research Question

The primary research question addressed in Study 1 is as follows: Within the context of the collaborative programming technique, how do individual developer characteristics and the processes used during collaborative programming impact performance outcomes?


Data analysis is conducted in two phases for Study 1. In order to increase the sample size for Study 1, additional data were collected subsequent to the completion of Phase 1.

Study 1 Phase 1

As previously mentioned, in Phase 1 we explore how individual developer differences and the processes used during development (faithfulness to the method and task conflict) impact the collaborative software development process and performance outcomes. The Phase 1 research questions are presented, followed by a discussion of data collection, data analysis and study results.

Research Questions

The specific research questions addressed in Phase 1 of Study 1 are:

RQ1: Will developers with higher cognitive ability have higher performance outcomes?

RQ2: Will developers with more integrative conflict handling styles have higher performance outcomes?

RQ3: Does faithfulness to the collaborative process positively impact performance outcomes?

RQ4: Does task conflict during development impact performance outcomes?

Research Design

We conducted an intensive process study in a laboratory setting at a university located in the southern United States. In Phase 1 of Study 1, seven pairs (14 subjects) participated in the quasi-experiment. (It should be noted that additional data were collected for Phase 2 of Study 1.) The participants were part-time undergraduate and graduate MIS students who were given a monetary incentive or extra credit for participation in the study. We allowed subjects to self-select into pairs where possible; otherwise, pairs were assigned at random. Subjects were randomly assigned to the role (driver or navigator) that they would assume during the experimental tasks. These roles remained constant for the first two tasks; partners switched roles for the last collaborative exercise. All subjects were assigned three experimental tasks: Task I, Task II and Task III. Task I was designed to be a warm-up task. Two other tasks were included in the experiment in order to vary the difficulty of the tasks and to allow for jelling.

Data Collection

As previously mentioned, data collection for Study 1 was done in two phases. Prior to beginning the research, we conducted a pilot study of all instruments and experimental tasks. Pairs of programmers were studied in the laboratory over a 4-week time frame. Each session took place in one day, over four hours.

Each day of the study, the session began with a team building activity and an introduction to the study. Participants read and signed an Informed Consent Form (all study procedures and materials have been reviewed by our Institutional Research Review Board).


Subjects completed measures of general cognitive ability and of their conflict handling style. Training in the collaborative programming technique (pair programming) followed. Pairs of subjects were then assigned to a computer lab. As previously mentioned, we allowed subjects to self-select into pairs where possible; otherwise, pairs were assigned at random. Subjects were randomly assigned to the role (driver or navigator) that they would assume during the experimental tasks. These roles remained constant for the first two tasks; however, partners switched roles for the last collaborative exercise. Subjects were audio- and videotaped while working on the study tasks. Subjects were given the experimental tasks in both hard copy and electronic form, but were asked to save all final work on a diskette.

Three tasks were used to give the pairs time to become accustomed to the pair programming setting and to jell with their partners, as well as to vary the difficulty of the tasks. Pseudocode was used in each task (to deal with unknown differences within pairs on specific programming languages), and participants were asked to follow the test-first, code-later sequence in completing the programming exercises. Twenty minutes was given to complete Task I, which was designed to be a warm-up exercise, while one hour was allotted for the completion of each of the two remaining programming assignments. Following the completion of Tasks II and III, subjects were instructed to save all work and complete a questionnaire on individual satisfaction. Subjects were debriefed at the end of the session.

Subject Demographics

The participants in Phase 1 of Study 1 had a mean age of thirty-one years and a mean of six years of work experience. Subjects also had both knowledge of multiple programming languages and industry experience in programming. Subject demographics are found in Figure 4.4.

Age: 31 (mean)
Sex: 4 females, 10 males
Years of IT work experience: 6 (mean)
Years of programming experience: 6 (mean)
Programming languages: C, C++, Java, Pascal, VB, HTML, Fortran, Cobol
Professions: consultant, web designer, help desk, developer / team leader, maintenance, hardware / software tech, student (primarily part-time)

Figure 4.4 Subject Demographics

Measures

Measures are discussed in detail in Chapter Three. The results for the individual differences (cognitive ability and conflict handling style) are found in Figure 4.5. There is variation across subjects in both cognitive ability and conflict handling style. The mean WPT score for all programmers is 29 (Wonderlic 1999); in this study the mean WPT score was 28, with scores ranging from 17 to 36.
Subject Demographics

The participants in Phase 1 of Study 1 had a mean age of thirty-one years and a mean of six years of work experience. Subjects also had both knowledge of multiple programming languages and industry experience in programming. Subject demographics are found in Figure 4.4.

Age: 31 (mean)
Sex: 4 females, 10 males
Years of IT work experience: 6 (mean)
Years of programming experience: 6 (mean)
Programming languages: C, C++, Java, Pascal, VB, HTML, Fortran, Cobol
Professions: Consultant, Web Designer, Help Desk, Developer / Team Leader, Maintenance, Hardware / Software Tech, Students (primarily part-time)

Figure 4.4 Subject Demographics

Measures

Measures are discussed in detail in Chapter Three. The results for individual differences (cognitive ability and conflict handling style) are found in Figure 4.5. There is variation across subjects in both cognitive ability and conflict handling style. The mean WPT score for all programmers is 29 (Wonderlic 1999); in this study the mean WPT score was 28, with scores ranging from 17 to 36. Additionally, self-assessed conflict handling style varied between subjects. It should be noted that all subjects ranked themselves highest on the integrating style for handling conflict. This tendency to evaluate one's self as integrative is reflected in the norms for this measure. However, it is not necessarily true that others would agree with these self-assessments.

Cognitive Ability
  All Programmers: 29 (mean)
  Study Subjects: 28 (mean); 17 to 36 (range)

Conflict Handling Style (1 to 5 scale)
  Style          Range        Mean   Sample ROCI-II for One Subject
  Integrating    2.9 to 4.9   4.1    4.3
  Avoiding       1.0 to 4.6   3.0    1.9
  Dominating     1.6 to 4.6   3.2    3.4
  Obliging       2.8 to 4.3   3.4    3.0
  Compromising   2.8 to 4.3   3.5    3.8

Figure 4.5 Descriptive Statistics: Individual Differences

In order to measure the collaborative process, one of the researchers and an assistant viewed the audio and videotapes of the developers as they worked together on each programming task. The process results were based on independent analyses of the interactions, and were scored using a pre-established rating form developed by the researchers. Inter-rater reliability varied by pair (75% to 100%), and is based on the percentage of agreement for each item rated. The lower inter-rater reliability (75%) reflects differences between the raters in the amount of interaction considered as a single episode of conflict.
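As an illustration of the percent-agreement statistic used for inter-rater reliability here, the sketch below compares two raters' scores item by item; the ratings are invented for illustration.

    # A minimal sketch of percent agreement, assuming invented ratings.
    rater_1 = [3, 4, 4, 2, 5, 3, 4, 1]
    rater_2 = [3, 4, 5, 2, 5, 3, 4, 2]

    matches = sum(a == b for a, b in zip(rater_1, rater_2))
    agreement = matches / len(rater_1)
    print(f"percent agreement = {agreement:.0%}")  # 6 of 8 items match: 75%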
Conflict during the collaborative programming process was measured based on the number of conflict episodes in each task, as well as the type of conflict present in each episode (task or relationship), the conflict handling style exhibited by each participant (integrating, obliging, dominating, avoiding, compromising) and if and how conflict was resolved. Faithfulness to the pair programming method was measured based on the amount of interaction between partners (equal vs. dominant). Each rating item has a 5-point scale (where 1 = Not Faithful; 5 = Very Faithful). Additionally, the work patterns were measured as either: (a) read the task first, then planned and worked together throughout; (b) read the task and did preliminary work alone, then combined; or (c) divided the task and worked separately. Figure 4.6 outlines the descriptive statistics for the process variables.

                                 Task I     Task II    Task III
Amount of Interaction (Range)    2 to 5     2 to 5     1.5 to 4.5
                      (Mean)     4          3.9        3.6
Work Patterns:
  Equal or Dominant: had both, within and across pairs
  Test First, Code Later: only 1 pair did not follow, for Task I only
Conflict Episodes     (Range)    1 to 4     1 to 12    1 to 6
                      (Mean)     1.1        5.2        2.8

Figure 4.6 Descriptive Statistics: Process Analysis

Performance outcomes measured included the number of test case and coding errors made by each pair on each task, and individual developer satisfaction with the method. Satisfaction was measured utilizing a 7-point Likert scale adapted from Venkatesh and Vitalari (1992) and Watson-Fritz et al. (1996). Pair task performance was based on the completed correct test cases and pseudocode produced (content and sequence) for each programming task by each pair. The greater the number of correct test cases and code, the higher the level of performance. Two raters were used to evaluate pair task performance. As previously mentioned, satisfaction with the method was based on the self-assessments of each developer.

Descriptive statistics for the performance outcomes are found in Figure 4.7. As is evident from the data, there was variation in outcomes within and across subjects. Satisfaction with the method was not measured after Task I, since it was a warm-up exercise.

                                 Task I     Task II    Task III
Correct Test Cases    (Range)    1 to 7     0 to 9     0 to 9
                      (Mean)     3.8        6.5        4.2
Correct Code          (Range)    2 to 7     0 to 9     1 to 3
                      (Mean)     4.7        3.5        2.3
Satisfaction          (Range)    N/A*       4.8 to 7   4.8 to 7
                      (Mean)     N/A*       6.2        6.2
*Not measured

Figure 4.7 Descriptive Statistics: Performance Outcomes

Data Analysis

The first phase of the data analysis is an evaluation of the posited direct effects on performance from cognitive ability, conflict handling style, episodes of task conflict and faithfulness to the method. Table 4.1 reports the ranked order of the pairs on these variables, as well as the results of the Spearman's rank correlation analysis of the correlation between each independent variable and performance. (For one of the pairs there is no process data available because of technical recording problems.) The performance ranks shown here are an aggregate score (correct test cases and correct code) across the three tasks, and the cognitive ability rank is based on the average Wonderlic score of the two developers in each pair.
Satisfaction with the method is not included in this analysis because the range of combined scores is so small.

Pair  Performance  Cognitive Ability  Self-Report Style           Observed Style              Task Conflict  Faithfulness
A     23.5         2                  Integrating / Integrating   Integrating / Dominating    2              Tie 3
B     22           7                  Integrating / Integrating   (no data)                   (no data)      (no data)
C     12           3                  Avoiding / Integrating      Avoiding / Dominating       1              2
D     19           1                  Integrating / Integrating   Integrating / Dominating    4              1
E     27           5                  Integrating / Integrating   Integrating / Obliging      3              Tie 4
F     33           4                  Dominating / Integrating    Dominating / Integrating    5              Tie 3
G     32           6                  Dominating / Dominating     Integrating / Integrating   6              Tie 4

Spearman's r                .679                                                              .686           -.229
Critical value (α = .05)    .714 (N = 7)                                                      .829 (N = 6)    .829 (N = 6)

Column notes: Performance = aggregate performance across all tasks (correct test cases and code; higher rank is better); Cognitive Ability = higher score is higher ability; conflict handling styles are shown as Subject 1 / Subject 2; Task Conflict = episodes of task conflict (higher rank is low conflict); Faithfulness = faithfulness to the method (higher rank is more faithful).

Table 4.1 Correlations between Performance and Cognitive Ability, Episodes of Task Conflict, and Faithfulness to the Methodology

Spearman's rank correlation is a nonparametric measure that is based on the differences in rank between subjects and ranges from -1 to +1. None of the relationships posed in the research questions is significant at α = .05. However, both cognitive ability and the number of episodes of task conflict have a positive relationship with performance. As a result, further interpretive analysis of the data was done in a second phase of the analysis.
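To make the statistic concrete, the sketch below computes Spearman's rho for two hypothetical rank vectors with the same N as this phase of the study; the values are invented, not the study data, and scipy is assumed only as a convenient stand-in for the statistics package actually used.

    # A minimal sketch of the rank-correlation test, using invented ranks.
    from scipy.stats import spearmanr

    performance_rank = [4, 3, 1, 2, 5, 7, 6]   # hypothetical, one value per pair
    cognitive_rank   = [2, 7, 3, 1, 5, 4, 6]

    rho, p_value = spearmanr(performance_rank, cognitive_rank)
    print(f"Spearman's rho = {rho:.3f}, p = {p_value:.3f}")

    # With N = 7, |rho| must exceed the critical value .714 to be
    # significant at alpha = .05 (two-tailed).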
Detailed summaries of the performance by pair for each task are found in Table 4.2 through Table 4.5. (Higher scores are better in each table.)

Pair  Correct Test Cases  Correct Code  Aggregate Score
A     11.5                12            23.5
B     13                  9             22
C     5                   7             12
D     11.5                7.5           19
E     19                  8             27
F     15                  18            33
G     21                  11            32

Table 4.2 Performance by Pair, Aggregate Score: All Tasks

Pair  Correct Test Cases  Correct Code  Aggregate Score
A     5                   7             12
B     7                   2             9
C     5                   2             7
D     1                   2             3
E     3                   7             10
F     2                   7             9
G     9                   0             9

Table 4.3 Performance by Pair, Aggregate Score: Task I

Pair  Correct Test Cases  Correct Code  Aggregate Score
A     5                   2             7
B     1                   4             5
C     0                   3             3
D     8                   3             11
E     7                   0             7
F     9                   9             18
G     9                   8             17

Table 4.4 Performance by Pair, Aggregate Score: Task II
Pair  Correct Test Cases  Correct Code  Aggregate Score
A     1.5                 3             4.5
B     5                   3             8
C     0                   2             2
D     2.5                 2.5           5
E     9                   1             10
F     4                   2             6
G     3                   3             6

Table 4.5 Performance by Pair, Aggregate Score: Task III

The second step in the data analysis is to analyze the results for patterns of relationships between the study variables, in particular for those pairs whose performance ranked either very high or very low (Pairs G and C). Some interesting patterns emerge.
Measures                      Pair G                                 Pair C

Individual Differences
  Cognitive ability           Both subjects at or above the          Both subjects at or close to the
                              programmer mean                        programmer mean
  Self-assessed conflict      Both subjects scored high on the       One subject scored high on the
  handling style              dominating style                       avoiding style

Process Differences
  Faithfulness to method      Highest faithfulness; equal            Low faithfulness; dominance by one
  (amount and type of         influence                              subject, low interaction by the
  interaction; test first,                                           other subject
  then code)
  Task conflict (number of    Low conflict; resolved;                Escalation to very high conflict by
  episodes; resolution;       integrating / integrating              Task III; resolved by withdrawal;
  observed handling style)                                           dominance / withdrawal

Performance Outcomes*
  Task I                      Medium                                 High
  Task II                     High                                   Low
  Task III                    High (for testing; coding missing)     Lowest
  Satisfaction                High                                   Lowest (subject with the high
                                                                     avoidance score)

*Number of correct test cases and code; a higher rate equates to higher performance.

Table 4.6 Data Analysis Results by Selected Pairs

As illustrated in Table 4.6, all subjects had cognitive abilities that approximated or were higher than the population parameter for programmers (Wonderlic 1999); however, Pair G had consistently high performance outcomes for all experimental tasks, while Pair C had performance outcomes that declined consistently as the experiment progressed (high for Task I, low for Task II and the lowest for Task III, as compared to all performance outcomes). This suggests that cognitive ability alone does not account for performance, and that examination of Pair C's processes may reveal important additional factors.

Pair G had high faithfulness to the method, with equal influence of partners, as well as low rates of task conflict that were resolved. Pair C, however, had low faithfulness to the collaborative method, with one subject progressively dominating the other partner throughout the experiment. This work pattern, combined with the high conflict avoidance score of the dominated partner, led to high task conflict, resolved by the escalated withdrawal of this subject. As a result, the performance outcomes suffered. Additionally, the
participant with the high avoidance score reported low satisfaction ratings for the pair programming exercises. Table 4.7 summarizes the research questions and related findings.

RQ1: Will developers with higher cognitive ability have higher performance outcomes?
Finding: No statistically significant relationship. Most subjects had high cognitive ability; performance outcomes were moderated by faithfulness to the method and conflict.

RQ2: Will developers with more integrative conflict handling styles have higher performance outcomes?
Finding: No statistically significant relationship. One pair of subjects with an observed avoiding / dominating match-up was not as effective relative to performance outcomes.

RQ3: Does faithfulness to the collaborative process positively impact performance outcomes?
Finding: No statistically significant relationship. One pair of subjects who had high faithfulness to the collaborative process had higher performance outcomes.

RQ4: Does task conflict during development impact performance outcomes?
Finding: No statistically significant relationship. One pair of subjects who had high levels of conflict during the collaborative process had lower performance outcomes.

Table 4.7 Phase 1 Summary of Research Questions and Findings

Conclusions

Phase 1 of Study 1 contributes to the research on innovative software methodologies, and specifically on collaborative development (pair programming), in a number of ways. First, it makes an initial attempt to explore why this technique results in better outcomes, specifically fewer errors and higher developer satisfaction. Although there were no statistically significant relationships, a detailed look at the patterns of performance across all tasks suggests that cognitive ability and faithfulness to the methodology are related to development success. Additionally, the roles of conflict and interpersonal conflict handling styles are observed in at least one pair. The study demonstrates that high levels of task conflict and less cooperative conflict handling styles negatively impact performance. Specifically, those individuals who have high-avoidance conflict management styles may not produce high levels of performance when paired with dominators. And finally, the research also offers management an initial glimpse of potential strategies for staffing collaborative development so as to maximize performance.

Limitations

An inherent limitation of the study is the low number of participants. Further work on these research questions is included in a laboratory study with larger numbers of pairs (Study 2). In that experiment, we also manipulate the developmental setting, as well as collect additional data using this process-focused methodology. Some of the measures included in the study were self-reported. Subjects were allowed only short periods of time to complete the experimental programming tasks. And finally, since subjects were audio- and videotaped, their behavior may not be representative of their behavior in a non-contrived setting.
Study 1 Phase 2

In Phase 2 of Study 1 we test distributed cognition theory in the program development context and explore whether or not how developers work together during pair programming explains the improved task performance outcomes reported for this agile method. The Phase 2 research question is presented, followed by a discussion of data collection, data analysis and study results.

Research Question and Hypotheses

The specific research question addressed in Phase 2 of Study 1 is:

Do developer dyads produce better programs when they (a) make program requirements concrete and visible and (b) communicate via positive perspective making and perspective taking?

Rogers (1997) posits that cognition is best explained in terms of information processing at the individual level, and traditionally cognition has been thought of as the problem solving ability of the individual. An alternative view of cognition that has gained interest over the last decade is distributed cognition. Conceptualized by Flor and Hutchins (1991), distributed cognition may be thought of as a new paradigm for the traditional view of cognition (Greenberg and Dickleman 2000).

Distributed cognition refers to the representation of knowledge both inside the head of an individual and in the world, and the propagation of knowledge between individuals and artifacts (Greenberg and Dickleman 2000). Central ideas of this theory are that collaborative work is more effective when individuals represent their task knowledge in a concrete, visible form (Nardi 1996) and when knowledge is transmitted between individuals in a truly collaborative way. True collaboration is evidenced when individuals offer their knowledge and expertise (termed perspective making) and it is received and appropriated by the other individual(s) (termed perspective taking) (Brown et al. 1993, Flor and Hutchins 1991, Greenberg and Dickleman 2000).

In a study of developer dyads working on a software maintenance task, Flor and Hutchins (1991) found a relationship between performance and communication among developers that demonstrated key distributed cognition dimensions: sharing goals, sharing memories, and expansion of search alternatives. Distributing work across groups of agents (as in the programming dyad) requires coordinated activity through some form of communication, such as language or the transmission of artifacts (Hutchins 1995, Perry 1997). In the case of collaborative programming, these cognitive artifacts are represented by test cases and code.

Since an important goal of collaborative programming is higher quality code, the quality of task outcomes is the dependent variable.
In studies of collaborative programming, the quality of performance has typically been viewed as accuracy, measured by fewer errors in the code produced (e.g., Domino et al. 2003).

There are two primary reasons, according to distributed cognition, why pair programming should be an effective development technique. First, test cases are concrete, visual representations of how a program should process data, and as such they are more easily shared than abstractions. Each test case represents a kind of narrative of a single operation of the program (an event, a situation). According to Perry (1997), narrative is a fundamental mode of human cognition that is as powerful as more abstract information processing modes. Thus, we hypothesize that:

H1: Developer dyads that create more correct test cases will create more accurate programs than dyads that create few or no test cases.

Second, the pair programming method, when faithfully employed, means that each developer shares his/her knowledge about the task (perspective making) and that his/her partner then reacts appropriately (perspective taking). These reactions may be statements of agreement, encouragement or appreciation of the perspective, elaboration on the idea, an expression of appreciation of the pair's mutual dependency, or disagreement. Negative communications during collaboration include statements that express domination or control over the other person regarding the rightness of one's own perspective, and failure to react to a perspective offered by the partner. Thus, we hypothesize that:

H2: Developer dyads who communicate while working on the task with more sequences of positive perspective making and perspective taking will create more accurate programs than dyads who either have more negative communications or do not communicate (one person does the work while the other watches).

Research Design

In Phase 2, we continued the intensive process study (described in Phase 1) in a laboratory setting at a university located in the southern United States. As stated earlier, additional data was collected for Phase 2 of Study 1. A number of the subjects included in Phase 1 of Study 1 were also included in Phase 2 of Study 1. Six pairs (12 subjects) participated in Phase 2 of the quasi-experiment. The participants were part-time undergraduate and graduate MIS students. We allowed subjects to self-select into pairs where possible; otherwise, pairs were assigned at random. Subjects were randomly assigned to the role (driver or navigator) that they would assume during the experimental tasks. These roles remained constant for the first two tasks; partners switched roles for the last collaborative exercise. All subjects were assigned three experimental tasks: Task I, Task II and Task III. Task I was designed to be a warm-up task. Two other tasks were included in the experiment in order to vary the difficulty of the tasks and allow for jelling.
Data Collection

As previously mentioned, data collection for Study 1 was conducted in two phases. Subsequent to the completion of Phase 1, additional data was collected in order to increase the sample size of Study 1. In Phase 2, the same experimental protocol that had been used in Phase 1 was followed, as elaborated upon in Phase 1 of Study 1.

Subject Demographics

The researchers selected six pairs at random for inclusion and analysis in Phase 2 of Study 1. The participants in Phase 2 of Study 1 had a mean age of thirty-nine years and approximately six years of work experience. Subjects also had both knowledge of multiple programming languages and industry experience in programming. Subject demographics are summarized in Figure 4.8.

Age: 39 (mean)
Sex: 2 females, 10 males
Programming languages: C, C++, Java, Pascal, VB, HTML, Fortran, Cobol
Professions held: Consultant, Web Designer, Systems Analyst, Senior Systems Analyst, Programmer, Graduate Students (primarily full-time)

Figure 4.8 Subject Demographics

Measurement

Measures are discussed in detail in Chapter Three. The results for individual differences (cognitive ability and years of IT experience) are found in Figure 4.9. There is variation across subjects in cognitive ability. The mean WPT score for all programmers is 29 (Wonderlic 1999); in this phase of Study 1 the mean WPT score was 33, with scores ranging from 17 to 39. The mean years of IT experience was approximately 6 years, with a range of no experience to 14 years.

Cognitive Ability
  All Programmers: 29 (mean)
  Study Subjects: 33 (mean); 17 to 39 (range)

Years of IT Experience
  Study Subjects: 5.7 (mean); 0 to 14 (range)

Figure 4.9 Descriptive Statistics: Individual Differences

Pair performance on task was based on the correct test cases and code produced in each programming task. A scoring template was developed by the researchers to rate the programming outcomes. A score of 1 to 10 was possible on both test cases and code for each programming task. The more complete and accurate the code, the higher the level of performance of the pair. Two independent raters evaluated all test cases and code for accuracy (inter-rater reliability = 90%).
Descriptive statistics for the pair task performance outcomes are found in Figure 4.10. As is evident from the data, there was variation in outcomes within and across subjects. Task I was not included in the analysis, as it was designed to be a warm-up task.

                                 Task II    Task III
Correct Test Cases    (Range)    8 to 9     0 to 10
                      (Mean)     8.7        4.1
Correct Code          (Range)    2 to 10    2 to 9
                      (Mean)     5.8        5.2

Figure 4.10 Descriptive Statistics: Performance Outcomes

Distributed cognition is the primary focus of Phase 2 of Study 1. A coding scheme was developed by the researchers to explore the impact of distributed cognition on pair performance. In this coding, the first step is to identify expressions or passages that can be defined as episodes of distributed cognition. Then, for each of those episodes, there are two levels of coding: one that describes what the pair is doing during that episode, and one that identifies the nature of the distributed cognition. Figure 4.11 shows the details of the coding scheme utilized in Phase 2.

The first level of coding categorizes the type of activity being done or discussed:

RI: Reading of instructions
TP: Task planning
TC: Working on test cases
PS: Working on pseudocode
IR: Interpersonal relationship
OT: Other

The second level of coding describes the nature of the distributed cognition:

PM: An individual is expressing his/her own understanding of what is to be done or how to do it, or actually does the work (perspective making)
  D: An individual is expressing domination or control over the other person regarding the rightness of his/her perspective or work
PT: An individual is reacting to the other person's expression of understanding or work (perspective taking). This reaction may take the form of:
  A: Agreement
  E: Encouragement or appreciation of the perspective
  EL: Elaboration of the idea; may be agreement or mild disagreement
  MD: Expression of appreciation of the pair's mutual dependency
  D: Disagreement
  I: Ignore (i.e., no reaction from the other person)

Figure 4.11 Coding Scheme

In order to measure the communication processes associated with distributed cognition, the audiotapes for Task III were transcribed and analyzed using the pre-established coding scheme (Figure 4.11). Task III was selected at random by the researchers for analysis. The transcripts were then scored by two coders. The coders included an MIS graduate student and an independent consultant holding an undergraduate degree in MIS. The consultant was paid for his efforts. Both coders had an understanding of the tasks and had some experience in programming.
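To illustrate how transcripts coded under this scheme can be represented and tallied for analysis, the sketch below tags a few invented utterances with activity and distributed-cognition codes and counts positive versus negative communications. The utterances and the positive/negative groupings shown are illustrative assumptions, with domination and ignoring treated as negative per the scheme above.

    # A minimal sketch, assuming invented utterances; codes follow Figure 4.11.
    from collections import Counter

    # (speaker, activity code, cognition code) for hypothetical utterances.
    coded_utterances = [
        ("S1", "TC", "PM"),     # S1 proposes a test case (perspective making)
        ("S2", "TC", "PT-A"),   # S2 agrees (perspective taking: agreement)
        ("S1", "PS", "PM"),     # S1 drafts a line of pseudocode
        ("S2", "PS", "PT-EL"),  # S2 elaborates on the idea
        ("S1", "PS", "PM-D"),   # S1 asserts control over the work (negative)
        ("S2", "PS", "PT-I"),   # S1's point draws no reaction from S2 (negative)
    ]

    POSITIVE = {"PT-A", "PT-E", "PT-EL", "PT-MD"}   # assumed grouping
    NEGATIVE = {"PM-D", "PT-I"}                     # domination, ignoring

    tally = Counter(code for _, _, code in coded_utterances)
    positive = sum(n for code, n in tally.items() if code in POSITIVE)
    negative = sum(n for code, n in tally.items() if code in NEGATIVE)
    print(f"positive = {positive}, negative = {negative}")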
The coders used the pre-established coding scheme prepared by the researchers to analyze the transcriptions of the pairs while they worked on experimental Task III. An example of the completed coding scheme for a portion of one task for one pair is shown in the Appendix.

Prior to beginning the coding, the raters received two days of training. Training focused on the levels of analysis and content as outlined in the coding scheme. Additionally, each coder independently read and coded one transcription during training. The two coders discussed the differences and practiced resolving them. Upon completion of the training, the raters completed the coding of the transcripts. The researcher calculated the percentage of inter-rater reliability between the coders. The level of agreement was acceptable, with an overall percentage of agreement of approximately 76 percent. The percentage of inter-rater agreement by team on Task III is shown in Table 4.8.

Team      Task III Percentage of Agreement
D         100%
F         73%
H         80%
I         78%
K         81%
L         71%
Overall   76%

Table 4.8 Percentage of Inter-coder Agreement: Distributed Cognition

Data Analysis

Non-parametric statistics are used because of the relatively small number of subjects typical in this kind of process study, and because we have little reason to assume that the distributed cognition variables are normally distributed.

The first phase of the data analysis is an evaluation of the posited direct effects of distributed cognition on performance. Table 4.9 reports the ranked order of the pairs on these variables. (For one of the pairs there is no process data available because of technical recording problems.) The performance ranks shown here are an aggregate task performance score for correct test cases and correct code on Task III for each pair. Table 4.10 shows a breakdown of the task performance results by pair for Task III.
Pair  Performance  H1 (Test Case Utterances)  H2 (Positive/Negative Utterances)  Cognitive Ability  IT Experience
D     1            (no data)                  (no data)                          1                  3
F     2            3                          5                                  Tie 3              1
H     6            Tie 4                      Tie 1                              5                  5
I     5            2                          Tie 1                              Tie 3              2
K     3            Tie 4                      Tie 1                              2                  4
L     4            1                          Tie 1                              4                  6

Spearman's r (with performance)                                                  .508               .971**
Sig. (2-tailed)                                                                  .304 (N = 6)       .001 (N = 6)

** Correlation is significant at the 0.01 level (2-tailed).

Column notes: Performance = aggregate pair performance score (correct test cases and correct code) for Task III, ranked, higher rank is better; H1 = rank on the percentage of on-task utterances dedicated to test cases, higher percentage is better; H2 = rank on the percentage of positive or negative on-task utterances dedicated to code, higher percentage is better; Cognitive Ability and IT Experience = pair averages, ranked, higher rank is better. For one pair no process data are available because of technical recording problems.

Table 4.9 Performance Ranks by Pair

Pair  Correct Test Cases  Correct Code  Aggregate Score
D     2.5                 2.5           5
F     4                   3             7
H     10                  9             19
I     8                   2             10
K     0                   8             8
L     0                   9             9

(Task III; higher scores are better.)

Table 4.10 Performance Outcomes by Pair: Task III

Spearman's rank correlation is a nonparametric measure that is based on the differences in rank between subjects and ranges from -1 to +1. Neither of the relationships posed in the research hypotheses is significant at α = .05; there is little evidence that distributed cognition, or cognitive ability, has a positive impact on code performance for Task III. It is interesting to note, however, that years of IT experience does appear to have a positive relationship with performance, significant at the .01 level (Table 4.9).

Table 4.11 shows the correlation between test case performance and code performance for Task III. The results show that the correlation is negative and minimal.
Spearman's rho               Code, Task III    Test Cases, Task III
Code, Task III
  Correlation Coefficient    1.000             -.224
  Sig. (2-tailed)            .                 .670
  N                          6                 6
Test Cases, Task III
  Correlation Coefficient    -.224             1.000
  Sig. (2-tailed)            .670              .
  N                          6                 6

Table 4.11 Correlations between Test Case Performance and Code Performance

The second step in the data analysis is to analyze the results for patterns of relationships between the study variables, in particular for those pairs whose performance ranked either very low or very high (Pairs F and H). Some interesting patterns emerge, as illustrated in Table 4.12.
H2 Process Differences (percentage of positive or negative on-task utterances), by sequence:

Pair F (total number of coded utterances = 139)
  Sequence 1 (16 utterances; positive): Discussions are primarily about getting the data. Subject 1 (the driver) seems to know more; Subject 2 is interactive but has limited input into the process, characterized by short statements of agreement.
  Sequence 2 (76 utterances; positive): Discussions about processing, calculating and computing averages. Even sharing, although with more direction from Subject 1 and short acknowledgements of agreement from Subject 2.
  Sequence 3 (47 utterances; negative): Discussions about the output of the report. Subject 1 does essentially all of the work, with a few minor exceptions; Subject 2 basically agrees, again with very short acknowledgements of agreement.

Pair H (total number of coded utterances = 131)
  Sequence 1 (25 utterances; positive): Deals with getting the records from the file. Subject 2 has the knowledge, but Subject 1 questions, suggests corrections and learns.
  Sequence 2 (35 utterances; positive): Deals with processing and calculating data; developed iteration. Early on, Subject 2 worries that he/she is doing too much and shows concern for his/her partner.
  Sequence 3 (37 utterances; positive): Deals with code for printing out report totals. Again, Subject 2 knows more, but Subject 1 is active and learning.
  Sequence 4 (34 utterances; positive): Subject 2 goes through their work and reviews it for Subject 1, explaining what was done. Subject 1 is active again and asks questions; Subject 2 explains alternative coding approaches.

Performance outcomes, Task III code (number of correct code sequences; more correct equates to higher performance):
  Pair F: 3 (very low score)
  Pair H: 9 (highest score)

Table 4.12 Data Analysis Results by Selected Pairs

All subjects had cognitive abilities that approximated or were higher than the population parameter for programmers (Wonderlic 1999). Pair H, whose scores were much higher than the programmer average, had the highest performance outcome for Task III code and on all other experimental tasks. The cognitive ability of Pair F approximated the population parameter of programmers.
This suggests that cognitive ability is an important factor in accounting for performance, but alone does not account for performance.

Relative to IT experience, Pair F had an average of 1 year of IT experience, while Pair H had an average of 7 years. Pairs K and L also had relatively high levels of performance, suggesting that IT experience is an important factor in accounting for performance.

We now focus on the study variables of distributed cognition during the coding of Task III. Pair H had very high levels of positive interaction between the developers, while Pair F had limited interaction, which was both positive and negative. Pair H exhibits many instances of perspective making and taking between the programmers. Pair F was interactive, but one of the subjects was clearly in control of the work and leading all activity. The other subject tended to be interactive, but in a passive manner. His interaction during programming was composed primarily of utterances in which he merely acknowledged that his partner had made statements. He did not verbalize that he understood what was being said, nor did he offer any substantive input on how the work should be done. The work pattern also reflects negative interaction between the two developers. Table 4.13 summarizes the research questions and related findings.

H1: Developer dyads that create more correct test cases will create more accurate programs than dyads that create few or no test cases.
Finding: Not supported for Task III. There does not appear to be a relationship between the creation of test cases and correct code.

H2: Developer dyads who communicate while working on the task with more sequences of positive perspective making and perspective taking will create more accurate programs than dyads who either have more negative communications or do not communicate (one person does the work while the other watches).
Finding: Not supported for Task III. However, additional analysis of selected pairs with large contrasts in performance reveals the hypothesized pattern.

Table 4.13 Phase 2 Summary of Research Questions and Findings

Conclusions

The goal of Phase 2 of Study 1 is to test distributed cognition theory in the program development context and to explore whether or not how developers work together during pair programming explains the improved performance outcomes reported for this agile method. In this test of the theory of distributed cognition, which is a relatively new way to understand cognition in collaborative work, we contribute to our understanding of human cognition by illustrating how this variable impacts performance outcomes. The process study suggests that there are linkages to performance for pairs of developers who are highly interactive in their sharing of information and knowledge.
The results of this study suggest that interactive pairs who are actively engaged in perspective making and perspective taking may have higher levels of performance. The study also suggests that while cognitive ability and years of IT experience appear to be important factors, they alone do not explain performance. Additionally, there is no evidence that the preparation of test cases will produce better code. From a managerial standpoint, the findings suggest that if we gain a more detailed understanding of distributed cognition, we may be able to develop specific training strategies that assist individuals in enhancing performance outcomes.

Limitations

While the experimental nature of the study offers a more controlled test of the theory, it also creates some limitations. The results are not generalizable to a known population, and the relatively short duration of work on the programming tasks (compared to a normal work setting) may result in weaker effects (the novelty of working together may make positive distributed cognition more difficult to achieve, but may also mask individuals' negative communications, such as domination).
Chapter Five
Study 2

Chapter Five describes a laboratory experiment that explores how the developmental setting impacts the collaborative software development processes and related outcomes. An overview of the study is presented, followed by a discussion of the research model, research hypotheses, data collection, data analysis and study results.

Overview

The primary focus of Study 2 is to investigate how differences in the developmental setting impact performance outcomes for collaborative programming (pair programming). Specifically, we explore the impact of face-to-face and virtual developmental settings on performance outcomes. In addition, we explore how differences in the developmental setting impact the processes used during development. And finally, we continue to explore the impact of individual developer characteristics on performance outcomes.

High Level Research Model

The high-level research model used in this dissertation is shown in Figure 5.1.

Figure 5.1 High Level Research Model. (The model relates Individual Characteristics: cognitive ability, conflict handling style; Processes During Development: faithfulness to method, task conflict, distributed cognition; Developmental Setting: face-to-face, virtual; and Collaborative Method: pair programming, variations of pair programming; to Performance Outcomes: pair task performance, individual task performance, individual satisfaction with method.)

The underlying premise of Study 2 is that differences in the developmental setting will impact performance outcomes. Increasingly, systems development is taking place in a virtual setting. The primary focus of Study 2 is to investigate the impact of differences in
the developmental setting: face-to-face and virtual environments. A laboratory experiment is conducted in which three performance outcomes are studied. Pair task performance is measured in two ways: the correctness of the test cases produced by the programming dyad and the correctness of the code produced by the programming dyad. Individual satisfaction with the collaborative method (pair programming) is also measured.

Additionally, we explore a number of other factors that are believed to impact successful programming outcomes in collaborative software development. These include the processes during development (faithfulness to the method and task conflict) and individual developer characteristics (cognitive ability, conflict handling style and years of IT experience). The reasoning behind the selection of these constructs and variables, as well as details on these measures, is provided in Chapter Three.

Study 2 Research Models

Each study contained in this dissertation focuses on a different part of the high-level research model shown in Figure 5.1. Two research models are utilized to study the variables and constructs in Study 2. The primary research model utilized in Study 2 (Figure 5.2) focuses on the main effects of the manipulation of the developmental setting on performance outcomes. The developmental settings are face-to-face and virtual.

Figure 5.2 Study 2 Research Model: Main Effects. (The manipulated developmental setting, face-to-face versus virtual, is posited to affect performance outcomes: pair task performance and individual satisfaction with the method; H1, H2.)

In Study 2 we also investigate the impact of the processes used during development (faithfulness to the method and task conflict) and individual developer differences (cognitive ability, conflict handling style and years of IT experience) when there are differences in the developmental setting. The research model used to explore the mediating effect of processes during development and the moderating effect of individual differences is shown in Figure 5.3.
Figure 5.3 Study 2 Research Model: Mediating and Moderating Effects. (The manipulated developmental setting affects performance outcomes, pair task performance and individual satisfaction with the method, through the mediating processes during development, faithfulness to method and task conflict (H3, H4), and the moderating individual characteristics, cognitive ability, conflict handling style and years of IT experience (H5, H6, H7).)

Research Question and Hypotheses

The primary research question addressed in Study 2 is as follows:

Within the context of collaborative programming, does the developmental setting impact related performance outcomes and the processes used during collaborative programming?

The primary focus of this research question relates to the issue of the developmental setting in which collaborative programming takes place and how to predict dyadic performance and individual satisfaction with the method. As previously mentioned, in Study 2 the developmental setting is manipulated in two conditions. The research hypotheses provide a method to test the degree to which the developmental setting may facilitate pair task performance (correct test cases and correct code) and individual satisfaction with the method (pair programming).

Based on the definitions used in the research literature, we define a virtual setting as one in which pairs (dyads) are brought together for a limited period of time to work on a programming task, separated by space. This definition is adapted from the normative definition of virtual teams (DeSanctis and Poole 1997, Jarvenpaa and Leidner 1998, Lipnack and Stamps 1998). Virtual development is becoming more prevalent in the management information systems domain and is viewed as an important issue by researchers and practitioners alike. Little research has explored the issue of developmental setting as it relates to collaborative programming (pair programming).

As presented in the literature review (Chapter 2), the research on virtual work suggests that individuals and teams working in a virtual setting are subject to greater impediments related to coordination and communication (Daft 1988).
As a consequence, virtual workers do not always attain the same levels of performance as individuals working in a face-to-face environment. Plowman (1995) and Sillince (1996) show that communication effectiveness drops as modalities and timing are removed. Additionally, lower levels of satisfaction have generally been reported when individuals work virtually. Thus, we hypothesize:

H1: Developers working in a face-to-face developmental setting will have higher levels of pair task performance than developers working in a virtual developmental setting.

H1a: Developers working in a face-to-face developmental setting will have higher levels of correct test cases than developers working in a virtual developmental setting.

H1b: Developers working in a face-to-face developmental setting will have higher levels of correct code than developers working in a virtual developmental setting.

H2: Developers working in a face-to-face developmental setting will have higher levels of individual satisfaction with the method than developers working in a virtual developmental setting.

Collaborative programming (pair programming) requires developers to follow a prescribed set of structures or processes while performing the programming tasks. In pair programming, test cases are prepared before writing code, and each developer takes a distinctive role in the interactive process. Adaptive Structuration Theory posits that faithfulness in the appropriation of the work method is an important factor in performance. Faithfulness refers to the extent to which a group (dyad) uses the process or system in keeping with the spirit in which it was meant to be used (Poole and DeSanctis 1989, 1990; Gopal et al. 1992-93). Little research has explored faithfulness to the method in the context of collaborative programming.

Virtual developers are more likely to face greater obstacles in coordination and communication during the development process, given the diminished richness of the communication channel and their separation in space while working together, as compared to face-to-face developers. Thus, we hypothesize:

H3: Developers working in a face-to-face developmental setting will have higher levels of perceived faithfulness to the method than developers working in a virtual developmental setting, and higher levels of perceived faithfulness will be related to higher levels of pair task performance.

A number of researchers have examined the impact of conflict on the information systems development (ISD) process (Cohen et al. 2002; Newman and Robey 1992). It has been well established that low to moderate levels of task conflict can be constructive and can positively impact outcomes; however, interpersonal conflict causes negative, less desirable outcomes (Milliken 1996; Jehn 1997). Given that two individuals continually work together during collaborative programming, the opportunity for conflict to interfere with desired performance outcomes is heightened.
Since virtual developers are more likely to face greater obstacles in coordination and communication, we hypothesize:

H4: Developers working in a virtual developmental setting will have higher levels of perceived conflict during collaborative programming than developers working in a face-to-face developmental setting, and higher levels of perceived conflict will be related to lower levels of task performance.

H4a: Developers working in a virtual developmental setting will have higher levels of perceived conflict during collaborative programming than developers working in a face-to-face developmental setting, and higher levels of perceived conflict will be related to lower levels of test case performance.

H4b: Developers working in a virtual developmental setting will have higher levels of perceived conflict during collaborative programming than developers working in a face-to-face developmental setting, and higher levels of perceived conflict will be related to lower levels of code performance.

Each programming dyad is composed of developers with distinctive individual characteristics. Prior research has shown that job knowledge is the most immediate link between cognitive ability and performance. Individuals with higher cognitive ability tend to develop greater understanding of job duties as compared to their counterparts with lower cognitive ability (Schmidt et al. 1986). Prior research on conflict (Rahim 1988b) indicates that individuals who possess a highly integrative conflict management style are more likely to produce positive individual and organizational outcomes. Like cognitive ability and conflict handling style, empirical evidence also suggests that experience has a strong positive linkage to performance (Jex 2002) and is of particular interest for intellective tasks, such as programming.

A review of the psychology literature suggests that in groups (dyads), individual differences may have both additive (group average) and compensatory (higher ability group members help lower ability group members) effects. For Study 2, we view these individual differences as compensatory. Thus, we hypothesize:

H5: When developer dyad cognitive ability is determined by the higher cognitive ability individual in the dyad, developer cognitive ability and developmental setting will interact to impact pair task performance.

H5a: Developer cognitive ability and developmental setting will interact to impact test case performance.

H5b: Developer cognitive ability and developmental setting will interact to impact code performance.

H6: When developer dyad conflict management style is determined by the individual in the dyad with the more integrative conflict management style, developer integrative conflict management style and developmental setting will interact to impact pair task performance.
H6a: Developer integrative conflict management style and developmental setting will interact to impact test case performance.

H6b: Developer integrative conflict management style and developmental setting will interact to impact code performance.

H7: When developer dyad IT experience is determined by the higher IT experience individual in the dyad, developer IT experience and developmental setting will interact to impact pair task performance.

H7a: Developer IT experience and developmental setting will interact to impact test case performance.

H7b: Developer IT experience and developmental setting will interact to impact code performance.

Research Design

In order to examine and test the research hypotheses, we conducted a quasi-experiment at a university located in the southern United States during the fall term of 2002. One hundred and forty (140) subjects, or seventy (70) pairs, were recruited to participate in the research. As an incentive to participate in the study, subjects who completed the experiment received 10% towards their final course grade. Subjects who chose not to participate were allowed to complete an alternative assignment.

Participants were full-time and part-time undergraduate students majoring in management information systems (MIS), enrolled in one of the following courses: Management of Information Resources (the capstone undergraduate class) and Global Information Systems. Prior to beginning the study, each class of participants was assigned to one of two treatment groups: Group I (face-to-face) and Group II (virtual). Next, subjects were randomly assigned to a designated programming pair (dyad) in which they would remain for the duration of the study. Subjects were also randomly assigned by the researchers to their roles (i.e., driver and navigator within each pair), computer labs and workstations. All subjects were assigned three experimental tasks: Task I, Task II and Task III. Task I was designed to be a warm-up task. Two other tasks were included in the experiment in order to vary the difficulty of the tasks and allow for jelling.
Data Collection

Prior to beginning the study, the scripts, questionnaires and experimental tasks were pre-tested. As outlined in Chapter 3 (Measures), the experimental tasks had been used in prior research, and many of the items in the questionnaires were adapted from existing instruments. After a review of the pretest results, changes were incorporated into the experimental materials as appropriate. Copies of the final scripts, questionnaires and experimental tasks used in Study 2 are found in the Appendices.

Subjects participating in the research were studied over a two-month time frame, during the assigned one hour and 15 minute class period for the course in which they were enrolled. Multiple sessions were required in order to complete data collection. Figure 5.4 outlines the experimental design, with an explanation of the notations.

Treatment Group          Observations
Group I  Face-to-Face    O1 Xc O2 O3 O4 O5 O6
Group II Virtual         O1 Xc Xv O2 O3 O4 O5 O6

Explanation of Notations
O1  Questionnaire I (initial questionnaire): demographics (age, gender, languages known); covariates (cognitive ability, conflict handling style, years of IT experience)
Xc  Training in the collaborative method (pair programming)
Xv  Training in the virtual method (collaborative software and communication devices)
O2  Programming Task I
O3  Programming Task II
O4  Questionnaire II: processes (faithfulness to method, perceived conflict); individual responses (satisfaction with method)
O5  Programming Task III
O6  Questionnaire III (final questionnaire): processes (faithfulness to method, perceived conflict); individual responses (satisfaction with method)

Figure 5.4 Experimental Design

On the day of the study, participants reported to the pre-assigned computer lab(s) as instructed by the researcher. The first session began with an introduction to the study. Before being given their pair assignments, participants were asked to read and sign an Informed Consent Form. All study procedures and materials had been reviewed and approved by the university's Institutional Research Review Board. Next, participants were given their pre-assigned subject number and team number. Subjects were instructed to use this identification throughout the study, in order to ensure that their confidentiality would be preserved.
Demographic information about the subjects was then collected. Subjects also completed a series of instruments, which measured their general cognitive ability and self-assessed conflict handling style. Training in collaborative programming (pair programming) followed. Additional training on how to use the virtual collaboration tools (i.e., the Groove software program and headsets) was given to participants who programmed in the virtual setting.

Upon completion of these tasks, subjects were given their role (driver or navigator) and workstation (computer lab and computer) assignments. Participants were instructed to remain in their respective roles for the first two experimental tasks. The roles were switched for the last experimental task.

Next, pairs of subjects were brought to the computer labs, to their pre-assigned workstations. Subjects who participated in the face-to-face development setting were assigned to one computer lab for all sessions. Pairs of subjects participating in the virtual setting were assigned to two adjacent computer labs for all sessions. Subjects were also pre-assigned to a specific computer for all sessions. To facilitate programming in a virtual setting, each pair's computers had been configured to be on the same communications channel. Collaborative software (Groove) was utilized during the session, enabling subjects to view the test cases and code being written by the driver for each experimental programming task. Dyads communicated with each other verbally by using headsets equipped with microphones.

In each session in which an experimental programming task was assigned, subjects were given the experimental task in both hard copy and electronic form. They were also instructed to save all final work on a diskette. Using three tasks allowed for variation in the difficulty of the tasks. Participants were instructed to follow the test-first, code-later sequence in completing all programming exercises and to use pseudocode for each programming module. Pseudocode was used in each task in order to deal with unknown differences within pairs on specific programming languages. During the experimental tasks, only the drivers in each pair had access to the keyboard. It should be noted that the use of the Groove software created certain limitations, in that the navigators were not able to point directly to the code but had to direct their partners verbally.

Participants were given twenty minutes to complete Task I, which was designed to be a warm-up exercise. Forty-five minutes was allotted for the completion of Task II. Forty minutes was allotted to complete Task III, since additional time was needed for debriefing following the final programming exercise. Following the completion of Tasks II and III, subjects were instructed to save all work and complete questionnaires that measured their perceived faithfulness to the method, perceived task conflict and individual satisfaction. Subjects were debriefed at the end of the final session.
Measures

Measurement is discussed in detail in Chapter Three. Programming outcomes measured task performance for each pair of developers, as well as individual developer satisfaction with the method (pair programming). Pair performance on task was measured in two ways: the correct test cases produced by the dyad on each programming task and the correct code produced by the dyad on each programming task. A scoring template was developed by the researchers to rate the two programming outcomes. A score of 1 to 10 was possible for each performance measure. The greater the number of correct test cases and the more complete and accurate the code, the higher the level of performance of the pair. Two independent raters were trained and used to evaluate task performance. There was a high level of inter-rater reliability on all tasks. Inter-rater reliability varied by pair (95% to 100%) and is based on the percentage of agreement for each item rated.

In Study 2, the researchers adapted Venkatesh and Vitalari's (1992) five-item scale to measure individual satisfaction with the method. A score of 1 to 7 was possible on each Likert item (1 = not satisfied; 7 = very satisfied).

The processes measured during development included faithfulness to the collaborative programming method (type and amount of interaction between the developers) and the amount and type of conflict during development. The scales used to measure these variables were developed by the researchers. An eight-item questionnaire was utilized to measure each subject's perceived faithfulness to the method. The questionnaire asked participants to evaluate faithfulness to the method in a number of ways (overall faithfulness to pair programming, amount of influence by each developer in the pair and work pattern). For perceived faithfulness, a score of 1 to 5 was possible on the Likert scale (1 = not faithful; 5 = very faithful).

A three-item scale was utilized to measure each subject's perceived conflict during development. The conflict scale measured the type of conflict (task or interpersonal), the number of episodes of conflict during the programming session and whether conflict episodes were resolved or not resolved.

Individual cognitive ability was measured utilizing the Wonderlic Personnel Test (WPT). The WPT is comprised of 50 questions, administered in a timed 12-minute period. The Rahim Organizational Conflict Inventory was used to measure self-assessed conflict handling style. Data was also collected regarding each participant's years of IT experience.
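As a simple illustration of how multi-item Likert measures of this kind are commonly scored, the sketch below averages one subject's item responses into scale scores. The responses are invented, and the score-as-item-mean rule is an assumption for illustration; Chapter Three details the actual instruments and scoring.

    # A minimal sketch, assuming scale scores are item means; data invented.
    def scale_score(item_responses):
        """Average a subject's item responses into a single scale score."""
        return sum(item_responses) / len(item_responses)

    satisfaction_items = [6, 7, 5, 6, 6]           # five items, 1-7 Likert
    faithfulness_items = [4, 5, 4, 3, 4, 5, 4, 4]  # eight items, 1-5 Likert

    print(f"satisfaction = {scale_score(satisfaction_items):.2f} (1-7 scale)")
    print(f"faithfulness = {scale_score(faithfulness_items):.2f} (1-5 scale)")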
Subject Demographics

Of the original 170 participants, eighty-six (86) subjects, or forty-three (43) pairs, completed all three experimental tasks with the same partner. The experimental sessions were held during normal class time. The mortality rate (50%) reflects the fact that, because of absences, subjects could not always be paired with the same partner for all three experimental tasks. There was no indication from the subjects that they dropped out of Study 2 because they did not choose to participate in the research.

Of the final Study 2 participants, forty-six subjects (23 pairs) were included in Treatment Group I, while forty subjects (20 pairs) were included in Treatment Group II. These subjects completed all aspects of the experiment together and produced test cases and/or code for each experimental programming module. Since the treatment groups were not equal, the experiment is considered an unbalanced experimental design. Figure 5.5 presents the breakdown of pairs and tasks by experimental group.

Group                   Number of Pairs   Number of Tasks Completed
Total                   43                129
Group I  Face-to-Face   23                69
Group II Virtual        20                60

Figure 5.5 Number of Pairs and Tasks in Each Experimental Group

Subject demographics are presented in Figure 5.6 and Figure 5.7. The average age of the subjects participating in the study was 28 years. Thirty-six percent of all participants were female; the remaining sixty-four percent were male.

Variable   N    Mean   Std Dev   Min   Max
Age        86   27.7   7.4       21    58

Figure 5.6 Subject Demographics

Variable     Percent
Gender
  Female     36
  Male       64

Figure 5.7 Frequency Tables for Selected Demographic Variables

The results for individual differences (cognitive ability, conflict handling style and years of IT experience) are found in Figure 5.8. Variation is noted across subjects for all items. While the mean cognitive ability score for the population of all programmers is 29 (Wonderlic 1999), Study 2 participants' mean score was 28, with scores ranging from 10 to 44. Additionally, self-assessed conflict handling style varied between subjects. It should be noted that all subjects ranked themselves highest on the integrating style for handling conflict. This tendency to evaluate one's self as integrative is reflected in the norms for this measure. However, it is not necessarily true that others would agree with these self-assessments.


However, it is not necessarily true that others would agree with these self-assessments.

    Variable                                N     Mean    Std Dev    Min    Max
    Cognitive Ability                       86    27.8    6.7        10     44
    Integrating conflict handling style     86    4.1     .7         1      5
    Obliging conflict handling style        86    3.6     .6         2      4.8
    Dominating conflict handling style      86    3.2     .8         1      4.8
    Avoiding conflict handling style        86    3.3     .8         1.5    4.8
    Compromising conflict handling style    86    2.8     .5         1.2    4
    Years of IT experience                  86    1.4     1.4        0      >7

Figure 5.8 Descriptive Statistics (Individual Developer Characteristics)

As shown in Figure 5.9, the subjects in the group exhibited a wide variation in IT experience. Approximately 43% of the subjects had more than one year of IT experience, while 24% of the subjects reported five or more years of IT experience. Participants also had knowledge of a wide variety of programming languages. Visual Basic and C/C++ were cited as the languages with which they had the most knowledge. Programming languages studied or used by study participants include C, C++, Java, Pascal, Visual Basic, FORTRAN and COBOL.

    Years of IT experience                         Percent
    None               No experience               37.2
    Less than one      Low experience              19.8
    One to four        Moderate experience         18.6
    Five to seven      High experience             12.8
    More than seven    Very high experience        11.0

Figure 5.9 Frequency Tables for Selected Variables

Twenty-five percent of the participants reported no perceived conflict during development for Task II, while 35% of the participants reported no perceived conflict during development for Task III. Approximately 9% of the participants reported high to extremely high levels of perceived conflict during development. The conflict that was reported was perceived as task, rather than interpersonal, conflict, and was reported as resolved in most instances. Given the limited variation in task conflict, this measure is excluded from Study 2. It is believed that the limited amount of time spent working together on developmental tasks may have contributed to the limited amount of perceived task conflict reported by the subjects. These findings are summarized in Table 5.1, Table 5.2 and Table 5.3.
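Frequency tabulations of this kind are mechanical to produce; the sketch below (invented responses, hypothetical names; not the study's actual data) illustrates one way to build them with pandas:

    # Tabulating episodes-of-conflict frequencies in the style of the
    # tables that follow. All data values here are invented.
    import pandas as pd

    labels = ["No Conflict", "Low Conflict", "Moderate Conflict",
              "High Conflict", "Very High Conflict", "Extremely High Conflict"]
    episodes = pd.Series([0, 1, 1, 2, 0, 1, 3, 1, 0, 5])  # hypothetical codes

    counts = episodes.value_counts().sort_index()
    counts.index = [labels[i] for i in counts.index]
    freq = pd.DataFrame({"Frequency": counts})
    freq["Percent"] = 100 * freq["Frequency"] / freq["Frequency"].sum()
    freq["Cumulative Percent"] = freq["Percent"].cumsum()
    print(freq)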


                                     N     Minimum    Maximum    Mean    Std. Deviation
    Task II Episodes of Conflict     84    0          5          1.12    1.124
    Task III Episodes of Conflict    84    0          5          1.00    1.087
    Valid N (listwise)               84

Table 5.1 Episodes of Task Conflict by Task

                               Frequency    Percent    Valid Percent    Cumulative Percent
    No Conflict                22           25.6       26.2             26.2
    Low Conflict               45           52.3       53.6             79.8
    Moderate Conflict          10           11.6       11.9             91.7
    High Conflict              2            2.3        2.4              94.0
    Very High Conflict         2            2.3        2.4              96.4
    Extremely High Conflict    3            3.5        3.6              100.0
    Total                      84           97.7       100.0

Table 5.2 Task II Episodes of Conflict

                               Frequency    Percent    Valid Percent    Cumulative Percent
    No Conflict                30           34.9       35.7             35.7
    Low Conflict               37           43.0       44.0             79.8
    Moderate Conflict          9            10.5       10.7             90.5
    High Conflict              4            4.7        4.8              95.2
    Very High Conflict         3            3.5        3.6              98.8
    Extremely High Conflict    1            1.2        1.2              100.0
    Total                      84           97.7       100.0

Table 5.3 Task III Episodes of Conflict

Data Analysis

The preliminary focus of the data analysis is the evaluation of the main effects of developmental setting on programming outcomes (correct code) and individual satisfaction with the method (H1 and H2).


The second step in the data analysis is to analyze the potential impact of covariates (H3 through H7). Since the first experimental module (Task I) was designed to be a warm-up exercise, task performance outcomes for Task I are not included in the data analysis. Additionally, individual responses for satisfaction with the method were not collected for Task I.

The design of the study is classified as a quasi-experiment. A quasi-experiment is an investigation that has all the elements of an experiment, except that subjects are not randomly assigned to groups (Pedhazur and Schmelkin 1991). In this study, subjects were assigned to the developmental setting by class and section.

Three dependent variables are included in the study: pair task performance on test cases, pair task performance on code and individual satisfaction with the method. Prior research has shown no correlation of individual satisfaction to performance (Vroom et al. 1985).

The existence of cognitive ability, conflict handling style, years of IT experience and faithfulness to the method as mediating and moderating variables (covariates) makes ANCOVA (Analysis of Covariance) the appropriate method for statistical analysis. ANCOVA is used to test the main effects and interaction effects of a variable on a continuous dependent variable, controlling for the effects of the selected other variables which co-vary with the dependent variable.

In Study 2, we view the impact of pairing as compensatory. Therefore, covariates to be analyzed include the impact of high developer faithfulness in the pair to the method during the collaborative programming process (pair programming) and high individual developer characteristics in the pair (high cognitive ability, high integrative conflict handling style and high years of IT experience within each dyad). As previously mentioned, perceived conflict was dropped from Study 2, since there was little variation in the amount of conflict that was reported by the participants. The SPSS system was used for all statistical analysis.

To determine the reliability of the scales, Cronbach's alpha was computed for each measure used in the questionnaires. A Cronbach's alpha of .70 or greater is considered to be an acceptable measure of reliability. Based on this criterion, reliability scores for the following measures are acceptable or close to acceptable: self-assessed conflict handling style (overall measure and four of the five dimensions), perceived conflict during development, perceived faithfulness to the method and individual satisfaction with the method (pair programming). The reliability score reported for perceived faithfulness to the method reflects the removal of five items from the analysis. The reliability score reported for satisfaction reflects the removal of three items from the analysis.
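Cronbach's alpha is simple to compute directly from the item responses. The sketch below (invented responses, hypothetical names) shows the raw-score form of the coefficient; the standardized variant reported in Figure 5.10 instead works from the average inter-item correlation:

    # Raw-score Cronbach's alpha for one scale, where `items` is an
    # n_subjects x k_items array of Likert responses. Data are invented.
    import numpy as np

    def cronbach_alpha(items):
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_variance = items.sum(axis=1).var(ddof=1)     # variance of scale totals
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # Hypothetical 5-item satisfaction responses for four subjects:
    demo = [[7, 6, 7, 6, 7],
            [4, 4, 5, 4, 4],
            [6, 5, 6, 6, 5],
            [2, 3, 2, 3, 3]]
    print(round(cronbach_alpha(demo), 2))  # values >= .70 are acceptable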


A summary of the reliability scores for the Study 2 measures is presented in Figure 5.10.

    Measure                                    Initial Survey    Task II Survey    Task III Survey
    Overall Conflict Handling Style            .85
    Dimensions of Conflict:
      Integrating conflict handling style      .75
      Obliging conflict handling style         .63
      Dominating conflict handling style       .67
      Avoiding conflict handling style         .83
      Compromising conflict handling style     .47
    Faithfulness to the method                                   .70               .70
    Conflict during development                                  .79               .76
    Individual Satisfaction with the method                      .89               .89

Figure 5.10 Standardized Cronbach's Alpha for Measures

The ROCI-II instrument has demonstrated reliability (test-retest reliabilities range from .60 to .83) and validity, and the test is widely used in academic research on conflict. The Wonderlic Personnel Test has demonstrated reliability (test-retest reliabilities range from .82 to .94) and validity, and the test is widely used by business and governmental organizations to evaluate job applicants for employment and occupational training programs.

In order to assess construct validity for faithfulness to the method and satisfaction with the method, factor analysis was performed. Factor loadings are the correlations of each variable with the factor. For the variable faithfulness to the method, factor loadings for these items ranged from .48 to .98. While the score of .48 reflects a low loading on this factor, it clearly reflects a different loading from the variable satisfaction. For the variable satisfaction with the method, factor loadings for all items ranged from .78 to .85. These values indicate that all items reflect a common theme (convergent validity) of individual satisfaction with the developmental method when applied in the real world.

A factor analysis was also conducted to determine if two distinct constructs exist (divergent validity) for faithfulness to the method and satisfaction with the method. Two factors were extracted. The results of the factor analysis are shown in Table 5.4.
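For illustration, a two-factor, varimax-rotated extraction in the spirit of Table 5.4 can be sketched with scikit-learn (version 0.24 or later); this is not the SPSS procedure used in the study, and the data and loadings below are synthetic:

    # Two-factor extraction with varimax rotation on synthetic responses:
    # three "satisfaction" items load on one latent factor and two
    # "faithfulness" items on another, mimicking divergent validity.
    import numpy as np
    import pandas as pd
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    f1, f2 = rng.normal(size=(2, 200))                  # latent factors
    responses = pd.DataFrame({
        "Sat32":  f1 + 0.3 * rng.normal(size=200),
        "Sat33":  f1 + 0.3 * rng.normal(size=200),
        "Sat34":  f1 + 0.3 * rng.normal(size=200),
        "Faith4": f2 + 0.3 * rng.normal(size=200),
        "Faith7": f2 + 0.3 * rng.normal(size=200),
    })

    z = (responses - responses.mean()) / responses.std(ddof=0)  # standardize
    fa = FactorAnalysis(n_components=2, rotation="varimax").fit(z)
    loadings = pd.DataFrame(fa.components_.T, index=responses.columns,
                            columns=["Factor 1", "Factor 2"])
    print(loadings.round(2))  # each block of items should load on one factor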


    Construct                   Item      Item Wording                                         Factor 1    Factor 2
    Faithfulness to Method      Faith4    During today's session we exerted equal influence
                                          in completing the task.                              .368        .475
                                Faith7    We read the task first, then planned and worked
                                          together throughout.                                 .199        .979
    Satisfaction with Method    Sat32     I am satisfied with the pair programming work
                                          setting.                                             .846        .173
                                Sat33     The pair programming work setting allows me to get
                                          help from my partner when needed.                    .797        .319
                                Sat34     The pair programming work setting makes me feel
                                          like I belong to the development team.               .776        .297
    Total Eigenvalues*                                                                         2.129       1.405
    % of Variance*                                                                             42.58       28.10
    Cumulative %*                                                                              42.58       70.60

    Note: Extraction Method: Maximum Likelihood; Varimax Rotation; *Rotation sums of squared loadings

Table 5.4 Factor Analysis of Faithfulness to Method and Individual Satisfaction with Method

Dependent Variables

There are three dependent variables, or performance outcomes, in Study 2: pair task performance on test cases, pair task performance on code and individual satisfaction with the method. There is not a significant correlation between task performance and satisfaction with the method. A Pearson correlation matrix revealed a -.201 correlation of satisfaction with the method with code for Task II. A -.126 correlation of satisfaction with the method with code was noted for Task III.

Pair task performance represents the dyadic score of each programming team and is the number of correct test cases or correct and complete code segments completed for each experimental programming module. Individual satisfaction with the method is the self-assessed average satisfaction score for each developer in the dyad with the collaborative method (pair programming). For Study 2, an average satisfaction score was computed for each programming dyad.

Prior to applying further statistical analysis, the data were reviewed for appropriateness and the presence of any outliers that might affect the results. The performance results for each dependent variable were reviewed for propriety. A summary of the data collected for each pair, for each dependent variable and by treatment group is shown in Figure 5.11.


    Group           Initial    Test Cases    Code    Satisfaction
    Total           43         23            35      43
    Face-to-Face    23         12            21      23
    Virtual         20         11            14      20

Figure 5.11 Summary of Study 2 Pairs by Dependent Variables and by Group

Pair Task Performance - Test Cases

A review of the data showed that twenty (20) of the forty-three (43) pairs who initially participated in the study failed to complete any test cases. Therefore, these pairs were dropped from the statistical analysis for the dependent variable pair task performance on test cases. Of the remaining twenty-three (23) pairs, nearly half programmed in each developmental setting. Twelve of the dyads worked in a face-to-face environment, while 11 of the dyads worked in a virtual setting. A summary of these findings is found in Table 5.5 and Table 5.6.

                           N     Minimum    Maximum    Mean    Std. Deviation
    Task II Test Cases     23    1          10         7.02    2.741
    Task III Test Cases    23    1          9          3.63    2.356
    Valid N (listwise)     23

Table 5.5 Summary of Pair Task Performance by Task (Test Cases)

    Setting                           Task II Test Cases    Task III Test Cases
    Face-to-Face    Mean              7.33                  4.500
                    N                 12                    12
                    Std. Deviation    2.462                 2.8284
    Virtual         Mean              6.68                  2.682
                    N                 11                    11
                    Std. Deviation    3.101                 1.2303
    Total           Mean              7.02                  3.630
                    N                 23                    23
                    Std. Deviation    2.741                 2.3559

Table 5.6 Summary of Pair Task Performance by Group (Test Cases)
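Grouped descriptive statistics of this kind are one-liners in pandas; a sketch with invented pair scores and hypothetical column names:

    # By-group means, standard deviations and counts, in the style of
    # Table 5.6. All scores below are invented.
    import pandas as pd

    pairs = pd.DataFrame({
        "setting": ["Face-to-Face"] * 3 + ["Virtual"] * 3,
        "task2_test_cases": [9, 7, 6, 8, 5, 7],
        "task3_test_cases": [6, 4, 3, 3, 2, 3],
    })
    print(pairs.groupby("setting").agg(["mean", "std", "count"]))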


Pair Task Performance - Code

A review of the data showed that eight (8) of the forty-three (43) pairs who initially participated in the study failed to complete any code. Therefore, these pairs were dropped from the statistical analysis for the dependent variable pair task performance on code. Fourteen pairs of developers (40%) programmed in a virtual setting while 21 pairs of developers (60%) programmed in a face-to-face developmental setting. A summary of the performance outcomes by group is shown in Table 5.7 and Table 5.8.

                          N     Minimum    Maximum    Mean    Std. Deviation
    Task II Code          35    1          9          3.10    1.814
    Task III Code         35    1          10         2.70    1.836
    Valid N (listwise)    35

Table 5.7 Summary of Pair Task Performance Outcomes (Code)

    Developmental Setting             Task II Code    Task III Code
    Virtual         Mean              2.214           2.214
                    N                 14              14
                    Std. Deviation    1.1217          1.2967
    Face-to-Face    Mean              3.690           3.024
                    N                 21              21
                    Std. Deviation    1.9652          2.0885
    Total           Mean              3.100           2.700
                    N                 35              35
                    Std. Deviation    1.8142          1.8359

Table 5.8 Summary of Pair Task Performance by Group (Code)

Pair Performance - Average Satisfaction with Method

All of the subjects (N = 86) in the study completed the questionnaire on individual satisfaction with the method. In order to measure the dependent variable for satisfaction, an average satisfaction score for each pair was computed for each task. Twenty-three of the pairs (53%) programmed in a face-to-face setting while the remaining 20 pairs of developers (47%) programmed in a virtual developmental setting. A summary of the performance outcomes by group is shown in Table 5.9 and Table 5.10.


                                                      N     Minimum    Maximum    Mean     Std. Deviation
    Task II Average Pair Satisfaction with Method     43    1.6        6.9        4.895    1.0972
    Task III Average Pair Satisfaction with Method    43    3.4        6.5        4.836    .7454
    Valid N (listwise)                                43

Table 5.9 Summary of Average Pair Satisfaction with Method

    Setting                           Task II Average Pair        Task III Average Pair
                                      Satisfaction with Method    Satisfaction with Method
    Face-to-Face    Mean              5.230                       4.893
                    N                 23                          23
                    Std. Deviation    .8396                       .7779
    Virtual         Mean              4.510                       4.770
                    N                 20                          20
                    Std. Deviation    1.2460                      .7205
    Total           Mean              4.895                       4.836
                    N                 43                          43
                    Std. Deviation    1.0972                      .7454

Table 5.10 Summary of Average Pair Satisfaction with Method by Group

Next, the assumptions related to ANCOVA were checked. Four assumptions are to be met for ANCOVA, as follows: 1) the dependent variable is normally distributed for each treatment group; 2) the variance of the dependent variable is constant among the treatment groups; 3) the sum of the errors is zero; and 4) the errors are independent.

The underlying assumption of normality for each dependent variable in the two treatment groups was tested using graphical representations (histograms and normal probability plots). A review of the graphical representations for each dependent task variable (test cases and code) showed severe deviations (bimodal and tri-modal) from normality when plotted by group. A review of the normality plots for average satisfaction did not reflect severe departures from normality.

A number of statistical tests may be used for normality. The Shapiro-Wilk test for normality (recommended if the sample size is less than 2000) also confirmed instances of non-normal data. The null hypothesis of a normality test is that there is not a significant departure from normality. When the p value is more than .05, the test fails to reject the null hypothesis and thus the assumption holds (Mendenhall and Sincich 1996).


Many of the tests for normality for task performance were rejected, reflecting severe departures from normality for test case performance and code performance. The tests for normality for average satisfaction did not reflect severe departures from normality. These results are summarized in Table 5.11, Table 5.12 and Table 5.13.

                           Developmental Setting    Shapiro-Wilk Statistic    df    Sig.
    Task II Test Cases     Virtual                  .839                      11    .031
                           Face-to-Face             .836                      12    .024
    Task III Test Cases    Virtual                  .930                      11    .410
                           Face-to-Face             .870                      12    .065
    This is a lower bound of the true significance.  a Lilliefors Significance Correction

Table 5.11 Shapiro-Wilk Test for Normality for the Dependent Variables (Test Cases)

                           Developmental Setting    Shapiro-Wilk Statistic    df    Sig.
    Task II Code           Virtual                  .862                      28    .002
                           Face-to-Face             .794                      42    .000
    Task III Code          Virtual                  .758                      28    .000
                           Face-to-Face             .728                      42    .000
    a Lilliefors Significance Correction

Table 5.12 Shapiro-Wilk Test for Normality for the Dependent Variables (Code)

                                                 Developmental Setting    Shapiro-Wilk Statistic    df    Sig.
    Task II Average Satisfaction with Method     Virtual                  .976                      20    .869
                                                 Face-to-Face             .953                      23    .337
    Task III Average Satisfaction with Method    Virtual                  .941                      20    .250
                                                 Face-to-Face             .965                      23    .567
    This is a lower bound of the true significance.  a Lilliefors Significance Correction

Table 5.13 Shapiro-Wilk Test for Normality for the Dependent Variables (Average Satisfaction with the Method)
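A per-group Shapiro-Wilk screen of this kind can be sketched as follows; the data frame, its values and column names are invented, and SciPy reports the plain Shapiro-Wilk W and p rather than the SPSS output shown above:

    # Shapiro-Wilk normality test per treatment group, with invented scores.
    import pandas as pd
    from scipy import stats

    code_scores = pd.DataFrame({
        "setting": ["Virtual"] * 5 + ["Face-to-Face"] * 5,
        "task2_code": [2, 1, 3, 2, 2, 5, 3, 1, 6, 4],
    })
    for setting, group in code_scores.groupby("setting"):
        w_stat, p_value = stats.shapiro(group["task2_code"])
        # p < .05 rejects the null hypothesis of normality
        print(f"{setting}: W = {w_stat:.3f}, p = {p_value:.3f}")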


Statistical Test of Main Effects

If the distribution does not appear to be normal and the sample size is small, other statistical procedures that do not require the assumption of normality are to be used. Kruskal-Wallis and the Median Test are non-parametric techniques that may be utilized for a non-parametric MANCOVA. Kruskal-Wallis compares the medians of two or more samples to determine if the samples come from different populations. If the distributions are not normal, then the Kruskal-Wallis test should be used to compare the groups. If a significant difference is found, then there is a difference between the highest and lowest median (Conover 1999).

Data that can be analyzed with Kruskal-Wallis must meet the following criteria: 1) the data points must be independent from each other; 2) the distributions do not have to be normal and the variances do not have to be equal; 3) there are more than five data points per sample; 4) all individuals must be selected at random from the population; 5) all individuals must have an equal chance of being selected; and 6) sample sizes should be as equal as possible, but some differences are allowed. Since these assumptions are met, the Kruskal-Wallis test is appropriate in Study 2.

Kruskal and Wallis (1952) found that for small alpha (less than about 0.10) and for selected small sample sizes (n1, n2 and n3), the true level of significance is smaller than the stated level of significance associated with the chi-squared distribution, which indicates that the chi-squared approximation furnishes a conservative test in many, if not all, situations. The p-value is approximately the probability of a chi-squared random variable with k-1 degrees of freedom exceeding the observed value of T (Conover 1999).

Based on this information, the data were analyzed using non-parametric statistical techniques. The data were analyzed by the Kruskal-Wallis analysis of ranks and the Median Test to test the main effects of the developmental setting on pair task performance. These tests represent the non-parametric equivalents to ANOVA (StatSoft 2003).
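Both procedures are available in common statistical libraries; the sketch below (invented pair scores, hypothetical variable names) shows SciPy's implementations:

    # Kruskal-Wallis and Mood's Median Test on two invented samples.
    from scipy import stats

    face_to_face = [8, 9, 6, 7, 10, 5, 8]   # hypothetical pair scores
    virtual = [7, 6, 8, 5, 9, 6, 7]

    h_stat, p_kw = stats.kruskal(face_to_face, virtual)
    print(f"Kruskal-Wallis: H = {h_stat:.3f}, p = {p_kw:.3f}")

    # ties='below' (SciPy's default) counts scores equal to the grand
    # median in the "<= Median" row, mirroring the frequency tables below.
    chi2, p_med, grand_median, table = stats.median_test(face_to_face, virtual)
    print(f"Median Test: chi-square = {chi2:.3f}, p = {p_med:.3f}, "
          f"grand median = {grand_median}")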


Tests of Hypotheses 1 and 2

The next step in the analysis was to determine if there was a significant difference between the treatment groups in pair task performance (test cases and code) and individual satisfaction with the method. In order to test hypothesis 1a, the Kruskal-Wallis and Median Tests were conducted and interpreted as follows:

For Task II, pair task performance (test cases) between the developmental settings:
Ho: There are no differences between the medians of the samples (median [face-to-face] = median [virtual]).
Ha: There is a difference between the medians of the samples (median [face-to-face] ≠ median [virtual]).

For Task III, pair task performance (test cases) between the developmental settings:
Ho: There are no differences between the medians of the samples (median [face-to-face] = median [virtual]).
Ha: There is a difference between the medians of the samples (median [face-to-face] ≠ median [virtual]).

At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis Test indicates that there is not a significant difference between the medians of the developmental settings for Task II test cases (p value of .777). Kruskal-Wallis assumes equal variances in the groups; therefore, the Median Test is used for further analysis. At an alpha level of .10, the Median Test shows there is not a significant difference between the medians of the developmental settings for Task II test cases (p value of 1.000).

At an alpha level of .10, both the Kruskal-Wallis (p value of .163) and Median Test (p value of .193) indicate that there is not a significant difference between the medians of the developmental settings for Task III test cases. The results of the Kruskal-Wallis and the Median Tests are found in Table 5.14 through Table 5.17.

                           Developmental Setting    N     Mean Rank
    Task II Test Cases     Virtual                  11    11.59
                           Face-to-Face             12    12.38
                           Total                    23
    Task III Test Cases    Virtual                  11    9.95
                           Face-to-Face             12    13.88
                           Total                    23

Table 5.14 Kruskal-Wallis Mean Rank for Pair Task Performance (Test Cases)

                   Task II Test Cases    Task III Test Cases
    Chi-Square     .080                  1.950
    df             1                     1
    Asymp. Sig.    .777                  .163
    a Kruskal-Wallis Test  b Grouping Variable: Developmental Setting

Table 5.15 Kruskal-Wallis Test Statistics for Pair Task Performance (Test Cases)


                           Developmental Setting
                                         Virtual    Face-to-Face
    Task II Test Cases     > Median      4          5
                           <= Median     7          7
    Task III Test Cases    > Median      2          6
                           <= Median     9          6

Table 5.16 Median Test Frequencies by Individual Task Performance (Test Cases)

                   Task II Test Cases    Task III Test Cases
    N              23                    23
    Median         8.00                  3.000
    Exact Sig.     1.000                 .193
    a Grouping Variable: Developmental Setting

Table 5.17 Test Statistics for Median Test for Pair Task Performance (Test Cases)

In order to test hypothesis 1b, the Kruskal-Wallis and Median Tests were conducted and interpreted as follows:

For Task II, pair task performance (code) between the developmental settings:
Ho: There are no differences between the medians of the samples (median [face-to-face] = median [virtual]).
Ha: There is a difference between the medians of the samples (median [face-to-face] ≠ median [virtual]).

For Task III, pair task performance (code) between the developmental settings:
Ho: There are no differences between the medians of the samples (median [face-to-face] = median [virtual]).
Ha: There is a difference between the medians of the samples (median [face-to-face] ≠ median [virtual]).

At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis Test (p value of .011) indicates that there is a significant difference between the medians of the developmental settings for Task II code. The Median Test (p value of .040) also reflects that there is a significant difference between the medians of the developmental settings for Task II code.


At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis Test (p value of .130) indicates that there is not a significant difference between the medians of the developmental settings for Task III code. Kruskal-Wallis assumes equal variances in the groups, and for Task III code the variance assumption is not met; therefore, the Median Test is used for further analysis. At an alpha level of .10, the Median Test (p value of .053) indicates that there is a significant difference between the medians of the developmental settings for Task III code. The results of the Kruskal-Wallis and the Median Tests are found in Table 5.18 through Table 5.21.

                     Developmental Setting    N     Mean Rank
    Task II Code     Virtual                  14    12.68
                     Face-to-Face             21    21.55
                     Total                    35
    Task III Code    Virtual                  14    14.89
                     Face-to-Face             21    20.07
                     Total                    35

Table 5.18 Kruskal-Wallis Mean Rank for Pair Task Performance (Code)

                   Task II Code    Task III Code
    Chi-Square     6.504           2.296
    df             1               1
    Asymp. Sig.    .011            .130
    a Kruskal-Wallis Test  b Grouping Variable: Developmental Setting

Table 5.19 Kruskal-Wallis Test Statistics for Pair Task Performance (Code)

                     Developmental Setting
                                   Virtual    Face-to-Face
    Task II Code     > Median      1          8
                     <= Median     13         13
    Task III Code    > Median      4          13
                     <= Median     10         8

Table 5.20 Median Test Frequencies by Individual Task Performance (Code)


                   Task II Code    Task III Code
    N              35              35
    Median         3.000           2.000
    Chi-Square     4.213           3.736
    df             1               1
    Asymp. Sig.    .040            .053
    a Grouping Variable: Developmental Setting

Table 5.21 Test Statistics for Median Test for Pair Task Performance (Code)

In order to test hypothesis 2, the individual satisfaction-with-the-method scores of the two developers in each pair were averaged. Since there were not severe departures from normality, a one-way ANOVA was conducted to test the main effects of the developmental setting on average individual developer satisfaction. The analysis of variance (ANOVA) results are interpreted as follows:

For Task II, average individual developer satisfaction between the settings:
Ho: μF = μV (F = face-to-face, V = virtual)
Ha: At least two means (μ) are not equal

At an alpha level of .10, the observed significance is .030; therefore, the means of average satisfaction with the method between the two developmental settings are significantly different. The mean average satisfaction with the method for the face-to-face pairs of 5.23 is statistically different from the mean average satisfaction with the method for the virtual pairs of 4.51.

For Task III, average individual developer satisfaction between the developmental settings:
Ho: μF = μV (F = face-to-face, V = virtual)
Ha: At least two means (μ) are not equal

At an alpha level of .10, the observed significance is .594; therefore, the means of average individual developer satisfaction between the two developmental settings are not significantly different. The mean average individual developer score for the face-to-face subjects of 4.89 is not statistically different from the mean average individual developer score for the virtual subjects of 4.77.
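For illustration, the one-way ANOVA main-effect test can be sketched with SciPy (invented pair averages, hypothetical names):

    # One-way ANOVA on average pair satisfaction by setting; data invented.
    from scipy import stats

    f2f_pairs = [5.2, 5.5, 4.9, 5.8, 5.1]        # hypothetical pair averages
    virtual_pairs = [4.3, 4.8, 4.1, 4.9, 4.4]

    f_stat, p_value = stats.f_oneway(f2f_pairs, virtual_pairs)
    print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
    # At alpha = .10, p < .10 indicates the group means differ.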


These results are summarized in Table 5.22 and Table 5.23.

                                                        Sum of Squares    df    Mean Square    F        Sig.
    Task II Average Satisfaction     Between Groups     5.552             1     5.552          5.058    .030
    with Method                      Within Groups      45.007            41    1.098
                                     Total              50.559            42
    Task III Average Satisfaction    Between Groups     .163              1     .163           .289     .594
    with Method                      Within Groups      23.174            41    .565
                                     Total              23.337            42

Table 5.22 One-Way ANOVA - Average Individual Satisfaction with the Method

    Developmental Setting             Task II Average             Task III Average
                                      Satisfaction with Method    Satisfaction with Method
    Virtual         Mean              4.510                       4.770
                    N                 20                          20
                    Std. Deviation    1.2460                      .7205
    Face-to-Face    Mean              5.230                       4.893
                    N                 23                          23
                    Std. Deviation    .8396                       .7779
    Total           Mean              4.895                       4.836
                    N                 43                          43
                    Std. Deviation    1.0972                      .7454

Table 5.23 Comparison of Means - Average Individual Satisfaction with the Method

In addition, hypothesis 2 was tested utilizing non-parametric tests. If the results are the same for the one-way ANOVA and the non-parametric tests, this provides further confirmation of the statistical findings. In order to test hypothesis 2, the Kruskal-Wallis and Median Tests were conducted and interpreted as follows:

For Task II, average individual satisfaction with the method between the developmental settings:
Ho: There are no differences between the medians of the samples (median [face-to-face] = median [virtual]).
Ha: There is a difference between the medians of the samples (median [face-to-face] ≠ median [virtual]).


For Task III, average individual satisfaction with the method between the developmental settings:
Ho: There are no differences between the medians of the samples (median [face-to-face] = median [virtual]).
Ha: There is a difference between the medians of the samples (median [face-to-face] ≠ median [virtual]).

At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis test (p value of .019) indicates that there is a significant difference between the medians of the average satisfaction with the method for Task II. The Median Test (p value of .214), however, does not indicate a significant difference.

At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis test (p value of .212) indicates that there is not a significant difference between the medians of the average satisfaction with the method for Task III. Kruskal-Wallis assumes equal variances in the groups, and for Task III the variance assumption is not met; therefore, the Median Test is used for further analysis. The Median Test (p value of .332) also indicates that there is not a significant difference between the medians of the average satisfaction with the method for Task III. The results of the Kruskal-Wallis and the Median Tests are found in Table 5.24 through Table 5.27.

                                    Developmental Setting    N     Mean Rank
    Task II Average Individual      Virtual                  14    13.04
    Satisfaction with Method        Face-to-Face             21    21.31
                                    Total                    35
    Task III Average Individual     Virtual                  14    15.36
    Satisfaction with Method        Face-to-Face             21    19.76
                                    Total                    35

Table 5.24 Kruskal-Wallis Mean Rank for Individual Average Satisfaction with the Method

                   Task II Average Individual      Task III Average Individual
                   Satisfaction with Method        Satisfaction with Method
    Chi-Square     5.490                           1.557
    df             1                               1
    Asymp. Sig.    .019                            .212
    a Kruskal-Wallis Test  b Grouping Variable: Developmental Setting

Table 5.25 Kruskal-Wallis Test Statistics for Individual Average Satisfaction with the Method


                                     Developmental Setting
                                                   Virtual    Face-to-Face
    Task II Average Satisfaction     > Median      5          12
    with Method                      <= Median     9          9
    Task III Average Satisfaction    > Median      5          11
    with Method                      <= Median     9          10

Table 5.26 Median Test Frequencies for Individual Average Satisfaction with the Method

                   Task II Average Individual      Task III Average Individual
                   Satisfaction with Method        Satisfaction with Method
    N              35                              35
    Median         4.700                           4.600
    Chi-Square     1.544                           .940
    df             1                               1
    Asymp. Sig.    .214                            .332

Table 5.27 Median Test Statistics for Individual Average Satisfaction with the Method

Tests of Covariates (Hypotheses 3 through 7)

Empirical evidence suggests that cognitive ability, the integrative conflict handling style, experience and faithfulness to the method have a strong positive linkage with performance (Jex 2002; Rahim 1988b; Gopal et al. 1992-3). The data were tested for correlation between the covariates and the dependent variables.

For Task II, moderate linear correlations (Pearson Correlation Matrix) were noted as follows: faithfulness and test cases (.329); satisfaction with method and test cases (.378); years of IT experience and code (.325); and integrative style and satisfaction (.360). A negative correlation is shown between years of IT experience and satisfaction with the method (-.442).

For Task III, moderate linear correlations (Pearson Correlation Matrix) were also noted as follows: cognitive ability (Wonderlic score) and test cases (.258); cognitive ability and code (.268); cognitive ability and years of IT experience (.419); and integrative style and satisfaction (.360). A negative correlation is shown between years of IT experience and satisfaction with the method (-.442). These results are shown in Table 5.28 through Table 5.31.
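A correlation matrix of this kind is produced directly from the pair-level data; a sketch with invented values and hypothetical column names:

    # Full Pearson correlation matrix over pair-level measures; all
    # values below are invented.
    import pandas as pd

    data = pd.DataFrame({
        "test_cases": [8, 6, 9, 4, 7, 5],
        "faithfulness": [4.5, 3.8, 4.9, 3.2, 4.1, 3.6],
        "cognitive_ability": [31, 27, 35, 22, 29, 26],
        "it_experience": [2, 0, 5, 1, 3, 0],
    })
    print(data.corr(method="pearson").round(3))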


Pearson correlations, with 2-tailed significance in parentheses (N = 23 for all cells):

                                    Task II          Faithfulness     Cognitive       Integrative      Years of IT     Task II Satisfaction
                                    Test Cases       Task II          Ability         Conflict Style   Experience      with Method
    Task II Test Cases              1                .329 (.126)      .077 (.726)     -.355 (.097)     -.257 (.236)    .378 (.076)
    Faithfulness Task II            .329 (.126)      1                -.105 (.634)    .139 (.527)      .104 (.636)     .285 (.187)
    Cognitive Ability               .077 (.726)      -.105 (.634)     1               -.127 (.564)     -.053 (.810)    .308 (.153)
    Integrative Conflict Style      -.355 (.097)     .139 (.527)      -.127 (.564)    1                .106 (.629)     .192 (.379)
    Years of IT Experience          -.257 (.236)     .104 (.636)      -.053 (.810)    .106 (.629)      1               -.050 (.822)
    Task II Average Satisfaction    .378 (.076)      .285 (.187)      .308 (.153)     .192 (.379)      -.050 (.822)    1
    with Method

Table 5.28 Pearson Correlation Matrix - Test Cases Task II


Pearson correlations, with 2-tailed significance in parentheses (N = 23 for all cells):

                                     Task III         Faithfulness     Cognitive       Integrative      Years of IT     Task III Satisfaction
                                     Test Cases       Task III         Ability         Conflict Style   Experience      with Method
    Task III Test Cases              1                .176 (.421)      .258 (.235)     .049 (.826)      -.059 (.788)    -.201 (.357)
    Faithfulness Task III            .176 (.421)      1                -.082 (.710)    .074 (.738)      -.017 (.937)    .073 (.741)
    Cognitive Ability                .258 (.235)      -.082 (.710)     1               -.127 (.564)     -.053 (.810)    .025 (.910)
    Integrative Conflict Style       .049 (.826)      .074 (.738)      -.127 (.564)    1                .106 (.629)     .278 (.199)
    Years of IT Experience           -.059 (.788)     -.017 (.937)     -.053 (.810)    .106 (.629)      1               -.229 (.294)
    Task III Average Satisfaction    -.201 (.357)     .073 (.741)      .025 (.910)     .278 (.199)      -.229 (.294)    1
    with Method

Table 5.29 Pearson Correlation Matrix - Test Cases Task III


Pearson correlations, with 2-tailed significance in parentheses (N = 35 for all cells):

                                    Task II          Faithfulness     Cognitive       Integrative      Years of IT       Task II Satisfaction
                                    Code             Task II          Ability         Conflict Style   Experience        with Method
    Task II Code                    1                -.091 (.603)     .177 (.308)     .035 (.840)      .325 (.057)       -.126 (.471)
    Faithfulness Task II            -.091 (.603)     1                -.131 (.454)    .019 (.915)      .002 (.992)       .159 (.362)
    Cognitive Ability               .177 (.308)      -.131 (.454)     1               -.177 (.308)     .229 (.185)       -.183 (.293)
    Integrative Conflict Style      .035 (.840)      .019 (.915)      -.177 (.308)    1                -.012 (.944)      .360* (.034)
    Years of IT Experience          .325 (.057)      .002 (.992)      .229 (.185)     -.012 (.944)     1                 -.442** (.008)
    Task II Average Satisfaction    -.126 (.471)     .159 (.362)      -.183 (.293)    .360* (.034)     -.442** (.008)    1
    with Method

    * Correlation is significant at the 0.05 level (2-tailed).  ** Correlation is significant at the 0.01 level (2-tailed).

Table 5.30 Pearson Correlation Matrix - Code Task II


Pearson correlations, with 2-tailed significance in parentheses (N = 35 for all cells):

                                     Task III         Faithfulness     Cognitive       Integrative      Years of IT       Task III Satisfaction
                                     Code             Task III         Ability         Conflict Style   Experience        with Method
    Task III Code                    1                .211 (.224)      .268 (.120)     .082 (.639)      .419* (.012)      .006 (.972)
    Faithfulness Task III            .211 (.224)      1                -.157 (.368)    .019 (.912)      -.110 (.530)      .141 (.420)
    Cognitive Ability                .268 (.120)      -.157 (.368)     1               -.177 (.308)     .229 (.185)       -.183 (.293)
    Integrative Conflict Style       .082 (.639)      .019 (.912)      -.177 (.308)    1                -.012 (.944)      .360* (.034)
    Years of IT Experience           .419* (.012)     -.110 (.530)     .229 (.185)     -.012 (.944)     1                 -.442** (.008)
    Task III Average Satisfaction    .006 (.972)      .141 (.420)      -.183 (.293)    .360* (.034)     -.442** (.008)    1
    with Method

    * Correlation is significant at the 0.05 level (2-tailed).  ** Correlation is significant at the 0.01 level (2-tailed).

Table 5.31 Pearson Correlation Matrix - Code Task III


Because the data are not normally distributed and the Pearson Correlation Matrices indicate that there may be a moderate correlation between cognitive ability and code, non-parametric testing was applied for testing the covariates (Hypotheses 3 through 7).

In Study 2 we view the impact of pairing of the individual characteristics and perceived processes during development as compensatory to the dyad. Therefore, a high score for each measure was computed for each covariate. Perceived faithfulness scores of 4.5 or higher were considered high, based on a 5-point Likert scale. These scores represent the upper third of the possible perceived faithfulness score. Wonderlic scores (cognitive ability) of 30 or higher were considered high. This is based on the mean score of 29 for the population of programmers (Wonderlic 1999). Integrative conflict handling style scores in the high range were scores of 4.5 or higher, based on a 5-point Likert scale. High IT experience was based on experience levels of five (5) years or greater.

Hypothesis 3 deals with comparisons between the groups for developers with high levels of perceived faithfulness to the collaborative method (pair programming). Developers with high faithfulness during development are those developers in the pair who had a score of 4.5 or higher (Likert scale = 5). In order to test hypothesis 3a, non-parametric statistical tests were conducted and interpreted. At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis test (p value of .262) indicates that there is not a significant difference in task performance between the two groups for Task II test cases. Kruskal-Wallis assumes equal variances in the groups, and for Task II test cases the variance assumption is not met; therefore, the Median Test is used for further analysis. The Median Test (p value of .565) indicates that there is not a significant difference between the medians of the two groups for Task II test cases.

At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis test (p value of .792) indicates that there is not a significant difference in task performance between the two groups for Task III test cases. Kruskal-Wallis assumes equal variances in the groups, and for Task III test cases the variance assumption is not met; therefore, the Median Test is used for further analysis. The Median Test (p value of 1.00) also indicates that there is not a significant difference between the medians of the two groups for Task III test cases. The results of the Kruskal-Wallis and the Median Tests are found in Table 5.32 through Table 5.39.
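The compensatory "high within the dyad" coding just described can be sketched as follows; the member scores and column names are invented for illustration:

    # Score each dyad by its higher-scoring member, then flag it "high"
    # against the stated cutoffs. All data below are invented.
    import pandas as pd

    members = pd.DataFrame({
        "pair_id": [1, 1, 2, 2, 3, 3],
        "wonderlic": [34, 25, 28, 27, 31, 30],
        "faithfulness": [4.7, 4.0, 3.9, 4.2, 4.6, 4.8],
        "it_years": [6, 1, 0, 2, 5, 7],
    })
    pair_max = members.groupby("pair_id").max()  # compensatory: higher member
    high = pd.DataFrame({
        "high_cognitive": pair_max["wonderlic"] >= 30,         # Wonderlic cutoff
        "high_faithfulness": pair_max["faithfulness"] >= 4.5,  # Likert cutoff
        "high_experience": pair_max["it_years"] >= 5,          # years cutoff
    })
    print(high)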


                           Between-Group Faithfulness Task II                        N     Mean Rank
    Task II Test Cases     Virtual - High Faithfulness Task II                       5     5.50
                           Face-to-Face - Average to Low Faithfulness Task II        8     7.94
                           Total                                                     13

Table 5.32 Kruskal-Wallis Mean Rank for Hypothesis 3 (Test Cases)

                   Task II Test Cases
    Chi-Square     1.257
    df             1
    Asymp. Sig.    .262
    a Kruskal-Wallis Test  b Grouping Variable: Between-Group Faithfulness Task II

Table 5.33 Kruskal-Wallis Test Statistics for Hypothesis 3 (Test Cases)

                                        Virtual - High           Face-to-Face - Average to
                                        Faithfulness Task II     Low Faithfulness Task II
    Task II Test Cases    > Median      1                        4
                          <= Median     4                        4

Table 5.34 Median Test Frequencies for Hypothesis 3 (Test Cases)

                  Task II Test Cases
    N             13
    Median        8.00
    Exact Sig.    .565
    a Grouping Variable: Between-Group Faithfulness Task II

Table 5.35 Test Statistics for Median Test for Hypothesis 3 (Test Cases)


                            Between-Group Faithfulness Task III               N    Mean Rank
    Task III Test Cases     Virtual - High Faithfulness Task III              3    4.67
                            Face-to-Face - High Faithfulness Task III         6    5.17
                            Total                                             9

Table 5.36 Kruskal-Wallis Mean Rank for Hypothesis 3 (Test Cases)

                   Task III Test Cases
    Chi-Square     .070
    df             1
    Asymp. Sig.    .792
    a Kruskal-Wallis Test  b Grouping Variable: Between-Group Faithfulness Task III

Table 5.37 Kruskal-Wallis Test Statistics for Hypothesis 3 (Test Cases)

                                         Virtual - High            Face-to-Face - High
                                         Faithfulness Task III     Faithfulness Task III
    Task III Test Cases    > Median      1                         3
                           <= Median     2                         3

Table 5.38 Median Test Frequencies for Hypothesis 3 (Test Cases)

                  Task III Test Cases
    N             9
    Median        3.000
    Exact Sig.    1.000
    a Grouping Variable: Between-Group Faithfulness Task III

Table 5.39 Test Statistics for Median Test for Hypothesis 3 (Test Cases)
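With cell counts this small, the chi-squared approximation behind the Median Test is unreliable, which is presumably why exact significance values are reported. One plausible way to obtain an exact p-value for a two-group median split is Fisher's exact test on the above/below-median 2x2 table; applied to the Task III counts of Table 5.38, it reproduces the exact significance of 1.000 reported in Table 5.39:

    # Fisher's exact test on the above/below-median contingency table.
    from scipy import stats

    table = [[1, 3],   # > median:  Virtual, Face-to-Face
             [2, 3]]   # <= median: Virtual, Face-to-Face
    odds_ratio, p_exact = stats.fisher_exact(table)
    print(f"Exact p = {p_exact:.3f}")  # 1.000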


In order to test hypothesis 3b, non-parametric statistical tests were conducted and interpreted. There are not enough valid cases to perform the Kruskal-Wallis Test or the Median Test for Task II code; therefore, no statistics are computed. At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis test (p value of .687) indicates that there is not a significant difference in task performance between the two groups for Task III code. Kruskal-Wallis assumes equal variances in the groups, and for Task III code the variance assumption is not met; therefore, the Median Test was conducted. However, there are not enough valid cases to perform the Median Test for Task III code either, so no statistics are computed. The results of the Kruskal-Wallis Test are found in Table 5.40 and Table 5.41.

                      Between-Group Faithfulness Task III               N    Mean Rank
    Task III Code     Virtual - High Faithfulness Task III              1    4.00
                      Face-to-Face - High Faithfulness Task III         8    5.13
                      Total                                             9

Table 5.40 Kruskal-Wallis Mean Rank for Hypothesis 3 (Code)

                   Task III Code
    Chi-Square     .162
    df             1
    Asymp. Sig.    .687
    a Kruskal-Wallis Test  b Grouping Variable: Between-Group Faithfulness Task III

Table 5.41 Kruskal-Wallis Test Statistics for Hypothesis 3 (Code)

Hypothesis 4 deals with comparisons between the groups for developers with high perceived task conflict during development. As previously mentioned, there was little variation in the amount of conflict reported by the participants in Study 2. Therefore, hypothesis 4 is excluded from statistical analysis.

Hypothesis 5 deals with comparisons between the groups for developers with high cognitive ability. Developers with Wonderlic scores of 30 or higher were considered developers with high cognitive ability. In order to test hypothesis 5a, non-parametric statistical tests were conducted and interpreted. At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis test (p value of .630) indicates that there is not a significant difference in task performance between the two groups for Task II test cases. Kruskal-Wallis assumes equal variances in the groups, and for Task II test cases the variance assumption is not met; therefore, the Median Test is used for further analysis.


The Median Test (p value of 1.00) also indicates that there is not a significant difference between the medians of the two groups for Task II test cases.

At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis test (p value of .204) indicates that there is not a significant difference in task performance between the two groups for Task III test cases. Kruskal-Wallis assumes equal variances in the groups, and for Task III test cases the variance assumption is not met; therefore, the Median Test is used for further analysis. The Median Test (p value of .400) also indicates that there is not a significant difference between the medians of the two groups for Task III test cases. The results of the Kruskal-Wallis and the Median Tests are found in Table 5.42 through Table 5.45.

                           High / Average to Low Cognitive Ability    N     Mean Rank
    Task II Test Cases     High Cognitive Ability                     14    12.54
                           Average to Low Cognitive Ability           9     11.17
                           Total                                      23
    Task III Test Cases    High Cognitive Ability                     14    13.43
                           Average to Low Cognitive Ability           9     9.78
                           Total                                      23

Table 5.42 Kruskal-Wallis Mean Rank for Hypothesis 5 (Test Cases)

                   Task II Test Cases    Task III Test Cases
    Chi-Square     .232                  1.614
    df             1                     1
    Asymp. Sig.    .630                  .204
    a Kruskal-Wallis Test  b Grouping Variable: High / Average to Low Cognitive Ability

Table 5.43 Kruskal-Wallis Test Statistics for Hypothesis 5 (Test Cases)

                                         High Cognitive    Average or Low
                                         Ability            Cognitive Ability
    Task II Test Cases     > Median      6                  3
                           <= Median     8                  6
    Task III Test Cases    > Median      6                  2
                           <= Median     8                  7

Table 5.44 Median Test Frequencies for Hypothesis 5 (Test Cases)


                  Task II Test Cases    Task III Test Cases
    N             23                    23
    Median        8.00                  3.000
    Exact Sig.    1.000                 .400
    a Grouping Variable: High / Average to Low Cognitive Ability

Table 5.45 Test Statistics for Median Test for Hypothesis 5 (Test Cases)

In order to test hypothesis 5b, non-parametric statistical tests were conducted and interpreted. At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis test (p value of .297) indicates that there is not a significant difference in task performance between the two groups for Task II code. Kruskal-Wallis assumes equal variances in the groups, and for Task II code the variance assumption is not met; therefore, the Median Test is used for further analysis. The Median Test (p value of .025) indicates that there is a significant difference between the medians of the two groups for Task II code.

At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis test (p value of .329) indicates that there is not a significant difference in task performance between the two groups for Task III code. Kruskal-Wallis assumes equal variances in the groups, and for Task III code the variance assumption is not met; therefore, the Median Test is used for further analysis. The Median Test (p value of .404) also indicates that there is not a significant difference between the medians of the two groups for Task III code. The results of the Kruskal-Wallis and the Median Tests are found in Table 5.46 through Table 5.49.

                      High / Average to Below Cognitive Ability    N     Mean Rank
    Task II Code      High Cognitive Ability                       16    19.94
                      Average or Below Cognitive Ability           19    16.37
                      Total                                        35
    Task III Code     High Cognitive Ability                       16    19.78
                      Average or Below Cognitive Ability           19    16.50
                      Total                                        35

Table 5.46 Kruskal-Wallis Mean Rank for Hypothesis 5 (Code)


                   Task II Code    Task III Code
    Chi-Square     1.089           .953
    df             1               1
    Asymp. Sig.    .297            .329
    a Kruskal-Wallis Test  b Grouping Variable: High / Average to Below Cognitive Ability

Table 5.47 Kruskal-Wallis Test Statistics for Hypothesis 5 (Code)

                                   High Cognitive    Average or Below
                                   Ability            Cognitive Ability
    Task II Code     > Median      7                  2
                     <= Median     9                  17
    Task III Code    > Median      9                  8
                     <= Median     7                  11

Table 5.48 Median Test Frequencies for Hypothesis 5 (Code)

                   Task II Code    Task III Code
    N              35              35
    Median         3.000           2.000
    Chi-Square     5.019           .696
    df             1               1
    Asymp. Sig.    .025            .404
    a Grouping Variable: High / Average to Below Cognitive Ability

Table 5.49 Test Statistics for Median Test for Hypothesis 5 (Code)

Hypothesis 6 deals with comparisons between the groups for developers with high self-assessed integrating conflict management styles. Developers with high integrative styles are those developers in the pair who had a score of 4.5 or higher (Likert scale = 5). In order to test hypothesis 6a, non-parametric statistical tests were conducted and interpreted. At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis test (p value of .054) indicates that there is a significant difference in task performance between the two groups for Task II test cases.


Kruskal-Wallis assumes equal variances in the groups; therefore, the Median Test is used for further analysis. The Median Test (p value of .383) indicates that there is not a significant difference between the medians of the two groups for Task II test cases.

At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis test (p value of .589) indicates that there is not a significant difference in task performance between the two groups for Task III test cases. Kruskal-Wallis assumes equal variances in the groups, and for Task III test cases the variance assumption is not met; therefore, the Median Test is used for further analysis. The Median Test (p value of .657) also indicates that there is not a significant difference between the medians of the two groups for Task III test cases. The results of the Kruskal-Wallis and the Median Tests are found in Table 5.50 through Table 5.53.

                           High / Average to Low Integrative Conflict Style        N     Mean Rank
    Task II Test Cases     High Integrative Conflict Handling Style                14    9.86
                           Average to Low Integrative Conflict Handling Style      9     15.33
                           Total                                                   23
    Task III Test Cases    High Integrative Conflict Handling Style                14    11.39
                           Average to Low Integrative Conflict Handling Style      9     12.94
                           Total                                                   23

Table 5.50 Kruskal-Wallis Mean Rank for Hypothesis 6 (Test Cases)

                   Task II Test Cases    Task III Test Cases
    Chi-Square     3.718                 .292
    df             1                     1
    Asymp. Sig.    .054                  .589
    a Kruskal-Wallis Test  b Grouping Variable: High / Average to Low Integrative Conflict Style

Table 5.51 Kruskal-Wallis Test Statistics for Hypothesis 6 (Test Cases)


                                         High Integrative Conflict    Average to Low Integrative
                                         Handling Style                Conflict Handling Style
    Task II Test Cases     > Median      4                             5
                           <= Median     10                            4
    Task III Test Cases    > Median      4                             4
                           <= Median     10                            5

Table 5.52 Median Test Frequencies for Hypothesis 6 (Test Cases)

                  Task II Test Cases    Task III Test Cases
    N             23                    23
    Median        8.00                  3.000
    Exact Sig.    .383                  .657
    a Grouping Variable: High / Average to Low Integrative Conflict Style

Table 5.53 Test Statistics for Median Test for Hypothesis 6 (Test Cases)

In order to test hypothesis 6b, non-parametric statistical tests were conducted and interpreted. At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis test (p value of .460) indicates that there is not a significant difference in task performance between the two groups for Task II code. Kruskal-Wallis assumes equal variances in the groups, and for Task II code the variance assumption is not met; therefore, the Median Test is used for further analysis. The Median Test (p value of .774) also indicates that there is not a significant difference between the medians of the two groups for Task II code.

At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis test (p value of .597) indicates that there is not a significant difference in task performance between the two groups for Task III code. Kruskal-Wallis assumes equal variances in the groups, and for Task III code the variance assumption is not met; therefore, the Median Test is used for further analysis. The Median Test (p value of .238) also indicates that there is not a significant difference between the medians of the two groups for Task III code. The results of the Kruskal-Wallis and the Median Tests are found in Table 5.54 through Table 5.57.


                      High / Average to Low Integrative Conflict Style        N     Mean Rank
    Task II Code      High Integrative Conflict Handling Style                17    19.29
                      Average to Low Integrative Conflict Handling Style      18    16.78
                      Total                                                   35
    Task III Code     High Integrative Conflict Handling Style                17    18.91
                      Average to Low Integrative Conflict Handling Style      18    17.14
                      Total                                                   35

Table 5.54 Kruskal-Wallis Mean Rank for Hypothesis 6 (Code)

                   Task II Code    Task III Code
    Chi-Square     .545            .280
    df             1               1
    Asymp. Sig.    .460            .597
    a Kruskal-Wallis Test  b Grouping Variable: High / Average to Low Integrative Conflict Style

Table 5.55 Kruskal-Wallis Test Statistics for Hypothesis 6 (Code)

                                   High Integrative Conflict    Average to Low Integrative
                                   Handling Style                Conflict Handling Style
    Task II Code     > Median      4                             5
                     <= Median     13                            13
    Task III Code    > Median      10                            7
                     <= Median     7                             11

Table 5.56 Median Test Frequencies for Hypothesis 6 (Code)


                   Task II Code    Task III Code
    N              35              35
    Median         3.000           2.000
    Chi-Square     .083            1.391
    df             1               1
    Asymp. Sig.    .774            .238
    a Grouping Variable: High / Average to Low Integrative Conflict Style

Table 5.57 Test Statistics for Median Test for Hypothesis 6 (Code)

Hypothesis 7 deals with comparisons between the groups for developers with high IT experience. Developers with 5 or more years of IT experience are considered to be developers with high IT experience. In order to test hypothesis 7a, non-parametric testing was conducted. At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis test (p value of .040) indicates that there is a significant difference in task performance between the two groups for Task II test cases. Kruskal-Wallis assumes equal variances in the groups. The Median Test (p value of .102) indicates that there is not a significant difference between the medians of the two groups for Task II test cases.

At an alpha level of .10 (p value of less than .10), the Kruskal-Wallis test (p value of .200) indicates that there is not a significant difference in task performance between the two groups for Task III test cases. Kruskal-Wallis assumes equal variances in the groups, and for Task III test cases the variance assumption is not met; therefore, the Median Test is used for further analysis. The Median Test (p value of .379) also indicates that there is not a significant difference between the medians of the two groups for Task III test cases. The results of the Kruskal-Wallis and the Median Tests are found in Table 5.58 through Table 5.61.

                           High / Average to Low IT Experience    N     Mean Rank
    Task II Test Cases     High IT Experience                     13    9.50
                           Average to Low IT Experience           10    15.25
                           Total                                  23
    Task III Test Cases    High IT Experience                     13    13.58
                           Average to Low IT Experience           10    9.95
                           Total                                  23

Table 5.58 Kruskal-Wallis Mean Rank for Hypothesis 7 (Test Cases)


                   Task II Test Cases    Task III Test Cases
    Chi-Square     4.230                 1.644
    df             1                     1
    Asymp. Sig.    .040                  .200
    a Kruskal-Wallis Test  b Grouping Variable: High / Average to Low IT Experience

Table 5.59 Kruskal-Wallis Test Statistics for Hypothesis 7 (Test Cases)

                                         High IT        Average to Low
                                         Experience     IT Experience
    Task II Test Cases     > Median      3              6
                           <= Median     10             4
    Task III Test Cases    > Median      6              2
                           <= Median     7              8

Table 5.60 Median Test Frequencies for Hypothesis 7 (Test Cases)

                  Task II Test Cases    Task III Test Cases
    N             23                    23
    Median        8.00                  3.000
    Exact Sig.    .102                  .379
    a Grouping Variable: High / Average to Low IT Experience

Table 5.61 Test Statistics for Median Test for Hypothesis 7 (Test Cases)

In order to test hypothesis 7b, non-parametric testing was conducted. At an alpha level of .10, the results of these statistical tests indicate that there is a significant difference in task performance between the groups for Task II code. For Task II, the p values are as follows: Kruskal-Wallis (p value = .043) and Median Test (p value = .070).

At an alpha level of .10, the results of these statistical tests indicate that there is a significant difference in task performance between the groups for Task III code. For Task III, the p values are as follows: Kruskal-Wallis (p value = .004) and Median Test (p value = .008). The results of the Kruskal-Wallis and the Median Tests are found in Table 5.62 through Table 5.65.


                      High / Average to Low IT Experience       N     Mean Rank
    Task II Code      High IT Experience                        11    23.09
                      Average / Below Average IT Experience     24    15.67
                      Total                                     35
    Task III Code     High IT Experience                        11    25.05
                      Average / Below Average IT Experience     24    14.77
                      Total                                     35

Table 5.62 Kruskal-Wallis Mean Rank for Hypothesis 7 (Code)

                   Task II Code    Task III Code
    Chi-Square     4.093           8.114
    df             1               1
    Asymp. Sig.    .043            .004
    a Kruskal-Wallis Test  b Grouping Variable: High / Average to Low IT Experience

Table 5.63 Kruskal-Wallis Test Statistics for Hypothesis 7 (Code)

                                   High IT        Average / Below Average
                                   Experience     IT Experience
    Task II Code     > Median      5              4
                     <= Median     6              20
    Task III Code    > Median      9              8
                     <= Median     2              16

Table 5.64 Median Test Frequencies for Hypothesis 7 (Code)


                   Task II Code    Task III Code
    N              35              35
    Median         3.000           2.000
    Chi-Square     3.272           7.098
    df             1               1
    Asymp. Sig.    .070            .008
    a Grouping Variable: High / Average to Low IT Experience

Table 5.65 Test Statistics for Median Test for Hypothesis 7 (Code)

Results of Study 2

The results of Study 2 suggest that the developmental setting significantly impacts collaborative programming (pair programming) outcomes. This research demonstrates that there is a significant difference in code performance between settings. Additionally, developers working in a face-to-face setting have significantly higher satisfaction with the collaborative method (pair programming). While collaborative programming is possible in a virtual setting, both pair code performance and individual developer satisfaction with the method are substantially lower for developers working virtually.

As previously stated, in Study 2 we view the impact of covariates (processes during development and individual developer differences within the dyad) as compensatory. The results of Study 2 suggest that high levels of perceived faithfulness to the method do not significantly impact pair task performance. The impact of conflict during development was not examined in Study 2 due to the fact that there was little variation in the perceived episodes of conflict reported by the participants.

Study 2 also investigates the impact of individual developer characteristics on task performance, in both face-to-face and virtual developmental settings. The findings of Study 2 suggest that for Task II, high integrative conflict management styles and high cognitive ability positively influence pair test case performance. Additionally, for Task II, high cognitive ability and high IT experience are positively linked to pair code performance. And finally, high IT experience positively influences pair code performance for Task III. A summary of the study hypotheses and results is found in Table 5.66.


H1: Developers working in a face-to-face developmental setting will have higher levels of pair task performance than developers working in a virtual developmental setting.
  H1a: Developers working in a face-to-face developmental setting will have higher levels of correct test cases than developers working in a virtual developmental setting. -- Not Supported
  H1b: Developers working in a face-to-face developmental setting will have higher levels of correct code than developers working in a virtual developmental setting. -- Supported (Task II & Task III)

H2: Developers working in a face-to-face developmental setting will have higher levels of individual satisfaction with the method than developers working in a virtual developmental setting. -- Supported (Task II)

H3: Developers working in a face-to-face developmental setting will have higher levels of perceived faithfulness to the method than developers working in a virtual developmental setting, and higher levels of perceived faithfulness will be related to higher levels of pair task performance.
  H3a: Developers working in a face-to-face developmental setting will have higher levels of correct test cases than developers working in a virtual developmental setting. -- Not Supported
  H3b: Developers working in a face-to-face developmental setting will have higher levels of correct code than developers working in a virtual developmental setting. -- Not enough valid cases; thus, unable to compute statistics

H4: Developers working in a virtual developmental setting will have higher levels of perceived conflict during collaborative programming than developers working in a face-to-face developmental setting, and higher levels of perceived conflict will be related to lower levels of task performance.
  H4a: Developers working in a virtual developmental setting will have higher levels of perceived conflict during collaborative programming than developers working in a face-to-face developmental setting, and higher levels of perceived conflict will be related to lower levels of test case performance. -- Low variation in task conflict; no statistics computed
  H4b: Developers working in a virtual developmental setting will have higher levels of perceived conflict during collaborative programming than developers working in a face-to-face developmental setting, and higher levels of perceived conflict will be related to lower levels of code performance. -- Low variation in task conflict; no statistics computed

H5: When developer dyad cognitive ability is determined by the higher cognitive ability individual in the dyad, developer cognitive ability and developmental setting will interact to impact pair task performance.
  H5a: Developer cognitive ability and developmental setting will interact to impact test case performance. -- Not Supported
  H5b: Developer cognitive ability and developmental setting will interact to impact code performance. -- Supported (Task II)

H6: When developer dyad conflict handling style is determined by the higher integrative conflict management style individual in the dyad, developer integrative conflict management style and developmental setting will interact to impact pair task performance.
  H6a: Developer integrative conflict management style will interact to impact test case performance. -- Supported (Task II)
  H6b: Developer integrative conflict management style will interact to impact code performance. -- Not Supported

H7: When developer dyad IT experience is determined by the higher IT experience individual in the dyad, developer IT experience and developmental setting will interact to impact pair task performance.
  H7a: Developer IT experience and developmental setting will interact to impact test case performance. -- Supported (Task II)
  H7b: Developer IT experience and developmental setting will interact to impact code performance. -- Supported (Task II & Task III)

Table 5.66 Summary of Study 2 Hypotheses and Results


Limitations

There are a number of inherent limitations to this study. Although laboratory experiments allow for greater precision in the control and measurement of subjects, they lack generalizability to the field. Participants used in the study were students, and sixty percent of the subjects had less than one year of IT experience or none at all. Additionally, participants were allotted short periods of time to complete the experimental programming tasks, which may not be fully representative of programming projects used in industry.

It should be noted that the collaborative software utilized during the experimental tasks may have impacted virtual performance outcomes. What impact, if any, this phenomenon may have had on performance outcomes is unclear. Finally, since subjects were students working in a laboratory setting, their behavior may not be representative of their behavior in a non-contrived work environment.


Chapter Six
Study 3

Chapter Six describes a laboratory experiment that tests how variations in the collaborative programming method (pair programming) impact performance outcomes. An overview of the study is presented, followed by a discussion of the research models, research hypotheses, data collection, data analysis and study results.

Overview

The primary focus of Study 3 is to investigate how variations, or adaptations, in the developmental method impact performance outcomes for collaborative programming (pair programming). Specifically, we explore the impact of structured problem solving (test cases) and unstructured problem solving (brainstorming) development methods on performance outcomes. We also investigate the impact of collaboration on performance outcomes. In addition, we explore how these variations in the developmental method impact the processes used during development. And finally, we continue to explore the impact of individual developer characteristics on performance outcomes.

High Level Research Model

The high level research model used in this dissertation is shown in Figure 6.1.

Figure 6.1 High Level Research Model
[Model components: Individual Characteristics (cognitive ability, conflict handling style); Processes During Development (faithfulness to method, task conflict, distributed cognition); Developmental Setting (face-to-face, virtual); Collaborative Method (pair programming, variations of pair programming); Performance Outcomes (pair task performance, individual task performance, individual satisfaction with method).]


The underlying premise of Study 3 is that variations, or adaptations, in the collaborative developmental method (pair programming) will impact performance outcomes. As described in Chapter 2, system development methods are often adapted, or varied, by the developers who use them in organizational settings. Thus the primary focus of Study 3 is to investigate the impact of variations to the method on individual performance as follows: structured problem solving (the use of test cases) versus unstructured problem solving (brainstorming), and collaborative development (pairs of developers) versus non-collaborative development (developers working alone).

In Study 3 a laboratory experiment is conducted in which two performance outcomes are studied: individual task performance (measured by the correctness of the pseudocode) and individual satisfaction with the method. Additionally, we explore a number of other factors that are believed to impact successful programming outcomes. These include the impact of faithfulness to the programming method and individual developer differences (cognitive ability and years of IT experience). The reasoning behind the selection of these constructs and variables, as well as details on these measures, is provided in Chapter Three.

Study 3 Research Models

Each study contained in this dissertation focuses on a different part of the high-level research model shown in Figure 6.1. Two research models are utilized to study the variables and constructs in Study 3. The primary research model utilized in Study 3 (Figure 6.2) focuses on the main effects of the manipulation of the developmental method on performance outcomes. The variations in developmental method are as follows: 1) developers work collaboratively utilizing a structured problem solving method (test cases) and then write code alone; 2) developers work alone utilizing a structured problem solving method (test cases) and then write code alone; and 3) developers work collaboratively utilizing an unstructured problem solving method (brainstorming) and then write code alone.

Figure 6.2 Study 3 Research Model: Main Effects (H1, H2, H3, H4)
[Model components: Developmental Method (manipulated: collaborative structured problem solving, non-collaborative structured problem solving, collaborative unstructured problem solving); Performance Outcomes (individual task performance, individual satisfaction with method).]


In Study 3 we also investigate the impact of processes used during development (faithfulness to the method) and individual developer differences (cognitive ability and years of IT experience) when variations take place in the developmental method. The research model used to explore the mediating effect of processes during development and the moderating effect of individual differences is shown in Figure 6.3.

Figure 6.3 Study 3 Research Model: Mediating & Moderating Effects (H5, H6, H7; H8, H9, H10, H11, H12, H13)
[Model components: Developmental Method (manipulated: collaborative structured problem solving, non-collaborative structured problem solving, collaborative unstructured problem solving); Processes During Development (faithfulness to method); Individual Characteristics (cognitive ability, years of IT experience); Performance Outcomes (individual task performance, individual satisfaction with method).]

Research Questions and Hypotheses

The primary research question addressed in Study 3 is as follows: Within the context of collaborative programming, do variations in the developmental method impact related performance outcomes and the processes used during collaborative programming? The primary focus of this research question relates to the issue of variations, or adaptations, of the collaborative developmental method (pair programming) and how to predict individual task performance and individual satisfaction with the method.

As previously mentioned, in Study 3 the collaborative programming (pair programming) method is manipulated in three ways. The research hypotheses in Study 3 provide a means to test the degree to which these variations in the developmental method and collaboration impact individual task performance and individual satisfaction with the method. A review of the practitioner literature suggests that organizations vary, or adapt, the standard collaborative method (pair programming) in a number of ways. To date, minimal research in the academic literature has explored this issue of variation in method as it relates to collaborative programming (pair programming). However, Adaptive Structuration Theory (AST) has found that how technology is appropriated impacts performance outcomes (Poole and DeSanctis 1989, 1990; Gopal et al. 1992-3).


Collaboration is widely used today in organizational settings and is an essential part of the collaborative programming method (pair programming). In prior research, collaboration generally focuses on the process people use when working together in a group to solve problems and make decisions. The heuristic problem-solving model suggests that problem solving ability is enhanced as individuals work together in a collaborative manner (Newell and Simon 1972).

In Study 3, we define collaboration in the context of the activities associated with two programmers working together interactively on the experimental tasks. The collaborative programming method (pair programming) is varied, or adapted, in Study 3 as we investigate the impact of collaboration on individual performance outcomes. Prior research on collaborative programming (pair programming) has not yet explored the impact of variations in the development method or of collaboration. Additionally, the academic research on brainstorming has failed to consistently support the claim of higher performance outcomes for most activities when using brainstorming techniques. Thus, we hypothesize:

H1: Developers working collaboratively utilizing a structured problem solving developmental method will have higher levels of individual task performance than developers working collaboratively utilizing an unstructured problem solving developmental method.

H2: Developers working collaboratively utilizing a structured problem solving developmental method will have higher levels of individual task performance than developers working non-collaboratively utilizing a structured problem solving developmental method.

H3: Developers working non-collaboratively utilizing a structured problem solving developmental method will have higher levels of individual task performance than developers working collaboratively utilizing an unstructured problem solving developmental method.

Prior research on collaborative programming (pair programming) suggests that developers report higher levels of satisfaction when working with the collaborative programming method (pair programming), as opposed to working alone (Nosek 1998; Williams et al. 2000). Similarly, higher levels of satisfaction have been reported by subjects when they work with others using brainstorming to solve problems. Thus, we hypothesize:

H4: Developers working collaboratively will have higher levels of individual satisfaction with the developmental method than developers working non-collaboratively.

Adaptive Structuration Theory (AST) posits that faithfulness to the appropriation of the work method is an important factor in performance. Faithfulness refers to the extent to which a group uses the process or system in keeping with the spirit in which it was meant to be used.


Prior research on AST suggests that faithfulness to method also impacts performance outcomes in group work (Poole and DeSanctis 1989, 1990; Gopal et al. 1992-3). Little research has explored faithfulness to the method in the context of collaborative programming (pair programming), let alone when there are variations in the collaborative programming method. The intensive process study conducted in Study 1 of this dissertation suggests that developers who are more faithful to the collaborative programming method (pair programming) will have higher performance outcomes, holding all other variables constant. Additionally, as previously mentioned, structured approaches to problem solving have been shown to produce higher performance outcomes when compared to unstructured methods, such as brainstorming. Thus, we hypothesize:

H5: For developers working collaboratively utilizing a structured problem solving developmental method, developers who perceive they were more faithful to the method will have higher levels of individual task performance.

H6: For developers working non-collaboratively utilizing a structured problem solving developmental method, developers who perceive they were more faithful to the method will have higher levels of individual task performance.

H7: For developers working collaboratively utilizing an unstructured problem solving developmental method, developers who perceive they were more faithful to the method will have higher levels of individual task performance.

Prior research has shown cognitive ability to be a predictor of performance levels. Job knowledge has also been shown to be the most immediate link between cognitive ability and performance. Individuals with higher cognitive ability tend to develop greater understandings of job duties as compared to their counterparts with lower cognitive ability (Schmidt et al. 1986). A review of the psychology literature suggests that, in groups, individual differences may have both additive (group average) and compensatory (higher ability group members help lower ability group members) effects. For Study 3, we view these individual differences as compensatory. Thus, we hypothesize:

H8: For developers working collaboratively utilizing a structured problem solving developmental method, developers with higher cognitive ability will have higher levels of individual task performance.

H9: For developers working non-collaboratively utilizing a structured problem solving developmental method, developers with higher cognitive ability will have higher levels of individual task performance.

H10: For developers working collaboratively utilizing an unstructured problem solving developmental method, developers with higher cognitive ability will have higher levels of individual task performance.


H11: For developers working collaboratively utilizing a structured problem solving developmental method, developers with higher levels of IT experience will have higher levels of individual task performance.

H12: For developers working non-collaboratively utilizing a structured problem solving developmental method, developers with higher levels of IT experience will have higher levels of individual task performance.

H13: For developers working collaboratively utilizing an unstructured problem solving developmental method, developers with higher levels of IT experience will have higher levels of individual task performance.

Research Design

In order to examine and test the research hypotheses, we conducted a laboratory experiment at a university located in the southern United States. One hundred and twenty (120) subjects were recruited for participation in the experiment. As an incentive to participate in the research, subjects completing the study received 5% towards their final course grade. Participation was strictly voluntary.

Ninety-eight of the participants were full-time and part-time graduate students majoring in management information systems (MIS) or accounting information systems (AIS) who were enrolled in an Advanced Systems Analysis and Design course. Twenty-two of the subjects were graduating seniors majoring in MIS who were enrolled in a capstone class, Management of Information Resources. Four full-time, professional programmers were also recruited to participate in Study 3.

Prior to beginning the experiment, each participant was assigned at random to one of three treatment groups: Treatment Group I, pairs (dyads) of developers who work collaboratively utilizing a structured problem solving method (test cases) and then write code alone; Treatment Group II, developers who work alone utilizing a structured problem solving method (test cases) and then write code alone (control group); and Treatment Group III, pairs (dyads) of developers who work collaboratively utilizing an unstructured problem solving method (brainstorming) and then write code alone.

Subjects working in dyads for the initial part of each experimental task were also randomly assigned to work in a designated programming pair for the duration of the experiment. All subjects were assigned two experimental tasks (Task II and Task III). The order in which the experimental tasks were completed by subjects was also assigned at random by the researchers prior to the beginning of the study. Two tasks were included in the experiment in order to vary the difficulty of the tasks.

Data Collection

Prior to beginning Study 3, scripts, questionnaires and experimental tasks were pretested. A pilot study was also conducted in order to ensure that there were variations across and between subjects relative to individual task performance.


After a review of the pretest and pilot results, changes were incorporated into the experimental materials as deemed appropriate. Copies of the final scripts, questionnaires and experimental tasks used in Study 3 are found in the Appendices.

Data collection began in the summer of 2003 and continued through the fall of 2004. The majority of the one hundred and twenty (120) participants in Study 3 were graduate MIS and graduate AIS students enrolled in Advanced Systems Analysis and Design. Student subjects were offered a number of weekend days on which to participate in the experiment and self-selected the day on which they chose to participate in the study. Completion of the experiment took place in one session over a 3-hour period at the university. The researcher conducted the same experiment with the full-time programmers in one session at their place of employment. Figure 6.4 outlines the experimental design with an explanation of the notations.


Figure 6.4 Experimental Design

Treatment Groups                                   Observations
Group I     Collaborative and Structured           O1 Xc O2 O3 O4 O5
Group II    Non-Collaborative and Structured       O1 Xc O2 O3 O4 O5
Group III   Collaborative and Unstructured         O1 Xc O2 O3 O4 O5

Explanation of Notations
Symbol   Meaning
O1       Questionnaire Part Overview and Part A (initial questionnaire):
         demographics (age, gender, languages known); covariates: cognitive
         ability, years of IT experience
Xc       Training in method
O2       Programming Task II or Task III (order of task randomly assigned)
O3       Questionnaire Part B: processes: faithfulness to method; individual
         responses: satisfaction with method
O4       Programming Task III or Task II (order of task randomly assigned)
O5       Questionnaire Part C (final questionnaire): processes: faithfulness
         to method; individual responses: satisfaction with method

On the day of the study, participants reported to pre-assigned classroom(s) as instructed by the researcher. At each session, only one treatment was administered, or participants were assigned to different classrooms by treatment group. When multiple sessions of the experiment were being conducted simultaneously, the primary researcher had assistance in carrying out the experiment; research assistants were trained prior to conducting the experiment. This approach was utilized so that participants would not be biased or confused by hearing differing instructions for the completion of the experimental tasks.

Each session began with an introduction to the study. Participants were then asked to read and to sign an Informed Consent Form. All study procedures and materials had been reviewed and approved by the university's Institutional Research Review Board. Next, participants were given their pre-assigned subject number and team number (if appropriate). Subjects were instructed to use this identification throughout the study, in order to ensure that their confidentiality would be preserved.

Participants had been instructed to bring a pen, pencil, eraser and calculator with them to the experimental session. Extra writing implements and calculators were also made available to subjects in case they did not bring these items with them. Next, subjects were given a packet of experimental materials and instructed to proceed as directed by the researcher. Demographic information about subjects was then collected. Subjects also completed the Wonderlic Personnel Test, which measured their general cognitive ability. Training in the appropriate development method followed.


Training materials included an example of a simple programming task in which the experimental treatment (test cases or brainstorming technique) and pseudocode were illustrated. Subjects were told that the example was illustrative in nature and that there could be alternative solutions to the example. Pseudocode was used in each task in order to deal with unknown differences within pairs on specific programming languages.

All participants completed two experimental programming tasks (Tasks II and III); however, the order in which the tasks were completed was varied. Using two tasks allowed for variation in the difficulty of the tasks. The experimental tasks were provided in hard copy. Subjects were instructed to complete all aspects of the experimental task assignments in pencil on the sheets provided to them.

Participants were given forty-five minutes to complete each experimental task. The time allotted to each task as part of the experimental protocol is now described. Five (5) minutes was provided to read the experimental task. Then participants were allotted up to twenty (20) minutes to complete the initial phase of the designated task method and design. This phase of the experiment varied depending on the treatment group; i.e., some of the subjects worked alone or together on test cases, or worked together brainstorming about the programming module. The remaining time allotment was to be used to write the pseudocode for the experimental tasks alone.

Subjects were instructed to raise their hand, as a signal to the researcher, when they completed the initial phase of the designated experimental task method. Upon seeing the signal, the researcher signed off on the initial phase of the experimental work and instructed subjects to continue on with the coding section of the assignment. If participants failed to signal the researcher at the end of the allotted twenty minutes, subjects were instructed to begin to write code alone. These procedures were put into place to help ensure that experimental protocols were followed by the subjects.

Upon the completion of each experimental task, subjects were asked to complete a series of questionnaires designed to measure their individual perception of faithfulness to the task domain and individual satisfaction with the method. Subjects were debriefed upon completion of the experiment.

Subject Demographics

All subjects who volunteered for Study 3 completed the experiment. Forty-six of the subjects worked collaboratively utilizing structured problem solving (test cases) before writing code alone (Treatment Group I); thirty-two of the subjects worked alone utilizing structured problem solving (test cases) before writing code alone (Treatment Group II); and forty-two of the subjects worked collaboratively utilizing unstructured problem solving (brainstorming) before writing code alone (Treatment Group III). Figure 6.5 presents a breakdown of participants by experimental group and by the number of tasks completed.


Figure 6.5 Number of Subjects and Tasks in Each Experimental Group

Treatment Group                                 Number of Subjects   Number of Tasks Completed
Total                                           120                  240
Group I     Collaborative and Structured        46                   92
Group II    Non-Collaborative and Structured    32                   64
Group III   Collaborative and Unstructured      42                   84

Selected subject demographics are presented in Figure 6.6 and Figure 6.7. The average age of the subjects participating in the study was 29 years. Thirty-five percent of all participants were female; the remaining sixty-five percent were male.

Figure 6.6 Subject Demographics

Variable   N     Median   Std Dev   Min   Max
Age        120   29.6     6.8       21    52

Figure 6.7 Frequency Tables for Selected Demographic Variables

Variable          Percent
Gender   Female   35
         Male     65
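
Summaries of the kind shown in Figures 6.5 through 6.7 are straightforward to reproduce outside a statistics package. The following is a minimal sketch using the pandas library; the data frame, column names and values are hypothetical, not the study data.

# Minimal sketch: descriptive statistics and frequency tables (hypothetical data).
import pandas as pd

# Hypothetical subject-level data; column names are illustrative only.
df = pd.DataFrame({
    "age":    [24, 31, 28, 45, 22, 29, 33, 27],
    "gender": ["F", "M", "M", "F", "M", "M", "F", "M"],
})

# Descriptive statistics in the style of Figure 6.6.
print(df[["age"]].agg(["count", "median", "std", "min", "max"]))

# Frequency table (percentages) in the style of Figure 6.7.
print((df["gender"].value_counts(normalize=True) * 100).round(0))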


The results of individual differences (cognitive ability and years of IT experience) are found in Figures 6.8 and 6.9. Variation is noted across subjects for all items. While the median cognitive ability score for the population of all programmers is 29 (Wonderlic 1999), Study 3 participants' median score was 29.8, with scores ranging from 17 to 48.

Figure 6.8 Descriptive Statistics Selected Variables

Variable                 N     Median   Std Dev   Min   Max
Cognitive Ability        120   29.8     6.5       17    48
Years of IT Experience   120   3.9      1.3       0     > 8

As shown in Figure 6.9, approximately 53% of the subjects reported four years of IT experience, while 26% of the subjects reported five or more years of IT experience. Subjects reported experience with a number of programming languages, including Visual Basic, Java, Pascal, C, C++, FORTRAN and COBOL.

Figure 6.9 Frequency Tables for Selected Variables

Years of IT Experience   Percent
None                     2
One                      5
Two                      3
Three                    8
Four                     53
Five                     24
Six                      1
Seven                    2
Eight or more            1

Measures

Measurement is discussed in detail in Chapter Three. Programming outcomes measured individual task performance, as well as individual developer satisfaction. Individual performance on task was based on the correct code produced for each programming task. A scoring template was developed by the researchers to rate the programming outcomes. A score of 1 to 10 was possible on code for each programming module. Higher levels of individual task performance reflected more complete and accurate code. Two independent raters were trained and used to evaluate task performance. There was a high level of inter-rater reliability on both tasks. Inter-rater reliability varied by pair (90% to 100%) and is based on the percentage of agreement for each item rated. The detail of this rating is shown in the Appendix.
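
Because inter-rater reliability here is simply the percentage of items on which the two raters agree exactly, it can be computed directly. The following is a minimal sketch in Python; the two rating vectors are hypothetical, not the raters' actual scores.

# Minimal sketch: inter-rater percent agreement (hypothetical ratings).
rater_a = [7, 5, 9, 3, 6, 8, 4, 7, 5, 6]
rater_b = [7, 5, 8, 3, 6, 8, 4, 7, 5, 5]

# Percentage of items on which the two raters agree exactly.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = 100.0 * agreements / len(rater_a)
print(f"Inter-rater agreement: {percent_agreement:.0f}%")  # 80% on this toy data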


In Study 3, the researchers adapted McGrath's (1988) five-item scale to measure individual satisfaction. This scale has been widely utilized in small group research. The decision to use the adapted McGrath scale was due to the fact that, in order to obtain an acceptable reliability score for individual satisfaction with the method in Study 2, the removal of one item was needed. The 7-point Likert scale used to measure individual satisfaction was adapted from prior research (1 = very unsatisfied; 7 = very satisfied).

The process measured during development included perceived faithfulness to the task domain. As previously mentioned, the researchers developed a scale designed to measure perceived faithfulness to the method. In order to measure perceived faithfulness to the task domain in Study 3, the researchers adapted the 5-item instrument used in Study 2. The 7-point Likert scale had a possible score of 1 to 7 (1 = not faithful; 7 = very faithful).

Individual cognitive ability was measured utilizing the Wonderlic Personnel Test (WPT). The WPT is comprised of 50 questions to be administered in a timed 12-minute period. Data was also collected regarding each participant's years of IT experience. Copies of these instruments are found in the Appendix.

Data Analysis

The preliminary focus of the data analysis is the evaluation of the main effects of developmental method on individual task performance outcomes (correct code) and individual satisfaction with the programming method (H1 through H4). The second step in the data analysis is to analyze the potential impact of mediating (H5 through H7) and moderating variables (H8 through H13).

The design of the experiment is a randomized design, since the experimental treatment was randomly assigned to all participants. The dependent variables of individual task performance and individual satisfaction with method represent two distinct (uncorrelated) dependent variables: the Pearson correlations between individual task performance and individual satisfaction with method were .17 and -.06 for Task II and Task III, respectively. The existence of cognitive ability, years of IT experience and faithfulness to method as mediating and moderating variables (covariates) makes ANCOVA the appropriate method of statistical analysis. ANCOVA (Analysis of Covariance) is used to test the main effects and interaction effects of a variable on a continuous dependent variable, controlling for the effects of selected other variables which covary with the dependent variable. The SPSS system was used for all statistical analysis.

To determine the reliability of the scales, Cronbach's alpha was computed for each measure used in the questionnaires. A Cronbach's alpha of .70 or greater is considered an acceptable measure of reliability. Reliability scores are detailed in Figure 6.10. Based on these criteria, reliability scores for the following measures are acceptable: perceived faithfulness to the method and individual satisfaction with the method.
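
The standardized Cronbach's alphas reported in Figure 6.10 were computed in SPSS; an equivalent computation can be sketched as follows. The item-response matrix below is hypothetical, and the standardized formula is the one based on the average inter-item correlation.

# Minimal sketch: standardized Cronbach's alpha (hypothetical item responses).
import numpy as np

# Rows = respondents, columns = items on a 7-point Likert scale (illustrative).
items = np.array([
    [5, 6, 5, 6, 5],
    [3, 4, 3, 3, 4],
    [6, 7, 6, 7, 6],
    [4, 4, 5, 4, 4],
    [2, 3, 2, 3, 3],
])

k = items.shape[1]                       # number of items
corr = np.corrcoef(items, rowvar=False)  # k x k inter-item correlation matrix
# Mean of the off-diagonal correlations.
r_bar = (corr.sum() - k) / (k * (k - 1))
# Standardized alpha (Spearman-Brown form based on average inter-item correlation).
alpha = (k * r_bar) / (1 + (k - 1) * r_bar)
print(f"Standardized Cronbach's alpha = {alpha:.2f}")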


Figure 6.10 Standardized Cronbach's Alpha for Measures

Items and Related Survey                Task II   Task III
Faithfulness to the method              .76       .82
Individual satisfaction with method     .80       .75

The Wonderlic Personnel Test has demonstrated reliability (test-retest reliabilities range from .82 to .94) and validity, and the test is widely used by business and governmental organizations to evaluate job applicants for employment and occupational training programs.

In order to assess construct validity, a confirmatory factor analysis was performed. Factor loadings are the correlation of each variable and the factor. For the variable faithfulness to the method, factor loadings for all items ranged from .47 to .85. While the value of .47 represents a low loading on the scale, it is clearly and significantly different from the loadings for the construct satisfaction. These values indicate that all items reflect a common theme (convergent validity) of faithfulness to the development method when applied in the real world. For the variable individual satisfaction with the method, factor loadings for all items range from .51 to .83. These values indicate that all items reflect a common theme (convergent validity) of individual satisfaction with the development method when applied in the real world. A factor analysis was also conducted to ensure that two distinct constructs exist (divergent validity); the results of the factor loadings indicate that two distinct constructs exist. Table 6.1 shows the results of this analysis.
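
Before turning to Table 6.1, readers who wish to replicate this kind of analysis outside SPSS may find the following sketch useful. It fits a two-factor model with varimax rotation using scikit-learn's FactorAnalysis estimator; the simulated responses and all variable names are hypothetical, and scikit-learn's estimator is not an exact substitute for SPSS's maximum likelihood extraction.

# Minimal sketch: two-factor analysis with varimax rotation (hypothetical data).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical responses: 100 respondents x 9 items
# (4 faithfulness items, 5 satisfaction items).
faith_factor = rng.normal(size=(100, 1))
sat_factor = rng.normal(size=(100, 1))
noise = rng.normal(scale=0.5, size=(100, 9))
responses = np.hstack([
    faith_factor * rng.uniform(0.5, 0.9, size=4),  # items driven by factor 1
    sat_factor * rng.uniform(0.5, 0.9, size=5),    # items driven by factor 2
]) + noise

# Two-factor model with varimax rotation, in the spirit of Table 6.1.
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(responses)

# Loadings: one row per item, one column per factor.
print(np.round(fa.components_.T, 3))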


Construct: Faithfulness to Method
Item     Item Wording                                                            Factor 1   Factor 2
Faith47  We were faithful to doing test cases first before writing the
         pseudocode alone for the programming assignment                         .096       .676
Faith48  My partner and I exerted equal influence in doing test cases first
         before writing the pseudocode alone for our programming assignment      .026       .467
Faith49  We read the task first, then planned and worked together throughout,
         in doing test cases first before writing the pseudocode alone for
         our programming assignment                                              .188       .845
Faith50  We followed the instructions that were given to us, in doing test
         cases first before writing the pseudocode alone for our programming
         assignment                                                              .190       .734

Construct: Satisfaction with Method
Item     Item Wording                                                            Factor 1   Factor 2
Sat61    I am satisfied working together on test cases and then writing code
         alone                                                                   .673       .273
Sat62    I am satisfied with the test case outputs we generated on this
         assignment                                                              .742       .224
Sat63    I am satisfied with the pseudocode outputs I generated on this
         assignment                                                              .560       -.009
Sat64    I am satisfied with the assumptions we made while working on test
         cases together for this assignment                                      .830       .012
Sat65    I would like to continue to work together on test cases for this
         assignment                                                              .511       .150

Total Eigenvalues*                                                               2.348      2.075
% of Variance*                                                                   26.087     23.054
Cumulative %*                                                                    26.087     49.142

Note: Extraction Method: Maximum Likelihood, Varimax Rotation; 2 factors extracted. *Rotation sums of squared loadings. The questionnaire used in Study 3 was modified as appropriate for each treatment group.

Table 6.1 Factor Analysis of Faithfulness to Method and Individual Satisfaction with Method

Dependent Variables

There are two dependent variables, or performance outcomes, in Study 3: individual task performance and individual satisfaction with the developmental method. Individual task performance represents the individual score for each developer and is the number of correct and complete code segments completed for each experimental programming module. Individual satisfaction is the self-assessed satisfaction score of each developer with the method.

Prior to applying further statistical analysis, the data were reviewed for appropriateness and for the presence of any outliers that might affect the analysis. One observation was deleted from the analysis, since one subject did not produce code for Task II or Task III. It was also noted that one subject did not complete the questionnaire on individual satisfaction for Task II. Five subjects did not complete the questionnaire on individual satisfaction with method for Task III; however, these subjects did complete coding. A summary of the performance outcomes is shown in Figure 6.11.


Figure 6.11 Summary of Individual Performance Outcomes

Code           N     Median   Std Dev   Min   Max
Task II        119   4.68     2.27      1     9
Task III       119   3.97     2.04      1     9

Satisfaction   N     Median   Std Dev   Min   Max
Task II        119   4.80     1.36      1     7
Task III       115   4.69     1.33      1     7

Next, the assumptions underlying ANCOVA were checked. Four assumptions are to be met for ANCOVA, as follows: 1) the dependent variable is normally distributed for each treatment group; 2) the variance of the dependent variable is constant among the treatment groups; 3) the sum of the errors is zero; and 4) the errors are independent.

The underlying assumption of normality for each dependent variable for the three treatment groups was tested using graphical representations (histograms and normal probability plots). A review of the graphical representations for each dependent variable (code and satisfaction) showed severe deviations (bimodal and tri-modal) from normality when plotted by group.

A number of statistical tests may be used for normality. The Shapiro-Wilk test for normality (recommended if the sample size is less than 2000) also confirmed instances of non-normal data. The null hypothesis of a normality test is that there is no significant departure from normality. When the p value is more than .05, the test fails to reject the null hypothesis and thus the assumption holds. As noted in Table 6.2, many of the tests for normality were rejected, reflecting departures from normality for individual task performance and individual satisfaction with the method.
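
A per-group Shapiro-Wilk screen of the kind reported in Table 6.2 can be sketched as follows using SciPy; the data frame, group labels and scores are hypothetical.

# Minimal sketch: Shapiro-Wilk normality test per treatment group
# (hypothetical data; labels mirror the three Study 3 treatments).
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "group": ["CS", "CS", "CS", "NCS", "NCS", "NCS", "CUS", "CUS", "CUS"] * 4,
    "task2_code": [4, 7, 2, 5, 5, 6, 3, 8, 4] * 4,
})

# One Shapiro-Wilk test per group; p < .05 suggests a departure from normality.
for name, scores in df.groupby("group")["task2_code"]:
    w_stat, p = stats.shapiro(scores)
    print(f"{name}: W = {w_stat:.3f}, p = {p:.3f}")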


                                     Shapiro-Wilk
Group                                Statistic   df   Sig.
Task II Code
  Collaborative Unstructured         .953        46   .060
  Non-Collaborative Structured       .948        28   .173
  Collaborative Structured           .913        41   .004
Task III Code
  Collaborative Unstructured         .946        46   .032
  Non-Collaborative Structured       .909        28   .019
  Collaborative Structured           .924        41   .009
Task II Satisfaction with Method
  Collaborative Unstructured         .957        46   .085
  Non-Collaborative Structured       .955        28   .271
  Collaborative Structured           .960        41   .154
Task III Satisfaction with Method
  Collaborative Unstructured         .947        46   .038
  Non-Collaborative Structured       .979        28   .819
  Collaborative Structured           .886        41   .001
* This is a lower bound of the true significance.
a  Lilliefors Significance Correction

Table 6.2 Shapiro-Wilk Test for Normality for Dependent Variables

Statistical Test of Main Effects

If the distribution does not appear to be normal and the sample size is small, other statistical procedures that do not require the assumption of normality are to be used. The Kruskal-Wallis test and the Median Test are non-parametric techniques that may be utilized as a non-parametric ANOVA. Kruskal-Wallis compares the medians of two or more samples to determine whether the samples come from different populations. If the distributions are not normal, then the Kruskal-Wallis test should be used to compare the groups. If a significant difference is found, then there is a difference between the highest and lowest median.

Data analyzed with Kruskal-Wallis must meet the following criteria: 1) the data points must be independent of each other; 2) the distributions do not have to be normal and the variances do not have to be equal; 3) there are more than five data points per sample; 4) all individuals must be selected at random from the population; 5) all individuals must have an equal chance of being selected; and 6) sample sizes should be as equal as possible, but some differences are allowed.


Since these assumptions are met, the Kruskal-Wallis test is appropriate in Study 3. Kruskal and Wallis (1952) found that for small alpha (less than about 0.10) and for selected small sample sizes n1, n2 and n3, the true level of significance is smaller than the stated level of significance associated with the chi-squared distribution, which indicates that the chi-squared approximation furnishes a conservative test in many, if not all, situations. The p-value is approximately the probability of a chi-squared random variable with k-1 degrees of freedom exceeding the observed value of T (Conover 1999). Based on this information, the data were analyzed using non-parametric statistical techniques.

The data were analyzed with the Kruskal-Wallis analysis of ranks and the Median Test to test the main effects of the developmental method on individual task performance. These tests represent the non-parametric equivalents of ANOVA (Soft Stat 2003). The hypotheses and results for each of the tests for individual task performance are presented and interpreted as follows.

For Task II, individual task performance between the developmental methods:

Ho: There are no differences between the medians of the samples (η1 = η2 = η3), where median 1 = collaborative structured, median 2 = non-collaborative structured, and median 3 = collaborative unstructured.

Ha: There is a difference between the medians of the samples (η1 ≠ η2 ≠ η3).

For Task III, individual task performance between the developmental methods:

Ho: There are no differences between the medians of the samples (η1 = η2 = η3).

Ha: There is a difference between the medians of the samples (η1 ≠ η2 ≠ η3).

At an alpha level of .10 (p value of less than .10), the Median Test for Task III indicates that there is a significant difference between the medians of the developmental methods (p value of .073). The results of the Kruskal-Wallis and the Median Tests are found in Table 6.3 through Table 6.6.
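
The Median Test itself is simple enough to carry out by hand, which may help clarify the frequency tables that follow. The sketch below dichotomizes hypothetical scores at the grand median and applies a chi-square test to the resulting contingency table; all group names and data are illustrative.

# Minimal sketch: the Median Test done "by hand" for three treatment groups
# (hypothetical data), showing the mechanics behind Tables 6.5 and 6.6.
import numpy as np
from scipy.stats import chi2_contingency

cs  = np.array([5, 6, 2, 7, 4, 3, 6, 5])   # collaborative structured (hypothetical)
ncs = np.array([4, 3, 5, 2, 6, 4, 3, 5])   # non-collaborative structured
cus = np.array([3, 4, 2, 5, 3, 6, 2, 4])   # collaborative unstructured

# 1) Pool all scores and find the grand median.
grand_median = np.median(np.concatenate([cs, ncs, cus]))

# 2) Count, per group, scores above vs. at-or-below the grand median.
table = np.array([[(g > grand_median).sum(), (g <= grand_median).sum()]
                  for g in (cs, ncs, cus)])

# 3) Chi-square test on the resulting 3 x 2 contingency table.
chi2, p, dof, expected = chi2_contingency(table)
print(f"grand median = {grand_median}, chi-square = {chi2:.3f}, "
      f"df = {dof}, p = {p:.3f}")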


                 Group                           N     Median Rank
Task II Code     Collaborative Unstructured      46    57.14
                 Non-Collaborative Structured    31    58.06
                 Collaborative Structured        42    64.56
                 Total                           119
Task III Code    Collaborative Unstructured      46    62.48
                 Non-Collaborative Structured    31    62.79
                 Collaborative Structured        42    55.23
                 Total                           119

Table 6.3 Kruskal-Wallis Median Rank for Individual Task Performance

                 Task II Code   Task III Code
Chi-Square       1.168          1.272
df               2              2
Asymp. Sig.      .558           .529
a  Kruskal-Wallis Test
b  Grouping Variable: Group

Table 6.4 Kruskal-Wallis Test Statistics for Individual Task Performance

                               Collaborative   Non-Collaborative   Collaborative
                               Unstructured    Structured          Structured
Task II Code     > Median      22              13                  24
                 <= Median     24              18                  18
Task III Code    > Median      19              15                  10
                 <= Median     27              16                  32

Table 6.5 Median Test Frequencies by Individual Task Performance


                 Task II Code   Task III Code
N                119            119
Median           4.00           4.00
Chi-Square       1.742(a)       5.226(b)
df               2              2
Asymp. Sig.      .418           .073
a  0 cells (.0%) have expected frequencies less than 5. The minimum expected cell frequency is 15.4.
b  0 cells (.0%) have expected frequencies less than 5. The minimum expected cell frequency is 11.5.

Table 6.6 Test Statistics for Median Test for Individual Task Performance

Kruskal-Wallis assumes equal variances in the groups. For Task III the variance assumption is not met; therefore, the Median Test is used for further analysis. The results of the Median Test were analyzed by treatment group. Forty-one percent (41%) of the participants utilizing collaborative unstructured (CUS) problem solving scored above the median score, compared to 50% of the non-collaborative structured (NCS) group and 25% of the collaborative structured (CS) group. These results are summarized in Figure 6.12.

Figure 6.12 Descriptive Statistics Median Test

Task III Code   CUS   NCS   CS
> median        41%   50%   25%
<= median       59%   50%   75%

Tests of Hypotheses 1 - 3

The next step in the analysis was to determine which of the treatment groups were significantly different for Task III code. In order to test Hypotheses 1 - 3, the Median Test was conducted comparing each pair of treatment groups. At an alpha of .10, there is a significant difference between the participants utilizing collaborative unstructured problem solving and those utilizing collaborative structured problem solving (p = .081). At an alpha of .10, there is a significant difference between the participants utilizing non-collaborative structured problem solving and those utilizing collaborative structured problem solving (p = .029). At an alpha of .10, there is no significant difference between the participants utilizing collaborative unstructured problem solving and those utilizing non-collaborative structured problem solving (p = .539). These results are summarized in Table 6.7 through Table 6.12.
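
Pairwise comparisons of the kind reported in Tables 6.7 through 6.12 can be sketched as follows; the treatment-group scores are hypothetical, and SciPy's median_test is applied to each pair of groups in turn.

# Minimal sketch: pairwise Median Tests between treatment groups
# (hypothetical data), mirroring the Hypothesis 1 - 3 comparisons.
from itertools import combinations
from scipy.stats import median_test

# Hypothetical Task III code scores per treatment group.
groups = {
    "CS":  [5, 3, 2, 4, 3, 2, 4, 3],
    "NCS": [4, 5, 3, 6, 4, 5, 2, 6],
    "CUS": [3, 5, 4, 2, 6, 4, 3, 5],
}

for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    stat, p, grand_median, table = median_test(a, b)
    print(f"{name_a} vs {name_b}: chi-square = {stat:.3f}, p = {p:.3f}")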


                               Collaborative Structured   Collaborative Unstructured
Task III Code    > Median      10                         19
                 <= Median     32                         27

Table 6.7 Median Test Frequencies for Hypothesis 1

                Task III Code
N               88
Median          4.00
Chi-Square      3.041
df              1
Asymp. Sig.     .081

Table 6.8 Test Statistic for Median Test for Hypothesis 1

                               Non-Collaborative Structured   Collaborative Structured
Task III Code    > Median      15                             10
                 <= Median     16                             32

Table 6.9 Median Test Frequencies for Hypothesis 2

                Task III Code
N               73
Median          4.00
Chi-Square      4.784
df              1
Asymp. Sig.     .029

Table 6.10 Test Statistics for Median Test for Hypothesis 2


                               Collaborative Unstructured   Non-Collaborative Structured
Task III Code    > Median      19                           15
                 <= Median     27                           16

Table 6.11 Median Test Frequencies for Hypothesis 3

                Task III Code
N               77
Median          4.00
Chi-Square      .377
df              1
Asymp. Sig.     .539

Table 6.12 Test Statistics for Median Test for Hypothesis 3

Test of Hypothesis 4

The same statistical methods were applied to the dependent variable individual satisfaction with method. The underlying assumption of normality for each dependent variable for the three treatment groups was tested using graphical representations (histograms and normal probability plots). A review of the graphical representations showed severe deviations (bimodal and tri-modal) from normality when plotted by group.

The Shapiro-Wilk test for normality (recommended if the sample size is less than 2000) also confirmed instances of non-normal data. The null hypothesis of a normality test is that there is no significant departure from normality. When the p value is more than .05, the test fails to reject the null hypothesis and thus the assumption holds. In many instances, as summarized in Table 6.13, the tests for normality were rejected and severe departures from normality were noted for individual satisfaction with the method.


                                     Shapiro-Wilk
Group                                Statistic   df   Sig.
Task II Satisfaction with Method
  Collaborative Unstructured         .957        46   .085
  Non-Collaborative Structured       .957        29   .274
  Collaborative Structured           .960        41   .154
Task III Satisfaction with Method
  Collaborative Unstructured         .947        46   .038
  Non-Collaborative Structured       .977        29   .757
  Collaborative Structured           .886        41   .001
* This is a lower bound of the true significance.
a  Lilliefors Significance Correction

Table 6.13 Shapiro-Wilk Test for Normality for Individual Satisfaction with the Method

Based on these results, the ANCOVA statistical technique could not be utilized to analyze the data, and a non-parametric method was selected. The data were analyzed using the Kruskal-Wallis analysis of ranks and the Median Test to test the main effects of the developmental method on individual satisfaction with the method. The hypotheses and results for each of the tests for individual satisfaction with the method are presented and interpreted as follows.

For Task II, individual satisfaction with the method between the developmental methods:

Ho: There are no differences between the medians of the samples (η1 = η2 = η3), where median 1 = collaborative structured, median 2 = non-collaborative structured, and median 3 = collaborative unstructured.

Ha: There is a difference between the medians of the samples (η1 ≠ η2 ≠ η3).

For Task III, individual satisfaction with the method between the developmental methods:

Ho: There are no differences between the medians of the samples (η1 = η2 = η3).

Ha: There is a difference between the medians of the samples (η1 ≠ η2 ≠ η3).

At an alpha level of .10, both the Kruskal-Wallis Test and the Median Test for Task II indicate that there is a significant difference between the medians of individual satisfaction with the developmental methods. For Task II, this is demonstrated by a p value of .016 for the Kruskal-Wallis Test and a p value of .029 for the Median Test.


For Task III, the p value of .063 for the Kruskal-Wallis Test also indicates that there is a difference in individual satisfaction between the developmental methods. The results of the Kruskal-Wallis and the Median Tests are found in Table 6.14 through Table 6.17.

                                    Group                           N     Median Rank
Task II Satisfaction with Method    Collaborative Unstructured      46    64.00
                                    Non-Collaborative Structured    31    44.89
                                    Collaborative Structured        42    66.77
                                    Total                           119
Task III Satisfaction with Method   Collaborative Unstructured      46    60.76
                                    Non-Collaborative Structured    28    45.32
                                    Collaborative Structured        41    63.56
                                    Total                           115

Table 6.14 Kruskal-Wallis Median Rank for Hypothesis 4

                 Task II Satisfaction   Task III Satisfaction
                 with Method            with Method
Chi-Square       8.211                  5.521
df               2                      2
Asymp. Sig.      .016                   .063
a  Kruskal-Wallis Test

Table 6.15 Kruskal-Wallis Test Statistics for Hypothesis 4

                                                  Collaborative   Non-Collaborative   Collaborative
                                                  Unstructured    Structured          Structured
Task II Satisfaction with Method    > Median      25              8                   22
                                    <= Median     21              23                  20
Task III Satisfaction with Method   > Median      22              10                  22
                                    <= Median     24              18                  19

Table 6.16 Median Test Frequencies for Hypothesis 4


                 Task II Satisfaction   Task III Satisfaction
                 with Method            with Method
N                119                    115
Median           5.0000                 5.0000
Chi-Square       7.061(a)               2.174(b)
df               2                      2
Asymp. Sig.      .029                   .337
a  0 cells (.0%) have expected frequencies less than 5. The minimum expected cell frequency is 14.3.
b  0 cells (.0%) have expected frequencies less than 5. The minimum expected cell frequency is 13.1.

Table 6.17 Test Statistics for Median Test for Hypothesis 4

The next step in the analysis was to determine if the collaborative treatment groups were significantly different from the non-collaborative treatment group. For Task II satisfaction with the method, both the Kruskal-Wallis and the Median Tests were conducted. At an alpha level of .10, there is a significant difference between the collaborative and non-collaborative groups for Task II satisfaction with the method (Kruskal-Wallis Test p value = .008 and Median Test p value = .014).

At an alpha level of .10, there is a significant difference between the collaborative and the non-collaborative groups for Task III, as indicated by the Kruskal-Wallis Test (p value = .040). The Median Test did not indicate the same significant difference between the collaborative and non-collaborative groups (p value = .238). These results are found in Table 6.18 through Table 6.21.

                                    Group                                          N     Median Rank
Task II Satisfaction with Method    Collaborative Unstructured and Collaborative
                                    Structured                                     88    65.59
                                    Non-Collaborative Structured                   32    46.50
                                    Total                                          120
Task III Satisfaction with Method   Collaborative Unstructured and Collaborative
                                    Structured                                     87    62.19
                                    Non-Collaborative Structured                   29    47.43
                                    Total                                          116

Table 6.18 Kruskal-Wallis Median Rank for Hypothesis 4


                 Task II Satisfaction   Task III Satisfaction
                 with Method            with Method
Chi-Square       7.089                  4.201
df               1                      1
Asymp. Sig.      .008                   .040
a  Kruskal-Wallis Test

Table 6.19 Kruskal-Wallis Test Statistics for Hypothesis 4

                                                  Collaborative Unstructured     Non-Collaborative
                                                  and Collaborative Structured   Structured
Task II Satisfaction with Method    > Median      47                             9
                                    <= Median     41                             23
Task III Satisfaction with Method   > Median      44                             11
                                    <= Median     43                             18

Table 6.20 Median Test Frequencies for Hypothesis 4

                 Task II Satisfaction   Task III Satisfaction
                 with Method            with Method
N                120                    116
Median           5.0000                 5.00
Chi-Square       6.028                  1.395
df               1                      1
Asymp. Sig.      .014                   .238

Table 6.21 Test Statistics for Median Test for Hypothesis 4


Test of Covariates (Hypotheses 5 - 13)

As previously mentioned, empirical evidence suggests that cognitive ability, experience and faithfulness to the method have a strong positive link to performance (Jex 2002; Gopal 1988). The data were tested for correlation between the covariates and the dependent variables.

For Task II, moderate linear correlations (Pearson correlation matrix) were noted as follows: cognitive ability (Wonderlic score) and Task II code (.32); and satisfaction with the method and faithfulness to the method (.38). For Task III, moderate linear correlations were also noted as follows: cognitive ability (Wonderlic score) and Task III code (.27); and satisfaction with the method and faithfulness to the method (.36). Years of IT experience does not appear to be correlated with task performance. Faithfulness to the method does not appear to have a significant linear correlation with code for either task, per the Pearson correlation matrices. These results are shown in Table 6.22 and Table 6.23.

                                   Wonderlic       Years IT       Task II         Task II         Task II
                                   Score           Experience     Satisfaction    Code            Faithfulness
Wonderlic Score                    1               -.176 (.054)   .061 (.510)     .324** (.000)   -.070 (.449)
Years IT Experience                -.176 (.054)    1              .064 (.489)     -.122 (.183)    -.094 (.312)
Task II Satisfaction with Method   .061 (.510)     .064 (.489)    1               .170 (.063)     .377** (.000)
Task II Code                       .324** (.000)   -.122 (.183)   .170 (.063)     1               .062 (.501)
Task II Faithfulness to Method     -.070 (.449)    -.094 (.312)   .377** (.000)   .062 (.501)     1

Note: Pearson correlations with 2-tailed significance in parentheses; N = 119 to 120.

Table 6.22 Pearson Correlation Matrix for Task II
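
Correlation matrices such as Tables 6.22 and 6.23 can be sketched as follows; the columns are hypothetical stand-ins for the study variables, and SciPy's pearsonr supplies the 2-tailed significance for any single pair.

# Minimal sketch: Pearson correlation matrix with p-values (hypothetical data).
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical columns standing in for the Study 3 variables.
df = pd.DataFrame({
    "wonderlic":  [29, 34, 17, 48, 31, 26, 40, 22, 35, 28],
    "years_it":   [4, 5, 0, 8, 4, 3, 5, 4, 2, 6],
    "task2_code": [5, 6, 2, 9, 5, 4, 7, 3, 6, 4],
})

# Correlation matrix in the style of Tables 6.22 and 6.23.
print(df.corr(method="pearson").round(3))

# Individual r and 2-tailed p for one pair.
r, p = pearsonr(df["wonderlic"], df["task2_code"])
print(f"r = {r:.3f}, p = {p:.3f}")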


                                    Wonderlic       Years IT       Task III        Task III        Task III
                                    Score           Experience     Faithfulness    Satisfaction    Code
Wonderlic Score                     1               -.176 (.054)   -.073 (.432)    -.027 (.771)    .267** (.003)
Years IT Experience                 -.176 (.054)    1              -.003 (.975)    .180 (.053)     -.036 (.694)
Task III Faithfulness to Method     -.073 (.432)    -.003 (.975)   1               .360** (.000)   -.021 (.822)
Task III Satisfaction with Method   -.027 (.771)    .180 (.053)    .360** (.000)   1               -.061 (.514)
Task III Code                       .267** (.003)   -.036 (.694)   -.021 (.822)    -.061 (.514)    1

Note: Pearson correlations with 2-tailed significance in parentheses; N = 115 to 120.

Table 6.23 Pearson Correlation Matrix for Task III

Because the data are not normally distributed, and because the Pearson correlation matrices indicated that there may be a moderate correlation between cognitive ability and code, non-parametric testing was applied in testing the covariates (Hypotheses 5 - 10).

Hypotheses 5 - 7 deal with within-group comparisons of developers who perceive that they were faithful to the development method. In testing these hypotheses, we define highly faithful developers as those participants who had self-assessed faithfulness to the method scores of 6 or greater (with 7 = highly faithful). Participants who had self-assessed faithfulness to the method scores of less than 6 (with 1 = not faithful) are defined as average or below.

In order to test Hypothesis 5, the Kruskal-Wallis and Median Tests for these variables were analyzed and interpreted for each task. At an alpha level of .10, the results of these statistical tests indicate that there is not a significant difference in task performance for those developers who utilized collaborative structured problem solving and who perceived that they were highly faithful to the method.


For Task II, the p values are as follows: Kruskal-Wallis (p value = .124) and Median Test (p value = .193). For Task III, the p values are as follows: Kruskal-Wallis (p value = .243) and Median Test (p value = .312). The results of the Kruskal-Wallis and the Median Tests are found in Table 6.24 through Table 6.31.

                Within Group Task II Faithfulness to the Method                  N    Median Rank
Task II Code    Collaborative Structured High Faithfulness Task II               29   25.81
                Collaborative Structured Average to Low Faithfulness Task II     17   19.56
                Total                                                            46

Table 6.24 Kruskal-Wallis Median Rank for Hypothesis 5 (Task II)

                Task II Code
Chi-Square      2.371
df              1
Asymp. Sig.     .124
a  Kruskal-Wallis Test
b  Grouping Variable: Within Group Task II Faithfulness to the Method

Table 6.25 Kruskal-Wallis Test Statistics for Hypothesis 5 (Task II)

                               Collaborative Structured   Collaborative Structured
                               High Faithfulness          Average to Low Faithfulness
Task II Code    > Median       16                         6
                <= Median      13                         11

Table 6.26 Median Test Frequencies for Hypothesis 5 (Task II)


                Task II Code
N               46
Median          4.00
Chi-Square      1.697
df              1
Asymp. Sig.     .193

Table 6.27 Median Test Statistic for Hypothesis 5 (Task II)

Within Group Task III Faithfulness to Method                    N    Median Rank
Collaborative Structured High Faithfulness Task III             30   25.17
Collaborative Structured Average to Low Faithfulness Task III   16   20.38
Total (Task III Code)                                           46

Table 6.28 Kruskal-Wallis Median Rank for Hypothesis 5 (Task III)

                Task III Code
Chi-Square      1.362
df              1
Asymp. Sig.     .243
a. Kruskal-Wallis Test
b. Grouping Variable: Within Group Task III Faithfulness to Method

Table 6.29 Kruskal-Wallis Test Statistics for Hypothesis 5 (Task III)


Task III Code    Collaborative Structured        Collaborative Structured
                 High Faithfulness Task III      Average to Low Faithfulness Task III
> Median         14                              5
<= Median        16                              11

Table 6.30 Median Test Frequencies for Hypothesis 5 (Task III)

                Task III Code
N               46
Median          4.00
Chi-Square      1.023
df              1
Asymp. Sig.     .312
a. Grouping Variable: Within Group Task III Faithfulness to Method

Table 6.31 Test Statistic for Median Test for Hypothesis 5 (Task III)

In order to test Hypothesis 6, the Kruskal-Wallis and Median Tests for these variables were analyzed and interpreted for each task. At an alpha level of .10, the results of these statistical tests indicate that there is not a significant difference in task performance for those developers who utilized non-collaborative structured problem solving and who perceived that they were highly faithful to the method. For Task II, the p values are as follows: Kruskal-Wallis (p value = .303) and Median Test (p value = .955). For Task III, the p values are as follows: Kruskal-Wallis (p value = .212) and Median Test (p value = .570). The results of the Kruskal-Wallis and the Median Tests are found in Tables 6.32 through Table 6.39.


Within Group Task II Faithfulness to the Method                     N    Median Rank
Non-Collaborative Structured High Faithfulness Task II              24   15.10
Non-Collaborative Structured Average to Low Faithfulness Task II    7    19.07
Total (Task II Code)                                                31

Table 6.32 Kruskal-Wallis Median Rank for Hypothesis 6 (Task II)

                Task II Code
Chi-Square      1.061
df              1
Asymp. Sig.     .303
a. Kruskal-Wallis Test
b. Grouping Variable: Within Group Task II Faithfulness to the Method

Table 6.33 Kruskal-Wallis Test Statistics for Hypothesis 6 (Task II)

Task II Code     Non-Collaborative Structured    Non-Collaborative Structured
                 High Faithfulness Task II       Average to Low Faithfulness Task II
> Median         10                              3
<= Median        14                              4

Table 6.34 Median Test Frequencies for Hypothesis 6 (Task II)


                Task II Code
N               31
Median          4.00
Chi-Square      .003
df              1
Asymp. Sig.     .955
a. Grouping Variable: Within Group Task II Faithfulness to the Method

Table 6.35 Test Statistics for Median Test for Hypothesis 6 (Task II)

Within Group Task III Faithfulness to Method                         N    Median Rank
Non-Collaborative Structured High Faithfulness Task III              26   15.12
Non-Collaborative Structured Average to Low Faithfulness Task III    5    20.60
Total (Task III Code)                                                31

Table 6.36 Kruskal-Wallis Median Rank for Hypothesis 6 (Task III)

                Task III Code
Chi-Square      1.556
df              1
Asymp. Sig.     .212
a. Kruskal-Wallis Test
b. Grouping Variable: Within Group Task III Faithfulness to Method

Table 6.37 Kruskal-Wallis Test Statistic for Hypothesis 6 (Task III)

Task III Code    Non-Collaborative Structured    Non-Collaborative Structured
                 High Faithfulness Task III      Average to Low Faithfulness Task III
> Median         12                              3
<= Median        14                              2

Table 6.38 Median Test Frequencies for Hypothesis 6 (Task III)


                Task III Code
N               31
Median          4.00
Chi-Square      .322
df              1
Asymp. Sig.     .570
a. Grouping Variable: Within Group Task III Faithfulness to Method

Table 6.39 Test Statistic for Median Test for Hypothesis 6 (Task III)

In order to test Hypothesis 7, the Kruskal-Wallis and Median Tests for these variables were analyzed and interpreted for each task. At an alpha level of .10, the results of these statistical tests indicate that there is not a significant difference in task performance for those developers who utilized collaborative unstructured problem solving and who perceived that they were highly faithful to the method. For Task II, the p values are as follows: Kruskal-Wallis (p value = .871) and Median Test (p value = .969). For Task III, the p values are as follows: Kruskal-Wallis (p value = .465) and Median Test (p value = .610). The results of the Kruskal-Wallis and the Median Tests are found in Tables 6.40 through Table 6.47.

Within Group Task II Faithfulness to the Method                     N    Median Rank
Collaborative Unstructured High Faithfulness Task II                34   21.65
Collaborative Unstructured Average to Low Faithfulness Task II      8    20.88
Total (Task II Code)                                                42

Table 6.40 Kruskal-Wallis Median Rank for Hypothesis 7 (Task II)

                Task II Code
Chi-Square      .026
df              1
Asymp. Sig.     .871
a. Kruskal-Wallis Test
b. Grouping Variable: Within Group Task II Faithfulness to the Method

Table 6.41 Kruskal-Wallis Test Statistic for Hypothesis 7 (Task II)


Task II Code     Collaborative Unstructured      Collaborative Unstructured
                 High Faithfulness Task II       Average to Low Faithfulness Task II
> Median         13                              3
<= Median        21                              5

Table 6.42 Median Test Frequencies for Hypothesis 7 (Task II)

                Task II Code
N               42
Median          5.00
Chi-Square      .001
df              1
Asymp. Sig.     .969
a. Grouping Variable: Within Group Task II Faithfulness to the Method

Table 6.43 Test Statistics for Median Test for Hypothesis 7 (Task II)

Within Group Task III Faithfulness to Method                         N    Median Rank
Collaborative Unstructured High Faithfulness Task III                31   20.69
Collaborative Unstructured Average to Low Faithfulness Task III      11   23.77
Total (Task III Code)                                                42

Table 6.44 Kruskal-Wallis Median Rank for Hypothesis 7 (Task III)


                Task III Code
Chi-Square      .534
df              1
Asymp. Sig.     .465
a. Kruskal-Wallis Test

Table 6.45 Kruskal-Wallis Test Statistic for Hypothesis 7 (Task III)

Task III Code    Collaborative Unstructured      Collaborative Unstructured
                 High Faithfulness Task III      Average to Low Faithfulness Task III
> Median         8                               2
<= Median        23                              9

Table 6.46 Median Test Frequencies for Hypothesis 7 (Task III)

                Task III Code
N               42
Median          4.00
Chi-Square      .260
df              1
Asymp. Sig.     .610
a. Grouping Variable: Within Group Task III Faithfulness to Method

Table 6.47 Test Statistic for Median Test for Hypothesis 7 (Task III)

Hypotheses 8-10 deal with the within-group comparisons of developers who have high levels of cognitive ability. In testing these hypotheses, we define developers with high levels of cognitive ability as those participants who scored above the population median for all programmers on the Wonderlic Personnel Test. This median score is 29. Thus, participants who had Wonderlic scores of 30 or higher are defined as developers with high cognitive ability. Study 3 subjects who had Wonderlic scores of 29 or lower are defined as developers with average or below average cognitive ability.

In order to test Hypothesis 8, the Kruskal-Wallis and Median Tests for these variables were analyzed and interpreted for each task. At an alpha level of .10, the results of these statistical tests indicate that there is a significant difference in task performance for those developers who utilized collaborative structured problem solving and who had high cognitive ability for Task II, per the Kruskal-Wallis Test.


For Task II, the p values are as follows: Kruskal-Wallis (p value = .063) and Median Test (p value = .147). At an alpha level of .10, the results of these statistical tests indicate that there is not a significant difference in task performance for those developers who utilized collaborative structured problem solving and who had high cognitive ability for Task III. For Task III, the p values are as follows: Kruskal-Wallis (p value = .849) and Median Test (p value = .293). The results of the Kruskal-Wallis and the Median Tests are found in Tables 6.48 through Table 6.51.

Within Group Cognitive Ability                                   N    Median Rank
Task II Code
  Collaborative Structured High Cognitive Ability                20   27.65
  Collaborative Structured Average to Low Cognitive Ability      26   20.31
  Total                                                          46
Task III Code
  Collaborative Structured High Cognitive Ability                20   23.93
  Collaborative Structured Average / Below Cognitive Ability     26   23.17
  Total                                                          46

Table 6.48 Kruskal-Wallis Median Rank for Hypothesis 8

                Task II Code    Task III Code
Chi-Square      3.450           .036
df              1               1
Asymp. Sig.     .063            .849
a. Kruskal-Wallis Test

Table 6.49 Kruskal-Wallis Test Statistics for Hypothesis 8
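
The same two tests are repeated for each covariate split. A sketch of how the binary grouping variables behind Hypotheses 5-13 could be derived is shown below; the faithfulness and Wonderlic cut points follow the text, while the years-of-IT-experience threshold is an assumed placeholder, since the study's exact cut point is defined elsewhere.

    import pandas as pd

    def add_groupings(df: pd.DataFrame, it_years_cut: float = 5.0) -> pd.DataFrame:
        out = df.copy()
        out["high_faithfulness"] = out["faithfulness"] >= 6        # 6 or greater, of 7
        out["high_cognitive_ability"] = out["wonderlic"] >= 30     # population median = 29
        out["high_it_experience"] = out["years_it_exp"] >= it_years_cut  # assumed cut point
        return out

    demo = pd.DataFrame({"faithfulness": [7, 4], "wonderlic": [31, 25], "years_it_exp": [10, 2]})
    print(add_groupings(demo))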


Within Group Cognitive Ability      Collaborative Structured      Collaborative Structured
                                    High Cognitive Ability        Average to Low Cognitive Ability
Task II Code
  > Median                          12                            10
  <= Median                         8                             16
Task III Code
  > Median                          10                            9
  <= Median                         10                            17

Table 6.50 Median Test Frequencies for Hypothesis 8

                Task II Code    Task III Code
N               46              46
Median          4.00            4.00
Chi-Square      2.102           1.104
df              1               1
Asymp. Sig.     .147            .293

Table 6.51 Test Statistics for Median Test for Hypothesis 8

In order to test Hypothesis 9, the Kruskal-Wallis and Median Tests for these variables were analyzed and interpreted for each task. At an alpha level of .10, the results of these statistical tests indicate that there is not a significant difference in task performance for those developers who utilized non-collaborative structured problem solving and who had high cognitive ability for Task II. For Task II, the p values are as follows: Kruskal-Wallis (p value = .180) and Median Test (p value = .284). At an alpha level of .10, the results of these statistical tests indicate that there is a significant difference in task performance for those developers who utilized non-collaborative structured problem solving and who had high cognitive ability for Task III, per the Median Test. For Task III, the p values are as follows: Kruskal-Wallis (p value = .124) and Median Test (p value = .095). The results of the Kruskal-Wallis and the Median Tests are found in Tables 6.52 through Table 6.55.


Within Group Cognitive Ability                                      N    Median Rank
Task II Code
  Non-Collaborative Structured High Cognitive Ability               18   17.83
  Non-Collaborative Structured Average to Low Cognitive Ability     13   13.46
  Total                                                             31
Task III Code
  Non-Collaborative Structured High Cognitive Ability               18   18.11
  Non-Collaborative Structured Average to Low Cognitive Ability     13   13.08
  Total                                                             31

Table 6.52 Kruskal-Wallis Median Rank for Hypothesis 9

                Task II Code    Task III Code
Chi-Square      1.794           2.360
df              1               1
Asymp. Sig.     .180            .124
a. Kruskal-Wallis Test

Table 6.53 Kruskal-Wallis Test Statistic for Hypothesis 9

Within Group Cognitive Ability      Non-Collaborative Structured    Non-Collaborative Structured
                                    High Cognitive Ability          Average / Below Cognitive Ability
Task II Code
  > Median                          9                               4
  <= Median                         9                               9
Task III Code
  > Median                          11                              4
  <= Median                         7                               9

Table 6.54 Median Test Frequencies for Hypothesis 9


                Task II Code    Task III Code
N               31              31
Median          4.00            4.00
Chi-Square      1.146           2.783
df              1               1
Asymp. Sig.     .284            .095

Table 6.55 Test Statistics for Median Test for Hypothesis 9

In order to test Hypothesis 10, the Kruskal-Wallis and Median Tests for these variables were analyzed and interpreted for each task. At an alpha level of .10, the results of these statistical tests indicate that there is a significant difference in task performance for those developers who utilized collaborative unstructured problem solving and who had high cognitive ability for Tasks II and III, per the Kruskal-Wallis Test. For Task II, the p values are as follows: Kruskal-Wallis (p value = .079) and Median Test (p value = .170). For Task III, the p values are as follows: Kruskal-Wallis (p value = .064) and Median Test (p value = .546). The results of the Kruskal-Wallis and the Median Tests are found in Tables 6.56 through Table 6.59.

Within Group Cognitive Ability                                     N    Median Rank
Task II Code
  Collaborative Unstructured High Cognitive Ability                26   24.08
  Collaborative Unstructured Average / Below Cognitive Ability     16   17.31
  Total                                                            42
Task III Code
  Collaborative Unstructured High Cognitive Ability                26   24.19
  Collaborative Unstructured Average / Below Cognitive Ability     16   17.13
  Total                                                            42

Table 6.56 Kruskal-Wallis Median Rank for Hypothesis 10


                Task II Code    Task III Code
Chi-Square      3.093           3.433
df              1               1
Asymp. Sig.     .079            .064

Table 6.57 Kruskal-Wallis Test Statistics for Hypothesis 10

Within Group Cognitive Ability      Collaborative Unstructured    Collaborative Unstructured
                                    High Cognitive Ability        Average / Below Cognitive Ability
Task II Code
  > Median                          12                            4
  <= Median                         14                            12
Task III Code
  > Median                          7                             3
  <= Median                         19                            13

Table 6.58 Median Test Frequencies for Hypothesis 10

                Task II Code    Task III Code
N               42              42
Median          5.00            4.00
Chi-Square      1.879           .365
df              1               1
Asymp. Sig.     .170            .546

Table 6.59 Test Statistics for Median Test for Hypothesis 10

In order to test Hypothesis 11, the Kruskal-Wallis and Median Tests for these variables were analyzed and interpreted for each task. At an alpha level of .10, the results of these statistical tests indicate that there is not a significant difference in task performance for those developers who utilized collaborative structured problem solving and who had high IT experience. For Task II, the p values are as follows: Kruskal-Wallis (p value = .989) and Median Test (p value = .323). For Task III, the p values are as follows: Kruskal-Wallis (p value = .191) and Median Test (p value = .800).


The results of the Kruskal-Wallis and the Median Tests are found in Tables 6.60 through Table 6.63.

Within Group Years IT Experience                             N    Median Rank
Task II Code
  Collaborative Structured High IT Experience                12   22.54
  Collaborative Structured Average or Low IT Experience      32   22.48
  Total                                                      44
Task III Code
  Collaborative Structured High IT Experience                12   26.58
  Collaborative Structured Average or Low IT Experience      32   20.97
  Total                                                      44

Table 6.60 Kruskal-Wallis Median Rank for Hypothesis 11

                Task II Code    Task III Code
Chi-Square      .000            1.712
df              1               1
Asymp. Sig.     .989            .191

Table 6.61 Kruskal-Wallis Test Statistics for Hypothesis 11

Within Group Years IT Experience    Collaborative Structured    Collaborative Structured
                                    High IT Experience          Average or Low IT Experience
Task II Code
  > Median                          4                           16
  <= Median                         8                           16
Task III Code
  > Median                          5                           12
  <= Median                         7                           20

Table 6.62 Median Test Frequencies for Hypothesis 11


                Task II Code    Task III Code
N               44              44
Median          4.00            4.00
Chi-Square      .978            .064
df              1               1
Asymp. Sig.     .323            .800

Table 6.63 Test Statistics for Median Test for Hypothesis 11

In order to test Hypothesis 12, the Kruskal-Wallis and Median Tests for these variables were analyzed and interpreted for each task. At an alpha level of .10, the results of these statistical tests indicate that there is a significant difference in task performance for those developers who utilized non-collaborative structured problem solving and who had high IT experience for Task II. For Task II, the p values are as follows: Kruskal-Wallis (p value = .062) and Median Test (p value = .008). At an alpha level of .10, the results of these statistical tests indicate that there is not a significant difference in task performance for those developers who utilized non-collaborative structured problem solving and who had high IT experience for Task III. For Task III, the p values are as follows: Kruskal-Wallis (p value = .491) and Median Test (p value = .343). The results of the Kruskal-Wallis and the Median Tests are found in Tables 6.64 through Table 6.67.

Within Group Years IT Experience                                  N    Median Rank
Task II Code
  Non-Collaborative Structured High IT Experience                 15   17.53
  Non-Collaborative Structured Average or Low IT Experience       29   25.07
  Total                                                           44
Task III Code
  Non-Collaborative Structured High IT Experience                 15   20.67
  Non-Collaborative Structured Average or Low IT Experience       29   23.45
  Total                                                           44

Table 6.64 Kruskal-Wallis Median Rank for Hypothesis 12


                Task II Code    Task III Code
Chi-Square      3.486           .473
df              1               1
Asymp. Sig.     .062            .491

Table 6.65 Kruskal-Wallis Test Statistics for Hypothesis 12

Within Group Years IT Experience    Non-Collaborative Structured    Non-Collaborative Structured
                                    High IT Experience              Average or Low IT Experience
Task II Code
  > Median                          3                               18
  <= Median                         12                              11
Task III Code
  > Median                          5                               14
  <= Median                         10                              15

Table 6.66 Median Test Frequencies for Hypothesis 12

                Task II Code    Task III Code
N               44              44
Median          4.00            4.00
Chi-Square      7.013           .900
df              1               1
Asymp. Sig.     .008            .343

Table 6.67 Test Statistics for Median Test for Hypothesis 12

In order to test Hypothesis 13, the Kruskal-Wallis and Median Tests for these variables were analyzed and interpreted for each task. At an alpha level of .10, the results of these statistical tests indicate that there is not a significant difference in task performance for those developers who utilized collaborative unstructured problem solving and who had high IT experience. For Task II, the p values are as follows: Kruskal-Wallis (p value = .714) and Median Test (p value = .732). For Task III, the p values are as follows: Kruskal-Wallis (p value = .473) and Median Test (p value = .601). The results of the Kruskal-Wallis and the Median Tests are found in Tables 6.68 through Table 6.71.


Within Group Years IT Experience                                N    Median Rank
Task II Code
  Collaborative Unstructured High IT Experience                 6    17.75
  Collaborative Unstructured Average or Low IT Experience       26   16.21
  Total                                                         32
Task III Code
  Collaborative Unstructured High IT Experience                 6    14.08
  Collaborative Unstructured Average or Low IT Experience       26   17.06
  Total                                                         32

Table 6.68 Kruskal-Wallis Median Rank for Hypothesis 13

                Task II Code    Task III Code
Chi-Square      .135            .516
df              1               1
Asymp. Sig.     .714            .473

Table 6.69 Kruskal-Wallis Test Statistics for Hypothesis 13

Within Group Years IT Experience    Collaborative Unstructured    Collaborative Unstructured
                                    High IT Experience            Average or Low IT Experience
Task II Code
  > Median                          3                             11
  <= Median                         3                             15
Task III Code
  > Median                          1                             7
  <= Median                         5                             19

Table 6.70 Median Test Frequencies for Hypothesis 13
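
Each "Median Test Frequencies" table above is a two-by-two classification of code scores against the grand median. The sketch below rebuilds such a table from a hypothetical data frame; the group labels and scores are placeholders.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)
    df = pd.DataFrame({
        "group": rng.choice(["high_it_experience", "avg_or_low_it_experience"], size=32),
        "code_t2": rng.integers(1, 11, size=32),            # placeholder code scores
    })

    grand_median = df["code_t2"].median()
    df["side"] = np.where(df["code_t2"] > grand_median, "> median", "<= median")
    print(pd.crosstab(df["side"], df["group"]))             # mirrors the frequency tables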


                Task II Code    Task III Code
N               32              32
Median          5.00            4.00
Chi-Square      .117            .274
df              1               1
Asymp. Sig.     .732            .601

Table 6.71 Test Statistics for Median Test for Hypothesis 13

Results of Study 3

The primary goal of Study 3 was to determine if variations, or adaptations, in the developmental method impact individual performance outcomes. An analysis of the overall results of Task III for Study 3 indicates that there are differences between the treatment groups. Fifty percent (50%) of the developers working alone scored above the median score for Task III code for all method variations, compared to 25% of the developers collaborating using test cases and 41% of the developers using brainstorming.

While the study findings show that there are statistical differences in code performance between pairs of subjects utilizing a structured problem solving approach and those that did not, the study hypotheses were not supported. Developers who worked alone using test cases had higher code performance as compared to pairs of developers using test cases. Additionally, collaborators reported higher levels of satisfaction than non-collaborators.

The test of covariates indicates there are no significant differences between or within the groups in task performance for developers who were highly faithful to the development method. Cognitive ability appears to be a statistically significant factor in task performance, except when developers work alone. For developers working non-collaboratively utilizing structured problem solving, developers with higher levels of IT experience had higher levels of individual code performance. The study hypotheses and results are shown in Table 6.72.


Study 3 Hypotheses and Results

H1: Developers working collaboratively utilizing a structured problem solving developmental method will have higher levels of individual task performance than developers working collaboratively utilizing an unstructured problem solving developmental method. (Not supported; however, a significant difference was found for Task III.)

H2: Developers working collaboratively utilizing a structured problem solving developmental method will have higher levels of individual task performance than developers working non-collaboratively utilizing a structured problem solving developmental method. (Not supported; however, a significant difference was found for Task III.)

H3: Developers working non-collaboratively utilizing a structured problem solving developmental method will have higher levels of individual task performance than developers working collaboratively utilizing an unstructured problem solving developmental method. (Not supported.)

H4: Developers working collaboratively will have higher levels of individual satisfaction with the developmental method than developers working non-collaboratively. (Supported for Task II and Task III.)

H5: For developers working collaboratively utilizing a structured problem solving developmental method, developers who perceive they were more faithful to the method will have higher levels of individual task performance. (Not supported.)

H6: For developers working non-collaboratively utilizing a structured problem solving developmental method, developers who perceive they were more faithful to the method will have higher levels of individual task performance. (Not supported.)

H7: For developers working collaboratively utilizing an unstructured problem solving developmental method, developers who perceive they were more faithful to the method will have higher levels of individual task performance. (Not supported.)

H8: For developers working collaboratively utilizing a structured problem solving developmental method, developers with higher cognitive ability will have higher levels of individual task performance. (Supported for Task II.)

H9: For developers working non-collaboratively utilizing a structured problem solving developmental method, developers with higher cognitive ability will have higher levels of individual task performance. (Not supported.)

H10: For developers working collaboratively utilizing an unstructured problem solving developmental method, developers with higher cognitive ability will have higher levels of individual task performance. (Supported for Task II and Task III.)

H11: For developers working collaboratively utilizing a structured problem solving developmental method, developers with higher levels of IT experience will have higher levels of individual task performance. (Not supported.)

H12: For developers working non-collaboratively utilizing a structured problem solving developmental method, developers with higher levels of IT experience will have higher levels of individual task performance. (Supported for Task II.)

H13: For developers working collaboratively utilizing an unstructured problem solving developmental method, developers with higher levels of IT experience will have higher levels of individual task performance. (Not supported.)

Table 6.72 Summary of Hypotheses and Results

Limitations

There are a number of inherent limitations to this study. Although laboratory experiments allow for greater precision in the control and measurement of subjects, they may lack generalizability to the field and realism for the participants. Some of the measures included in the study were self-reported.
Participants were allotted short periods of time to complete the experimental programming tasks, which may not be fully representative of programming projects and conditions used in industry.


Chapter Seven

Discussion

The purpose of this dissertation was to examine the individual developer characteristics, developmental settings, collaborative methods and the processes during development that impact collaborative programming performance outcomes. The performance variables examined in the study include pair task performance, individual task performance and individual satisfaction with the method. The underlying premise of this research is that successful collaborative outcomes, especially fewer defects, are driven by these factors. The results of these studies will further our understanding of collaborative programming methods and related research questions.

A multi-phase research design was utilized in this research. Three laboratory experiments were conducted to explore the individual developer differences, developmental setting, collaborative methods and process differences that impact collaborative programming performance outcomes. This chapter provides a review of the significant findings of the dissertation, a discussion of research contributions and managerial implications, limitations of the study and opportunities for future research.

Significant Findings

An examination of the significant findings of this dissertation is presented in four major sections. The first three sections discuss the results of each of the three studies in collaborative programming. The fourth section includes a discussion of the additive nature of the studies, and how the findings bring meaning to the variables and constructs that are relevant to collaborative programming performance.

Study 1 Findings

The primary focus of Study 1 was to investigate the individual characteristics and processes that impact collaborative programming (pair programming) performance. In Study 1, a small number of pairs was studied in detail as they prepared three programming tasks. The study was conducted in two phases. The findings of Study 1 suggest that both individual characteristics and the way in which the collaborative programming method is appropriated are relevant to performance. The results also demonstrate how high levels of distributed cognition between the developers help explain enhanced performance.

In Phase 1 of Study 1, we examine the impact of the individual developer characteristics (cognitive ability and conflict handling style). By viewing the videotapes of the subjects who were utilizing pair programming, researchers are able to have a window into the process of pair programming.


This study focuses on how task conflict and faithfulness to the collaborative programming (pair programming) method may impact performance outcomes. The findings of this qualitative analysis underscore the role of conflict and faithfulness to the method in collaborative programming (pair programming) outcomes. By contrasting two pairs of developers, the study shows that task performance outcomes were moderated by faithfulness to the method and conflict.

While all of the pairs in Phase 1 of Study 1 had sufficient cognitive ability to perform the programming tasks successfully, the dyads highlighted for analysis offer some interesting insights into the importance of faithfully appropriating the pair programming method. The highly faithful dyad had constant interaction while working to prepare test cases before coding. In addition, limited task conflict, which was resolved, was noted. The performance of the dyad was consistently high for each experimental task, suggesting that these processes are of importance to successful outcomes.

Conversely, the pair in which there was an acceleration of conflict and high withdrawal by one subject had initially high performance, but became the lowest performer by the end of the study. The high-avoidance conflict handling style manifested itself in low interaction between the subjects and in the dominance of his partner in performing the experimental tasks. These factors resulted in an escalation of task conflict with each programming exercise. Thus, performance outcomes continued to suffer throughout the study.

It is also interesting to note that there was little variation in the satisfaction reported by the subjects in Phase 1 of Study 1, with most participants reporting high levels of satisfaction with the pair programming method. The only notable exception was the subject in the pair that exhibited high withdrawal or avoidance throughout the experiment. The results of Phase 1 of Study 1 suggest that both individual performance differences, as well as processes during development, impact performance outcomes.

In Phase 2 of Study 1, we utilize the theoretical perspective of distributed cognition to explain how and why collaborative programming (pair programming) may result in higher task performance. By coding the transcripts of developers as they worked on an experimental task, qualitative analysis reveals some interesting differences between a very high and a very low performing pair.

The high performing pair displayed very high levels of distributed cognition. The nature of the interaction between the developers was constant and dynamic, with each developer making and taking perspectives on how the coding task should be approached. There was also strong evidence of concern for each other during the exercise. Conversely, the pair of developers who displayed low or negative levels of distributed cognition had very low performance. These subjects' exchanges can be characterized by minimal interaction and low levels of perspective making between the developers. In this dyad, one of the developers is dominant, performing most of the work. The other subject acknowledges his partner's efforts and offers very little input relative to the completion of the task.


Phase 2 of Study 1 provides little evidence of distributed cognition related to the preparation of test cases. Additionally, it does not appear that developers who prepared more correct test cases necessarily produced higher quality code.

It is interesting to note that in Phase 2 of Study 1, the high performing pair also had high levels of cognitive ability and significant years of IT experience. This may suggest that high cognitive ability and greater IT experience enhance programming results.

Study 2 Findings

Study 2 focuses on how developmental setting impacts performance outcomes for collaborative programming (pair programming). We also continue to investigate the individual differences and processes that impact performance. Two experimental tasks were included for analysis. Approximately half of the dyads utilized collaborative programming (pair programming) in a face-to-face setting, while the remaining pairs programmed virtually.

Variation was noted in the subjects for individual differences (cognitive ability, conflict management style and IT experience). The results of this study show that while it is possible to use collaborative programming in a virtual setting, the ability to produce high quality code is negatively impacted. The face-to-face developers had significantly higher levels of code performance, as compared to their virtual counterparts. Programmers who worked in a face-to-face setting also reported higher levels of satisfaction. These findings suggest that collaborative programming is not an effective methodology to use in a virtual developmental setting. These findings are consistent with media richness theory, which posits that as communication modalities diminish, performance is inhibited due to issues related to coordination. The findings also suggest that for intellective tasks that require problem solving, face-to-face settings are preferable in order to maximize performance.

Study 2 suggests that in addition to development setting, the individual developer differences of the pair (cognitive ability, conflict handling style and years of IT experience) interact to impact pair performance. In Study 2, we viewed the impact of pairing as compensatory. When the characteristic of the developer dyad is determined by the higher cognitive ability individual in the pair, code performance is positively impacted. When the characteristic of the developer dyad is determined by the individual in the dyad with the more integrative conflict management style, developer integrative style and setting interact to impact test case performance. Prior research in conflict has shown that the integrative style is associated with higher levels of problem solving. And finally, the results support the notion that experience impacts results. When the characteristic of the developer dyad is determined by the higher IT experience individual in the dyad, developer experience and setting interact to impact both test case and code performance.

These findings suggest that in pairing individuals for collaborative programming, individual characteristics should be taken into account. In addition to high cognitive ability, more integrative conflict styles and greater IT experience may enhance performance.


Study 3 Findings

Study 3 focuses on the impact on performance when variations, or adaptations, take place in operationalizing the collaborative programming method (pair programming). In today's business environment, adaptations of pure pair programming are becoming more commonplace. Three treatment groups were studied: pairs of programmers who used unstructured problem solving (brainstormed together) and then wrote code alone; pairs of developers who used structured problem solving (prepared test cases together) and then wrote code alone; and individual developers who utilized structured problem solving (test cases) and then wrote code. Two experimental tasks were analyzed. Study 3 also continues to investigate how individual developer characteristics and processes during development impact performance outcomes.

An analysis of the overall results of Task III for Study 3 indicates that there are differences between the treatment groups. Fifty percent (50%) of the developers working alone scored above the median score for Task III code for all method variations, compared to 25% of the developers collaborating using test cases and 41% of the developers using brainstorming.

While the study findings show that there are statistical differences in code performance between pairs of subjects utilizing a structured problem solving approach and those that did not, the study hypotheses were not supported. Developers who worked alone using test cases had higher code performance as compared to pairs of developers using test cases. This may suggest that the act of structured problem solving is more important to better code performance than the act of working collaboratively.

Additionally, Study 3 participants reported higher levels of satisfaction when working with another developer, as opposed to working alone. These findings are consistent with those of prior studies on pair programming, in that developers working in pairs reported greater satisfaction with the method. This may suggest that collaboration is more closely related to satisfaction with the work setting than to the development method.

The importance of individual developer differences is also highlighted by Study 3. Cognitive ability appears to play an important role in performance, particularly when working collaboratively. IT experience has a significant impact on code performance for solo programmers, who did not benefit from collaborating about the tasks.

Overall Study Findings

In order to gain additional insight into our investigation of collaborative programming (pair programming and variations of pair programming), we reviewed the code performance results for subjects across all three studies. Since all participants produced code, we reviewed code task performance for Tasks II and III. One hundred and fifty-nine (159) observations of data were collected across all three studies for code.


Of those observations, 130 represent pair task performance, while 29 observations represent individual task performance. As shown in Table 7.1, mean code performance on Task II of 4.43 is higher than the mean performance on Task III of 3.82.


                 N     Minimum   Maximum   Mean   Std. Deviation
Task II Code     159   1         10        4.43   2.356
Task III Code    159   1         10        3.82   2.142
Valid N (listwise) 159

Table 7.1 Summary of Code Performance, All Studies

The mean scores were also analyzed for each treatment group within each of the three studies. The developers in Study 1 had the highest mean score on code for each task (Task II = 5.00 and Task III = 4.41). It should be noted that Study 1 subjects were given slightly more time to complete each task (one hour as opposed to 45 minutes); however, they may or may not have taken the entire time allotted. Developers in Study 1 used the pure pair programming method.

Study 3 participants utilized variations of pure pair programming. Subjects who worked alone and used structured problem solving (test cases) before writing code had mean scores that essentially equaled those of Study 1 subjects (Task II = 5.00 and Task III = 4.31). Forty-five minutes was allocated to complete the experimental tasks in Study 3; however, participants did not necessarily use the entire time allotted to complete their work. Pairs of subjects who utilized structured problem solving (test cases) had the third highest level of code performance (Task II = 4.84 and Task III = 3.59), while pairs using unstructured problem solving (brainstorming) had mean code performance that was lower (Task II = 4.33 and Task III = 4.13).

Study 2 subjects used the pure pair programming method. The mean code performance for the face-to-face subjects was close to that of the subjects in Study 3 who used structured collaboration (Task II = 3.91 and Task III = 3.41). Virtual pairs clearly had the least favorable mean performance levels (Task II = 2.31 and Task III = 2.38). A summary of the mean scores for each task by study and treatment group is shown in Table 7.2.
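
Summaries like Tables 7.1 and 7.2 follow directly from the pooled observations. The sketch below assumes a hypothetical combined frame `all_obs` with placeholder labels and scores; only the grouping logic reflects the tables above.

    import numpy as np
    import pandas as pd

    labels = [
        "Pair Programming, Study 1, Face-to-Face",
        "Pair Programming, Study 2, Face-to-Face",
        "Pair Programming, Study 2, Virtual",
        "Non-Collaborative Structured, Study 3",
        "Collaborative Structured, Study 3, Face-to-Face",
        "Collaborative Unstructured, Study 3, Face-to-Face",
    ]
    rng = np.random.default_rng(3)
    all_obs = pd.DataFrame({
        "method_study_setting": rng.choice(labels, size=159),
        "code_t2": rng.integers(1, 11, size=159),           # placeholder scores
        "code_t3": rng.integers(1, 11, size=159),
    })

    # Table 7.1 style: overall descriptives for both tasks.
    print(all_obs[["code_t2", "code_t3"]].agg(["count", "min", "max", "mean", "std"]))
    # Table 7.2 style: mean, N and standard deviation by treatment group.
    print(all_obs.groupby("method_study_setting")[["code_t2", "code_t3"]]
                 .agg(["mean", "count", "std"]).round(3))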


Method, Study, Setting                                           Task II Code   Task III Code
Pair Programming, Study 1, Face-to-Face
  Mean                                                           5.00           4.41
  N                                                              11             11
  Std. Deviation                                                 3.256          2.871
Pair Programming, Study 2, Face-to-Face
  Mean                                                           3.91           3.41
  N                                                              16             16
  Std. Deviation                                                 2.162          2.375
Pair Programming, Study 2, Virtual
  Mean                                                           2.31           2.38
  N                                                              13             13
  Std. Deviation                                                 1.109          1.543
Non-Collaborative Structured, Study 3 (Subjects Worked Alone)
  Mean                                                           5.00           4.31
  N                                                              29             29
  Std. Deviation                                                 2.018          2.392
Collaborative Structured, Study 3, Face-to-Face
  Mean                                                           4.84           3.59
  N                                                              44             44
  Std. Deviation                                                 2.596          1.896
Collaborative Unstructured, Study 3, Face-to-Face
  Mean                                                           4.33           4.13
  N                                                              46             46
  Std. Deviation                                                 2.098          1.928
Total
  Mean                                                           4.43           3.82
  N                                                              159            159
  Std. Deviation                                                 2.356          2.142

Table 7.2 Summary of Code Performance by Study, Method and Setting

Box plots, or graphical descriptions based on quartiles of the data, were also produced. A box plot is based on the quartiles of a data set: quartiles are values that partition the data set into four groups, each containing 25% of the measurements, so by definition 50% of the observations fall inside the box. The median is shown by the line in the box (McClave & Benson, 1991). These patterns of performance on code are essentially the same for both Task II and Task III. As shown in the plot, the highest scores for Task II code were reported for subjects in Study 1.


However, with the exception of the virtual pairs, the range of scores did not show much variation. The median scores of participants who used structured problem solving (Study 3) were higher than the median scores of the other groups. These findings suggest that the act of using structured problem solving (test cases) may be more relevant to higher levels of performance than the aspect of collaboration. The box plot for Task II is shown in Figure 7.1.

[Figure 7.1 omitted: box plots of Task II code scores (0-10) for each group: Pair Programming, Study 1, Face-to-Face; Pair Programming, Study 2, Face-to-Face; Pair Programming, Study 2, Virtual; Non-Collaborative Structured, Study 3; Collaborative Structured, Study 3, Face-to-Face; Collaborative Unstructured, Study 3, Face-to-Face.]

Figure 7.1 Box Plots of Findings by Method, Study & Setting, Task II Code
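
A Figure 7.1-style box plot (quartile boxes with median lines) can be drawn from the same pooled data; this sketch reuses the hypothetical `all_obs` frame from the summary sketch above.

    import matplotlib.pyplot as plt

    grouped = all_obs.groupby("method_study_setting")["code_t2"]
    names, data = zip(*[(name, vals.to_numpy()) for name, vals in grouped])

    fig, ax = plt.subplots(figsize=(10, 4))
    ax.boxplot(data)                                        # boxes span the interquartile range
    ax.set_xticks(range(1, len(names) + 1))
    ax.set_xticklabels(names, rotation=30, ha="right")
    ax.set_ylabel("Task II Code (0-10)")
    ax.set_xlabel("Method, Study, Setting")
    fig.tight_layout()
    plt.show()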


Next, a Pearson Correlation Matrix was utilized to investigate the correlation between test cases and code. Eighty-five of the observations across all three studies included both test cases and code. The Pearson correlation matrix revealed low to moderate correlations between correct test cases and code for each task. The findings appear to be consistent across all studies. Given that mean code performance was the highest for the process study (Study 1) and for structured problem solving (Study 3), these correlations suggest that the act of using a structured problem solving approach (test cases) may enhance code performance. The Pearson Correlation Matrix is shown in Table 7.3.

                       Task II      Task II   Task III     Task III
                       Test Cases   Code      Test Cases   Code
Task II Test Cases
  Pearson Correlation  1            .228*     -.053        .054
  Sig. (2-tailed)                   .035      .628         .621
  N                    85           85        85           85
Task II Code
  Pearson Correlation  .228*        1         .332**       .376**
  Sig. (2-tailed)      .035                   .002         .000
  N                    85           85        85           85
Task III Test Cases
  Pearson Correlation  -.053        .332**    1            .218*
  Sig. (2-tailed)      .628         .002                   .045
  N                    85           85        85           85
Task III Code
  Pearson Correlation  .054         .376**    .218*        1
  Sig. (2-tailed)      .621         .000      .045
  N                    85           85        85           85
* Correlation is significant at the 0.05 level (2-tailed).
** Correlation is significant at the 0.01 level (2-tailed).

Table 7.3 Pearson Correlation Matrix, All Studies

Contributions

The knowledge gained from this study will aid academics and practitioners alike in enhancing quality outcomes of collaborative software development, as well as suggest new strategies for optimizing quality code development. This section discusses the contributions provided to each group.

Contributions to Researchers

The study makes several relevant contributions to our understanding of collaborative programming software development methodologies. First, the research literature on software development practices has focused primarily on traditional development methodologies. This study offers perspectives on the newer, innovative collaborative software development practices. Additionally, it adds to the body of knowledge on software development methodologies in the management information systems domain. To date, minimal research has investigated collaborative programming (pair programming).

Second, collaborative programming has emerged as a potentially viable software development technique that addresses the continuing need to produce high quality software in shorter time frames. This is of critical importance to software development success, since poorly tested software increases the associated risks of poor quality and is likely to account for higher production costs.


Earlier detection of errors, or bugs, during development is of particular interest. The findings suggest that while developers on average may enjoy collaborative programming (pair programming), the use of the pair programming method in and of itself may not necessarily result in better code. The findings suggest that the act of structured problem solving, not collaboration, may be significant to enhanced performance. This suggests that variations of pure pair programming may be more appropriate.

This study extends the work of prior research on collaborative programming (pair programming). The results provide further perspectives on the factors that impact collaborative programming performance. The results of the study underscore how individual developer differences and process differences impact task performance. While cognitive ability, conflict management style and years of IT experience are important, how the method is appropriated is of equal importance. The negative impact of withdrawal and avoidance is demonstrated by the continual decline in performance of the pair profiled in the process study (Study 1), which is supported by prior research on conflict. The research uses the theory of distributed cognition to explain higher performance in the collaborative programming setting. Understanding differences in performance and productivity between individual programmers is important, as it may help us understand how we may raise the lowest level of performance to much higher levels, as well as select individuals for the collaborative development setting.

Third, this research represents an initial attempt to explore the impact of developmental setting on performance outcomes. Virtual software development is becoming a reality as organizations continue to strive to meet business needs. The findings suggest that collaborative programming in a virtual setting offers little advantage, with performance results falling far below those of face-to-face developers.

Fourth, the research literature to date on collaborative software development has focused primarily on pure pair programming. Little research, if any, has explored variations in the methods of collaboration used in collaborative programming. This is an important area, as adaptations of standard pair programming are being used in practice. The high level of practitioner interest in alternative methods, as well as in ways to collaborate, is driven in great part by the perceived misallocation of resources imposed in implementing the pure pair programming method. Results of the study suggest that using a structured problem solving approach may be of key importance to enhancing code performance. This supports the notion that variations in the application of the pure pair programming method are valid, particularly when structured approaches are instituted. The use of brainstorming is not strongly supported. It is hoped that the results of our investigation may prove meaningful in providing a framework for collaborative software development. Such a framework is essential if organizations are to plan effectively and make sensible allocations of resources (Gory 1989) to software development tasks.


Contributions to Practitioners

This study is of value to practitioners in a number of ways. First, the results suggest that if collaborative programming (pair programming) is to be utilized, then organizations must consider how individual developer differences and the processes used during development impact results. The study demonstrates that cognitive ability, conflict management style and years of IT experience should be considered in setting up pairs. Organizations may want to screen applicants prior to assigning them to pairs. Additionally, how pair programming is performed is equally important. Two processes examined in this study are faithfulness to the method and task conflict. Organizations should ensure that proper training takes place when using pair programming. Training should focus upon the method and processes to be followed (such as developing test cases) during collaborative programming, as well as conflict management and interaction strategies between the developers. This training may include intervention strategies that relate to conflict handling style.

The findings also suggest that variations or adaptations of collaborative programming (pair programming) are a realistic approach to implementation in the workplace. The use of test cases, or structured problem solving, appears to be strongly supported and should be the focus of training. The findings also suggest that some level of collaboration may be desirable, as in all instances developers reported higher satisfaction working together, as opposed to working alone.

Limitations to the Study

There are a number of inherent limitations to this study. In all three studies, the experimental method was utilized to investigate collaborative programming. Although laboratory experiments allow for greater precision in the control and measurement of subjects, they lack generalizability to the field. Subjects were allowed short periods of time to complete the experimental programming tasks, which may not be fully representative of programming projects used in industry. Since subjects were working in a laboratory setting, their behavior may not be representative of their behavior in a non-contrived work environment. As a result, weaker effects may have been noted: given the novelty of working together, subjects may make efforts to mask impacts that they view as negative. And finally, some of the measures included in the study were self-reported.

An inherent limitation of Study 1 is the low number of participants. Additionally, since subjects were audio and video taped, their behavior may not be representative of their behavior in a non-contrived setting. Participants in Study 2 were students who had low levels of IT experience. Additionally, in Study 2, collaborative software (Groove) was utilized by subjects who programmed in a virtual environment. What impact, if any, this phenomenon may have had on performance outcomes remains unclear.


And finally, Study 1 participants had slightly longer time to complete the experimental tasks. Study 3 participants did not work at a computer while performing the experimental tasks.

Opportunities for Future Research

The findings of this study suggest a number of opportunities for the extension of the research. The first involves a further investigation of the impact of structured problem solving (test cases) on task performance. The second concerns enhancing quality outcomes when pair programming is used. And the third explores how training and learning may be enhanced by utilizing pair programming.

A pilot study has been conducted to investigate the impact of test cases on code performance. Subjects were assigned to two treatment groups. In the first treatment group, subjects were given test cases for the experimental task and then instructed to write code. The second treatment group was given only the experimental task. The results of this study should enable a further understanding of the relationship between structured problem solving and quality code production.

Second, the impact of individual differences and processes deserves further study. The findings suggest that cognitive ability and years of IT experience may be of particular relevance to enhancing collaborative programming outcomes. A quasi-experiment is proposed in which pairs of developers are assigned based on prescreening of cognitive ability and experience. The results of this study may give guidance on how individuals should be paired in order to increase performance outcomes.

Third, the investigation of distributed cognition suggests that collaborative programming may be an effective training tool for organizations. An investigation of pair programming grounded in learning theory offers an opportunity for future research.


References

Ambler, S. "Adopting Extreme Programming," Computing Canada, Willowdale, April 2000.

Ang, S. and Slaughter, S. "Work Outcomes and Job Design for Contract Versus Permanent Information Systems Professionals on Software Development Teams," MIS Quarterly, Volume 25, Number 3, September 2001, pp. 321-350.

Argyris, C. and Schon, D. Organizational Learning: A Theory of Action Perspective. Reading, MA: Addison-Wesley Publishing Company, 1978.

Atkinson, R. C. and Shiffrin, R. M. "Human memory: a proposed system and its control processes." In K. Spence and J. Spence (Eds.), The Psychology of Learning and Motivation, Vol. 2. New York: Academic Press, 1968, pp. 89-163.

Barki, H. and Hartwick, J. "Interpersonal Conflict and its Management in Information Systems Development," MIS Quarterly, Vol. 25, No. 2, June 2001, pp. 195-228.

Barrick, M. R., Mount, M. K. and Strauss, J. P. "Conscientiousness and performance of sales representatives: Test of the mediating effects of goal setting," Journal of Applied Psychology, 78, 1993, pp. 715-722.

Barrick, M. R., Stewart, G. L., Neubert, M. J. and Mount, M. K. "Relating Member Ability and Personality to Work-team Processes and Team Effectiveness," Journal of Applied Psychology (83:3), June 1998, pp. 377-391.

Baskerville, R., Travis, J. and Truex, D. "Systems without method: the impact of new technologies on information systems development projects." In Kendall, K., DeGross, J. and Lyytinen, K. (Eds.), The Impact of Computer Supported Technologies on Information Systems Development. Elsevier Science Publishers B.V., North Holland Press, 1992, pp. 241-269.

Beck, K. Extreme Programming Explained. Boston: Addison-Wesley, 2000.

Belanger, F. and Collins, R. W. "Identifying Candidates for Successful Telecommuting Outcomes," The Information Society (13), 1997.

Bernard, E. "Object-oriented methodologies." Retrieved from: http://www.tao.com, 1995.


Biggs, M. "Pair programming: Development times two," InfoWorld, Framingham, July 24, 2000.

Blake, R. and Mouton, F. The Managerial Grid. Gulf: Houston, 1964.

Bobko, P., Roth, P. L. and Potosky, D. "Derivation and Implications of a Meta-analytic Matrix Incorporating Cognitive Ability, Alternative Predictors, and Job Performance," Personnel Psychology (52:3), Autumn 1999, pp. 561-589.

Brooks, F. The Mythical Man-Month, 2nd Edition. Reading: Addison-Wesley, 1995.

Brooks, R. E. "Studying Programmer Behavior Experimentally: The Problems of Proper Methodology," Communications of the ACM, Volume 23, Number 4, 1980. Reprinted in Human Factors in Software Development, 2nd Edition, IEEE Computer Society Press, 1985, pp. 207-213.

Brown, A. L., Ash, D., Rutherford, M., Nakagawa, K., Gordon, A. and Campione, J. "Distributed expertise in the classroom." In G. Salomon (Ed.), Distributed Cognitions. New York: Cambridge Press, 1993, pp. 188-228.

Burke, R. "Methods of resolving superior-subordinate conflict: The constructive use of subordinate differences and disagreements," Organizational Behavior and Human Performance (5), 1970, pp. 393.

Campbell, J. P. "Modeling the performance prediction problem in industrial and organizational psychology." In M. D. Dunnette and L. M. Hough (Eds.), Handbook of Industrial and Organizational Psychology (2nd ed., Vol. 1). Palo Alto, CA: Consulting Psychologists Press, 1990, pp. 687-732.

Campbell, J. P. "Alternative models of job performance and their implications for selection and classification." In M. G. Rumsey, C. B. Walker and J. H. Harris (Eds.), Personnel Selection and Classification. Hillsdale, NJ: Erlbaum, 1994, pp. 33-52.

Campion, M. A., Medsker, G. J. and Higgs, A. C. "Relations between Work Group Characteristics and Effectiveness: Implications for Designing Effective Work Groups," Personnel Psychology 46, 1993, pp. 823-47.

Capozzoli, T. "Conflict Resolution: A Key Ingredient in Successful Teams," SuperVision (60:11), November 1999, pp. 14-16.

Carlson, J. R. and Zmud, R. W. "Channel Expansion Theory and the Experiential Nature of Media Richness Perceptions," Academy of Management Journal, Vol. 42, 1999, pp. 153-170.

Cockburn, A. "Characterizing People as Non-Linear, First-Order Components in Software Development," Humans and Technology Technical Report, TR 99.05, October 1999.


Cockburn, A. "Just-In-Time Methodology Construction," Humans and Technology Technical Report, TR 2000.01, 2000. Retrieved from: http://crystalmethodologies.org/articles/jmc/justintimemethodologyconstuction.html

Cohen, C. F., Birkin, S. J., Garfield, M. J. and Webb, H. W. "Managing Conflict in Software Testing: Lessons From the Field," Communications of the ACM, Vol. 47, No. 1, January 2004, pp. 76-81.

Cohen, J. Statistical Power Analysis for the Behavioral Sciences, Second Edition. Hillsdale, NJ: Erlbaum, 1988.

Conover, W. J. Practical Nonparametric Statistics, Third Edition. John Wiley & Sons, Inc.: New York, 1999, pp. 288-296.

Daft, R. and Lengel, R. "Information Richness: A new approach to managerial behavior and organization design." In B. M. Staw and L. L. Cummings (Eds.), Research in Organizational Behavior, Vol. 6. Greenwich, CT: JAI Press, 1984, pp. 191-233.

Daft, R., Lengel, R. and Trevino, L. "Message equivocality, media selection and manager performance: Implications for information systems," MIS Quarterly 11, 1987, pp. 355-366.

Davis, H. and Anderson, M. "Individual differences and development: one dimension or two?" In The Development of Intelligence, Studies in Developmental Psychology, edited by Anderson, M. Psychology Press Ltd.: Hove, East Sussex, UK, 1999.

DeMarco, T. and Lister, T. Peopleware: Productive Projects and Teams, 2nd Edition. Dorset House Publishing: New York, 1999, p. 154.

Dennis, A. R. and Garfield, M. J. "The Adoption and Use of GSS in Project Teams: Toward More Participative Processes and Outcomes," MIS Quarterly, Vol. 27, Issue 2, June 2003, p. 289.

DeSanctis, G. and Poole, M. S. "Transitions in Teamwork in New Organizational Forms," Advances in Group Processes (14), 1997, pp. 157-176.

Domino, M. A., Collins, R. W., Hevner, A. R. and Cohen, C. F. "Conflict in Collaborative Software Development," Proceedings of the 2003 ACM SIGCPR Conference, Philadelphia, Pennsylvania, April 2003.

Donaldson, S. E. and Siegel, S. G. Successful Software Development, 2nd Edition. Prentice Hall PTR: Upper Saddle River, NJ, 2001.

PAGE 177

Downs, C. W., Clampitt, P. G., and Pfeiffer, A. L. Communication and Organizational Outcomes, in Handbook of Organizational Communication, G. M. Goldhaber and G. A. Barnett (Eds.), Ablex Publishing, Norwood, NJ, 1988, pp. 171-211.

Dumaine, B. The Trouble with Teams, Fortune, September 5, 1994, pp. 86-.

Dunnette, M. D. and Hough, L. M. Handbook of Industrial and Organizational Psychology, Second Edition, Volume 2, Consulting Psychologists Press, Inc.: Palo Alto, California, 1991.

Firesmith, D. Take a flying leap: the plunge into object-oriented technology, American Programmer (5:8), 1992, pp. 17-27.

Fitzgerald, B. Systems Development Methodologies: Time to Advance the Clock. In Systems Development Methods for the Next Century, edited by Wojtkowski, G., Wojtkowski, W., Wrycza, S., and Zupancic, J., Plenum Press: New York and London, 1997.

Flor, N. and Hutchins, E. "Analyzing distributed cognition in software teams: A case study of team programming during perfective software maintenance." In J. Koenemann-Belliveau et al. (Eds.), Proceedings of the Fourth Annual Workshop on Empirical Studies of Programmers, Norwood, NJ: Ablex Publishing, 1991, pp. 36-59.

Flor, N. Dynamic Organization in Multi-Agent Distributed Cognitive Systems, PhD dissertation, University of California, San Diego, Department of Cognitive Science, 1994.

Foley, K. Knowledge management key to collaboration, InformationWeek, October 2001.

Fowler, M. Extreme Programming: What is a Lightweight Methodology? Retrieved from: http://www.extremeprogramming.org/light2.html, November 2000.

Friedman, A. Computer Systems Development: History, Organization and Implementation, Wiley and Sons, Chichester, 1989.

Garvin, D. Building a learning organization, Harvard Business Review, July-August 1993, pp. 78-91.

Ghezzi, C., Jazayeri, M., and Mandrioli, D. Fundamentals of Software Engineering, Englewood Cliffs, NJ: Prentice Hall, 1991.

Glass, R. L. Error-Free Software Remains Extremely Elusive, IEEE Software, January/February 2003.

Gopal, A., Bostrom, R., and Chin, W. W. Applying adaptive structuration theory to investigate the process of group support systems use, Journal of Management Information Systems (9:2), Winter 1992-1993, pp. 45-62.

Gorry, G. A. and Scott Morton, M. S. A Framework for Management Information Systems, Sloan Management Review, Spring 1989.

Greenberg, J. D. and Dickleman, G. Distributed Cognition: A Foundation for Performance Support, Performance Improvement, July 2000.

Griffin, R. W. Toward an integrated theory of task design, Research in Organizational Behavior, Vol. 9, JAI Press, Inc., 1987, pp. 79-120.

Hackett, R. D. Work Attitudes and Employee Absenteeism: A Synthesis of the Literature, paper presented at the 1988 National Academy of Management Conference, Anaheim, CA, August 1988.

Hackett, R. D. and Guion, R. M. A Reevaluation of the Absenteeism-Job Satisfaction Relationship, Organizational Behavior and Human Decision Processes, June 1985, pp. 340-381.

Hackman, J. R. and Lawler, E. Employee reactions to job characteristics, Journal of Applied Psychology (55), 1971, pp. 259-286.

Hackman, J. R. and Oldham, G. R. Motivation through the design of work: Test of a theory, Organizational Behavior and Human Performance (16), 1976, pp. 250-279.

Hackman, J. R. and Oldham, G. R. Work Redesign, Reading, MA: Addison-Wesley, 1980.

Hammer, M. and Champy, J. Reengineering the Corporation: A Manifesto for Business Revolution, New York: Harper Business, 1993.

Henderson-Sellers, B. and Edwards, J. Object-Oriented Knowledge: The Working Object, Prentice Hall: Sydney, 1994.

Herman, J. B. Are Situational Contingencies Limiting Job Attitude-Job Performance Relationships? Organizational Behavior and Human Performance, October 1973, pp. 208-24.

Herzberg, F., Mausner, B., and Snyderman, B. The Motivation to Work, New York: Wiley, 1959.

Hewitt, J. and Scardamalia, M. Design Principles for the Support of Distributed Processes. Paper presented in the symposium "Distributed Cognition: Theoretical and Practical Contributions" at the Annual Meeting of the American Educational Research Association, New York City, April 1996. http://csile.oise.utoronto.ca/abstracts/distributed/

Highsmith, J. A., III. Adaptive Software Development: A Collaborative Approach to Managing Complex Systems, Dorset House Publishing, New York, NY, 2000.

Holland, J. L. Making Vocational Choices: A Theory of Vocational Personalities and Work Environments, 2nd Ed., Prentice Hall: Englewood Cliffs, NJ, 1987.

Hough, D. Rapid delivery: an evolutionary approach for application development, IBM Systems Journal (32:3), 1993, pp. 397-419.

Huber, G. P. Organizational learning: the contributing processes and the literatures, Organization Science (2:1), 1991, pp. 88-.

Hutchins, E. Cognition in the Wild, Cambridge: MIT Press, 1995.

Hutchins, E. and Hollan, J. COGSCI: Distributed cognition syllabus, Retrieved from: http://hci.ucsd.edu/131/syllabus/index.html, 1999.

Hunter, J. E. Cognitive Ability, Cognitive Aptitudes, Job Knowledge, and Job Performance, Journal of Vocational Behavior (29), 1986, pp. 340-362.

Humphrey, W. S. Introduction to the Team Software Process, Addison-Wesley, 2000, pp. 293-294.

Jarvenpaa, S., Knoll, K., and Leidner, D. Is Anybody Out There? Antecedents of Trust in Global Virtual Teams, Journal of Management Information Systems (14), 1998, pp. 29-64.

Jarvenpaa, S. and Leidner, D. Communication and Trust in Global Virtual Teams, Organization Science, Winter 1999, pp. 791-815.

Jehn, K. A Multimethod Examination of the Benefits and Detriments of Intragroup Conflict, Administrative Science Quarterly (40:2), June 1995, pp. 256-282.

Jehn, K. "A Qualitative Analysis of Conflict Types and Dimensions in Organizational Groups," Administrative Science Quarterly (42:3), 1997, pp. 530-557.

Jex, S. M. Organizational Psychology: A Scientist-Practitioner Approach, John Wiley and Sons, Inc., 2002, pp. 65-90.

John Hancock Corporation, personal communication, Tampa, Florida, 2002.

Johnson-Laird, P. N. Mental Models. In M. Posner (Ed.), Foundations of Cognitive Science, Cambridge, MA: MIT Press, 1989, pp. 469-499.

Jordan, D., Blanton, J., Pettigrew, L., Cheney, P., Collins, R., and Nord, W. Information Systems Development: An Investigation of Leader Behaviors, Communication Competence and Communicator Style as Predictors of Project Leader Effectiveness, dissertation, May 1994.

Kemerer, C. F. Software Project Management: Readings and Cases, The McGraw-Hill Companies, Inc., 1997, p. 35.

Korabik, K., Baril, G., and Watson, C. Managerial conflict management style and leadership effectiveness: The moderating effects of gender, Sex Roles (29), 1993, pp. 405-420.

Lawler, E. E., III. Total quality management and employee involvement: Are they compatible? Academy of Management Executive (8:1), 1994, pp. 68-76.

Lawler, E. E., III and Cohen, S. G. Designing pay systems for teams, ACA Journal, Autumn 1992, pp. 6-18.

Leonard-Barton, D. The factory as a learning laboratory, Sloan Management Review, Fall 1992, pp. 23-38.

Lengel, R. and Daft, R. The selection of communication media as an executive skill, Academy of Management Executive (2:3), 1988, pp. 225-232.

LePine, J. A., Colquitt, J. A., and Erez, A. Adaptability to Changing Task Contexts: Effects of General Cognitive Ability, Conscientiousness, and Openness to Experience, Personnel Psychology (53:3), 2000, pp. 563-593.

Levitt, B. and March, J. G. Organizational learning, Annual Review of Sociology (14), 1988, pp. 319-340.

Lewin, K. "The Background of Conflict in Marriage." In G. W. Lewin and G. W. Allport (Eds.), Resolving Social Conflicts, New York: Harper and Brothers, 1948, pp. 84-102.

Lipnack, J. and Stamps, J. Virtual Teams: Reaching Across Space, Time and Organizations with Technology, New York: John Wiley and Sons, Inc., 1997, pp. 412-420.

Locke, E. The Nature and Causes of Job Satisfaction. In Job Satisfaction and Absenteeism: A Meta-Analytical Re-Examination, Canadian Journal of Administrative Sciences, June 1984, pp. 61-77.

Manz, C. C. and Sims, H. P., Jr. Business Without Bosses: How Self-Managing Teams Are Building High-Performing Companies, New York: John Wiley and Sons, 1993.

Manzo, J. The Odyssey and Other Code Science Success Stories, CrossTalk: The Journal of Defense Software Engineering (15:10), October 2002, pp. 22-24.

Markus, L. M. Information richness theory, managers and electronic mail. Paper presented at the annual meeting of the Academy of Management, Anaheim, CA, 1988.

McBreen, P. Questioning Extreme Programming, Addison-Wesley, 2003.

McClave, J. and Benson, P. Statistics for Business, Fifth Edition, Dellen Publishing Company, San Francisco, 1991, pp. 123-125.

McConnell, S. Rapid Development: Taming Wild Software Schedules, Redmond, Washington: Microsoft Press, 1996.

McDaniel, M. A., Schmidt, F. L., and Hunter, J. E. Job experience correlates of job performance, Journal of Applied Psychology (73), 1988, pp. 327-353.

McGourty, J. and De Meuse, K. P. The Team Developer: An Assessment and Skill Building Program, John Wiley and Sons, Inc., 2001.

McGrath, J. E. Groups: Interaction and Performance, Englewood Cliffs, NJ: Prentice-Hall, 1984.

Mendenhall, W. and Sincich, T. A Second Course in Statistics: Regression Analysis, Upper Saddle River, NJ: Prentice Hall, 1996.

Milliken, F. and Martins, L. "Searching for Common Threads: Understanding the Multiple Effects of Diversity in Organizational Groups," Academy of Management Review (21:2), April 1996, pp. 402-433.

Murphy, K. R. "Is the relationship between cognitive ability and job performance stable over time?" Human Performance (2), 1989, pp. 183-200.

Nardi, B. A. Studying context: A comparison of activity theory, situated action models and distributed cognition. In B. A. Nardi (Ed.), Context and Consciousness: Activity Theory and Human-Computer Interaction, Cambridge: MIT Press, 1996.

Nardi, B. A. Concepts of cognition and consciousness: Four voices, Journal of Computer Documentation (22), 1998, pp. 31-48.

Nawrocki, J. and Wojciechowski, A. Experimental Evaluation of Pair Programming. In K. Maxwell, S. Oligny, R. Kusters, and E. van Veenendaal (Eds.), Project Control: Satisfying the Customer (Proceedings of the 12th European Software Control and Metrics Conference, ESCOM 2001, London, April 2001), Shaker Publishing, 2001, pp. 269-276.

Neisser, U. Cognitive Psychology, New York: Appleton-Century-Crofts, 1967.

Newell, A., Rosenbloom, P. S., and Laird, J. E. Symbolic architectures for cognition. In M. Posner (Ed.), Foundations of Cognitive Science, Cambridge, MA: MIT Press, 1989, pp. 93-131.

Newell, A. and Simon, H. A. Human Problem Solving, Englewood Cliffs, NJ: Prentice-Hall, 1972.

Newman, M. and Robey, D. "A Social Process Model of User-Analyst Relationships," MIS Quarterly (16:2), June 1992, pp. 249-266.

Norman, D. The Psychology of Everyday Things, New York: Basic Books, 1988.

Norman, D. Cognitive artifacts. In J. Carroll (Ed.), Designing Interaction: Psychology at the Human-Computer Interface, New York: Cambridge University Press, 1991.

Nosek, J. The Case for Collaborative Programming, Communications of the ACM (41:3), March 1998, pp. 105-108.

Pedhazur, E. J. and Schmelkin, L. P. Measurement, Design and Analysis: An Integrated Approach, Hillsdale, NJ: Lawrence Erlbaum Associates, 1991, p. 277.

Pelled, L. H. "Demographic Diversity, Conflict, and Work Group Outcomes: An Intervening Process Theory," Organization Science (7:6), 1996, pp. 615-631.

Perry, M. J. Distributed Cognition and Computer Supported Collaborative Design: The Organization of Work in Construction Engineering, dissertation, April 1997; Retrieved from: http://www.brunel.ac.uk/~cssrmjp/MP_thesis/Ch3.pdf, 2002, pp. 44-65.

Petty, M. M., McGee, G. W., and Cavender, J. W. A Meta-analysis of the Relationship Between Individual Job Satisfaction and Individual Performance, Academy of Management Review, October 1984, pp. 712-21.

Pinsonneault, A., Barki, H., Gallupe, R. B., and Hoppen, N. Electronic brainstorming: The illusion of productivity, Information Systems Research (10:2), June 1999, p. 110.

Plowman, L. The interfunctionality of talk and text, Computer Supported Cooperative Work (3), 1995, pp. 229-246.

Poole, M. S. and DeSanctis, G. Use of group decision support systems as an appropriation process, Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences, January 1989.

Poole, M. S. and DeSanctis, G. Understanding the use of group decision support systems: the theory of adaptive structuration. In C. W. Steinfield and J. Fulk (Eds.), Organizations and Communication Technology, Newbury Park, CA: Sage, 1990, pp. 173-193.

Posner, M. Foundations of Cognitive Science, Cambridge, MA: MIT Press, 1989.

Pressman, R. S. Software Engineering: A Practitioner's Approach, Third Edition, McGraw-Hill, Inc., 1992, pp. 23-27.

Quinones, M. A., Ford, J. K., and Teachout, M. S. The relationship between work experience and job performance: A conceptual and meta-analytic review, Personnel Psychology (48), 1995, pp. 887-910.

Rahim, M. Rahim Organizational Conflict Inventory-II, Palo Alto, CA: Consulting Psychologists Press, 1983a.

Rahim, M. A measure of styles of handling interpersonal conflict, Academy of Management Journal (26), 1983b, pp. 368-376.

Rahim, M. and Bonoma, T. Managing organizational conflict: a model for diagnosis and intervention, Psychological Reports (44), 1979, pp. 1323-1344.

Rice, R. and Shook, D. Voice messaging, coordination and communication. In J. Galegher, R. Kraut, and C. Egido (Eds.), Intellectual Teamwork: Social and Technological Foundations of Cooperative Work, Hillsdale, NJ: Erlbaum, 1989, pp. 327-350.

Riehle, D. A Comparison of the Value Systems of Adaptive Software Development and Extreme Programming: How Methodologies May Learn from Each Other, SKYVA International, Retrieved from: http://www.skyva.com, 2000.

Robbins, S. P. Organizational Behavior: Concepts, Controversies, Applications, Fourth Edition, Upper Saddle River, NJ: Simon and Schuster, 1998.

Roschelle, J. and Teasley, S. The construction of shared knowledge in collaborative problem solving. In C. O'Malley (Ed.), Computer-Supported Collaborative Learning, Springer-Verlag, Heidelberg, 1994, pp. 69-.

Rogers, Y. and Ellis, J. Distributed Cognition: an alternative framework for analyzing and explaining collaborative working, Journal of Information Technology (9:2), 1994, pp. 119-128.

Rogers, Y. A brief introduction to distributed cognition, Retrieved from: http://www.cogs.susx.ac.uk/useres/yvonner/dcog.html, 1997.

Salisbury, D., Chin, W., Gopal, A., and Newsted, P. Research Report: Better Theory Through Measurement: Developing a Scale to Capture Consensus on Appropriation, Information Systems Research, March 2002.

Salomon, G. Distributed Cognitions: Psychological and Educational Considerations, New York: Cambridge University Press, 1993.

Schmidt, F. L. and Hunter, J. E. The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings, Psychological Bulletin (124), 1998, pp. 262-274.

Schmidt, F., Hunter, J. E., and Pearlman, K. Task Differences and the Validity of Aptitude Tests in Selection: A Red Herring, Journal of Applied Psychology (66), 1981, pp. 166-185.

Schmidt, F. L., Hunter, J. E., and Outerbridge, A. N. The impact of job experience and ability on job knowledge, work sample performance and supervisory ratings of performance, Journal of Applied Psychology (71), 1986, pp. 432-439.

Schmitt, N., Rogers, W., Chan, D., Sheppard, L., and Jennings, D. Adverse Impact and Predictive Efficiency of Various Predictor Combinations, Journal of Applied Psychology (37), 1997, pp. 717-730.

SEI (Software Engineering Institute), CMMI, Retrieved from: http://www.sei.cmu.edu/cmmi/presentations/euro-sepg-tutorial/sld014.htm, 2002.

Sillince, J. A. A model of social, emotional and symbolic aspects of computer-mediated communication within organizations, Computer Supported Cooperative Work (4), 1996, pp. 1-31.

Simon, H. A. and Kaplan, C. A. Foundations of cognitive science. In M. Posner (Ed.), Foundations of Cognitive Science, Cambridge, MA: MIT Press, 1989, pp. 1-47.

Straus, D. How to Make Collaboration Work: Powerful Ways to Build Consensus, Solve Problems and Make Decisions, Berrett-Koehler Publishers, Inc., San Francisco, 2002.

StatSoft, Retrieved from: http://www.statsoftinc.com/textbook/stnonpar.html, 2003.

Taggar, S., Hackett, R., and Saha, S. Leadership Emergence in Autonomous Work Teams: Antecedents and Outcomes, Personnel Psychology (52), 1999, pp. 899-926.

Taylor, F. W. The Principles of Scientific Management, New York: Harper and Row, 1911.

Tesluk, P. E. and Jacobs, R. R. Toward an integrated model of work experience, Personnel Psychology (51), 1998, pp. 321-355.

Thompson, L. Making the Team: A Guide for Managers, Upper Saddle River, NJ: Prentice Hall, 2000.

Trembly, A. C. Software bugs cost billions, study says, National Underwriter, July 29, 2002.

Trevino, L., Lengel, R., Bodensteiner, W., Gerloff, E., and Muir, N. The richness imperative and cognitive style, Management Communication Quarterly (4), 1990, pp. 176-197.

Triandis, H. C., Dunnette, M. D., and Hough, L. M. (Eds.), Handbook of Industrial and Organizational Psychology, Second Edition, Volume 4, Consulting Psychologists Press, Inc.: Palo Alto, California, 1994.

Trimmer, K., Blanton, J. E., and Collins, R. W. Information Systems Development: Can There Be "Good" Conflict? Proceedings of the 2000 ACM SIGCPR Conference, Chicago, Illinois, April 6-8, 2000, pp. 174-179.

Venkatesh, A. and Vitalari, N. An emerging distributed work arrangement: an investigation of computer-based supplemental work at home, Management Science (38:12), 1992, pp. 1687-1706.

Vroom, V. Work and Motivation, New York: Wiley, 1964.

Iaffaldano, M. T. and Muchinsky, P. M. Job Satisfaction and Job Performance: A Meta-Analysis, Psychological Bulletin, March 1985, pp. 251-73.

Waldman, D. A. and Spangler, W. D. Putting together the pieces: A closer look at the determinants of job performance, Human Performance (2), 1989, pp. 29-59.

Walker, C. R. and Guest, R. The Man on the Assembly Line, Cambridge, MA: Harvard University Press, 1952.

Webster, J. and Trevino, L. Rational and social theories as complementary explanations of communication choices: Two policy-capturing studies, Academy of Management Journal (38), 1995, pp. 1544-1572.

Wellins, R. S., Byham, W. C., and Wilson, J. M. Empowered Teams: Creating Self-Directed Work Groups That Improve Quality, Productivity and Participation, San Francisco: Jossey-Bass, 1991.

Williams, L., Kessler, R., Cunningham, W., and Jeffries, R. Strengthening the Case for Pair Programming, IEEE Software (17:4), July/August 2000, pp. 19-25.

Wonderlic, Inc. Wonderlic Personnel Test and Scholastic Level Exam User's Manual, Libertyville, IL: Author, 1999.

Wood, L. J. Practice, ability and expertise in computer programming, Dissertation Abstracts International: Section B: The Sciences and Engineering (60:9-B), US: University Microfilms International, April 2000, p. 4947.

Appendices

Appendix A
Informed Research Consent Form

You are invited to participate in a study about how information systems are developed. The following information is being presented to help you decide whether or not you want to be a part of this minimal-risk research study. Please read carefully. If you do not understand anything, please ask Madeline Domino (the person in charge of this study, who is at the front of the room).

The title of the study is "Investigation of Testing Impacts of Pairs in Software Testing." The principal investigators are Madeline Domino and Al Hevner. The location of this study is the CIS building on the University of South Florida campus.

You are being asked to participate because you have experience as a systems developer. In the study you will be asked to program part of an information system, but in pairs instead of alone.

The purpose of the study is to find out if there are differences in your satisfaction with programming in pairs, and whether programming in pairs results in fewer errors in the program code produced. In addition, we will ask you some questions about how you usually approach problems and resolve conflict, as well as your general life attitudes. After you complete the programming task, we will ask you about how tiring the task was, and how capable you believe you are to program in the pair setting.

If you agree to participate, you will be asked to do the following:
1. Complete a questionnaire about your approach to problems, how you typically resolve conflict, and your general life attitudes
2. Complete a short training program on how to program in pairs
3. Work with another systems developer on programming part of an information system
4. Complete a questionnaire about your perception of how tiring the pair programming task was, and how capable you believe you are to do programming in pairs
5. Agree to be videotaped while you program

The entire process should take approximately four hours, with a half-hour break after the training session.

Appendix A (Continued)

If you agree to participate in this study, you will receive a certificate from the IS/DS Department at USF that indicates that you have completed a pair programming training program. Because of the increased interest in industry in the pair programming setting for systems development, we believe that this knowledge and experience will be useful to you in your career.

There are no risks in participating in the study, and you may withdraw from the study at any time.

All information collected in this study will be kept strictly confidential, and your name will not be associated with any paper or video materials. All information will be coded with a number that is not associated with your name, and will be housed in locked file cabinets in the IS/DS Department of USF. Your privacy and research records will be kept confidential to the extent of the law. Authorized research personnel, employees of the Department of Health and Human Services and the USF Institutional Review Board may inspect the records from this research project.

The results of this study may be published. However, the data obtained from you will be combined with data from other people in the publication. The published results will not include your name or any other information that would in any way personally identify you.

Participation in this study is voluntary. You are free to participate in this research study or to withdraw at any time. If you choose not to participate, or if you withdraw, there will be no penalty or loss of benefits that you are entitled to receive. If you have any questions about the study or research subjects' rights, please contact Madeline Domino (813-974-6753) or Al Hevner (813-974-6765). If you have questions about your rights as a person who is taking part in a research study, you may contact a member of the Division of Research Compliance of the University of South Florida at 813-974-5638.

Your Consent: By signing this form I agree that:
I have fully read, or have had read and explained to me, this informed consent form describing a research project.
I have had the opportunity to question one of the persons in charge of this research and have received satisfactory answers.
I understand that I am being asked to participate in research. I understand the risks and benefits, and I freely give my consent to participate in the research project outlined in this form, under the conditions indicated in it.
I have been given a signed copy of this informed consent form, which is mine to keep.

______________________          ________________________          ___________
Signature of Participant        Printed Name of Participant       Date

Appendix A (Continued)

Investigator Statement

I have carefully explained to the subject the nature of the above protocol. I hereby certify that to the best of my knowledge the subject signing this consent form understands the nature, demands, risks and benefits involved in participating in this study.

_________________________________          ________________________          _______
Signature of Investigator or Authorized    Printed Name of Investigator      Date
Research Investigator Designated by
the Principal Investigator

Institutional Approval of Study and Informed Consent

This research project/study and informed consent form were reviewed and approved by the University of South Florida Institutional Review Board for the protection of human subjects. This approval is valid until the date provided below. The board may be contacted at (813) 974-5638.

Approval Consent Form Expiration Date: _______________
Revision Date: _______________

Appendix B
Task I: Compute Mowing Time

This is a module to compute the time required to cut grass around houses, based on input of the time required to mow a square yard and the length and width dimensions of the house and the lot. Therefore this module must allow for reading in five variables:

Lot_length (in yards)
Lot_width (in yards)
House_length (in yards)
House_width (in yards)
Mowing_time (number of square yards per minute)

Based on this input, the module should compute and display the time required to mow the grass around a house. In this task, you are given the unit test cases and pseudocode (as examples), and your job is to check them both for errors. Any errors you find should be written on this sheet. In the other two tasks, you will be asked to write the pseudocode and create the test cases.

a. Complete the Expected Results part of this unit test data set for this module, and use it to check the pseudocode for errors.

INPUT DATA          1    2    3    4    5    6    7    8    9   10
Lot_length         30   40   50   30   35    0   50   35   35   40
Lot_width          30   20   60   40   45   30    0   36   40   50
House_length       20   20   30   29   36   20   25   25    0   25
House_width        20   10   40   20   36   20   20   40   25    0
Mowing_time         2    0    3    2    2    4    3    3    4    3
EXPECTED RESULTS  250  ___  ___  ___  ___  ___  ___  ___  ___  ___

Appendix B (Continued)

b. Check the accuracy of the pseudocode for this module and make any necessary changes.

Calculate_mowing_time
    Prompt operator for lot_length, lot_width
    Get lot_length, lot_width
    Set lot_area = lot_length * lot_width
    Prompt operator for house_length, house_width
    Get house_length, house_width
    Set house_area = house_length * house_width
    Set mowing_area = lot_area - house_area
    Prompt operator for mowing_time
    Get mowing_time
    Set mowing_time = mowing_area / mowing_time
    Output mowing_time
END
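For readers who want to replay Task I outside the laboratory, the short Python sketch below (not part of the original study materials) implements the module as specified and can be used to fill in the Expected Results row. Treating zero-valued inputs as invalid is an assumption on my part; the task sheet plants zeros in the test data precisely so that subjects must decide how the module should behave.

    # Reference implementation of Task I (illustrative sketch, not study material).
    def mowing_time(lot_length, lot_width, house_length, house_width, rate):
        # rate is the number of square yards mowed per minute, per the task sheet.
        # Zero dimensions or a zero rate are treated here as invalid input; this
        # mirrors the edge cases planted in test cases 2, 6, 7, 9 and 10, but how
        # the module should react to them is exactly what subjects must decide.
        if rate == 0 or 0 in (lot_length, lot_width, house_length, house_width):
            return None
        mowing_area = lot_length * lot_width - house_length * house_width
        return mowing_area / rate

    # Test case 1 from the data set: 30x30 lot, 20x20 house, 2 sq yd per minute.
    assert mowing_time(30, 30, 20, 20, 2) == 250  # matches the given expected result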


Appendix C
Task II: Discount Invoice Module

This module is part of the invoice processing program for a retailer. In this module, any discounts for the customer are computed. At this retailer, there are two ways to earn a discount: (1) a total-purchase-amount discount, if the total amount of purchases, pre-tax, is greater than an established amount; and (2) a product-specific discount, if the total number of purchases of designated products is greater than a second, set number of purchases. This second type of discount is given to encourage sales of certain products.

If the customer's purchases earn both types of discounts, the second, product-specific discount is computed first. If the second discount reduces the sale to below the set amount for the first discount, the customer does not get the first, total-purchase-amount discount.

These values are passed to this module:

Total1 = pre-tax total of the prices of one or more items purchased on the invoice
Total_num = total number of purchases of designated products
Total2 = total purchase price of purchases of designated products
Discount_level1 = established amount of total purchases required to earn a discount
Discount_level2 = established number of purchases of designated products required to earn a discount
Discount1 = percentage discount for total-purchase-amount discounts
Discount2 = percentage discount for product-specific discounts

You can assume that the values passed to this module do not include negative numbers or zeros, since the input is checked in the other module.

This module should return these values:

Total_discount1 = amount of total-purchase-amount discount
New_total1 = new total purchase amount (that reflects the discount(s))
Total_discount2 = amount of product-specific-amount discounts

a. Prepare a test data set with at least 10 cases for this module.
b. Write the pseudocode for this module and check it for accuracy.

END
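As a worked illustration of the discount ordering described above (again a sketch, not part of the study materials), the Python code below applies the product-specific discount first and only then re-checks the total-purchase threshold. Two assumptions here are mine: the percentage parameters arrive as fractions (0.10 for 10%), and Discount1 is applied to the already-reduced total; the task brief leaves that second point open, which is part of what makes it a good test-design exercise.

    # Illustrative sketch of the Task II discount rules (not study material).
    # Assumes Discount1 and Discount2 arrive as fractions, e.g. 0.10 for 10%.
    def compute_discounts(total1, total_num, total2,
                          discount_level1, discount_level2,
                          discount1, discount2):
        # Product-specific discount is computed first, per the task rules.
        total_discount2 = 0.0
        if total_num > discount_level2:
            total_discount2 = total2 * discount2
        new_total1 = total1 - total_discount2

        # The total-purchase discount is earned only if the total, after the
        # product-specific discount, still exceeds the first threshold.
        total_discount1 = 0.0
        if new_total1 > discount_level1:
            total_discount1 = new_total1 * discount1
            new_total1 = new_total1 - total_discount1

        return total_discount1, new_total1, total_discount2

    # Example: a $500 invoice with 6 designated items worth $200, thresholds of
    # $400 and 5 purchases, 10% and 20% discounts. The $40 product discount
    # leaves $460, which still earns the total-purchase discount.
    print(compute_discounts(500, 6, 200, 400, 5, 0.10, 0.20))  # (46.0, 414.0, 40.0)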


Appendix D
Task III: Sales Report Module

This module produces a sales report for a car dealership. For each car sold, the following information is stored in the SALES file, which is sorted by date and time of a car sale.

Variable Name   Description               Type and Format
Auto_ID         Automobile ID number      15 alphanumeric
Sale_Date       Date of Sale              date field
Sale_Time       Time of Sale              HH:MM
Make            Make of Car               10 alphanumeric
Model           Model of Car              10 alphanumeric
Veh_Type        Vehicle Type              5 alphanumeric
S_Name          Salesperson Name          20 character (formatted: last name, space, first name, space, middle initial)
S_Comm          Salesperson Commission    (2) numeric
S_Price         Sales Price               6 (2) numeric, dollar format
Del_Date        Delivery Date             date field

The sales manager wants a report on the first day of each month that lists total sales and commissions for each salesperson. There should be just one line for each salesperson, listing name, average commission, total sales for the previous month, and total commission amount. At the end of this list there should be a grand total of all sales and all commissions earned.

a. For this module, the test data should be a SALES file with many records. Describe the test records you would put into the SALES file in terms of the number of records and their content.
b. Write the pseudocode for this module and check it for accuracy.

END
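To make the required aggregation concrete, here is an illustrative Python sketch (not part of the study materials) that produces one line per salesperson plus the grand totals. The record layout is reduced to the three fields the report needs, and treating S_Comm as a commission percentage applied to S_Price is my assumption, since the task sheet does not define how commission amounts are derived.

    # Illustrative sketch of the Task III report aggregation (not study material).
    from collections import defaultdict

    def monthly_sales_report(sales_records):
        # sales_records: iterable of (s_name, s_comm_pct, s_price) tuples,
        # already filtered to the previous month's sales.
        totals = defaultdict(lambda: {"sales": 0.0, "comm": 0.0, "pcts": []})
        for name, comm_pct, price in sales_records:
            t = totals[name]
            t["sales"] += price
            t["comm"] += price * comm_pct / 100.0  # assumed commission rule
            t["pcts"].append(comm_pct)

        lines, grand_sales, grand_comm = [], 0.0, 0.0
        for name in sorted(totals):  # one line per salesperson
            t = totals[name]
            avg_pct = sum(t["pcts"]) / len(t["pcts"])
            lines.append((name, avg_pct, t["sales"], t["comm"]))
            grand_sales += t["sales"]
            grand_comm += t["comm"]
        lines.append(("GRAND TOTAL", None, grand_sales, grand_comm))
        return lines

    # Example: two sales for Jones, one for Smith.
    print(monthly_sales_report([("Jones A B", 5, 20000.0),
                                ("Jones A B", 10, 10000.0),
                                ("Smith C D", 5, 30000.0)]))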


Appendix E
Questionnaire Study 1: Rahim Interpersonal Conflict Inventory

1. I try to investigate an issue with my boss to find a solution acceptable to us.
2. I generally try to satisfy the needs of my peers.
3. I attempt to avoid being put on the spot and try to keep conflict with my peers to myself.
4. I try to integrate my ideas with the ideas of others to come up with a joint decision.
5. I give up something in order to get something else.
6. I try to work with others to find solutions to problems that satisfy both our expectations.
7. I usually avoid open discussion of differences or disagreements with others.
8. I usually hold on to my solution to a problem.
9. I try to find a middle course to resolve an impasse.
10. I use my influence to get my ideas accepted.
11. If possible, I use authority to make a decision go in my favor.
12. I usually accommodate the wishes of others.
13. I give in to the wishes of my boss.
14. I win some and I lose some.
15. I exchange accurate information with others in order to solve a problem together.
16. I sometimes will help a decision to be made in favor of others.

Appendix E (Continued)

17. I usually make concessions to others.
18. I argue my case in order to show the merits of my position.
19. I try to play down our differences to reach a compromise with others.
20. I usually propose a middle ground for breaking deadlocks.
21. I negotiate with my boss so a compromise can be reached.
22. I try to stay away from disagreeing with my boss.
23. I avoid unpleasant encounters with others.
24. I use my expertise to make decisions in my favor.
25. I often go along with the suggestions of others.
26. I use give and take so that a compromise can be made.
27. When I disagree with my boss, I am generally firm in pursuing my side of the issue.
28. I try to bring everyone's concerns out in the open so that the issues can be resolved in the best possible way.
29. I collaborate with others to come up with decisions acceptable to us.
30. I try to satisfy the expectations of others.
31. I sometimes use my power to win a competitive situation.
32. I try to keep my disagreement with my boss to myself in order to avoid hard feelings.
33. I try to avoid unpleasant exchanges with others.
34. I generally avoid an argument with my boss.
35. I try to work with others to get a proper understanding of the problem.

Appendix E (Continued)

INTERPERSONAL CONFLICT INVENTORY

Each question should be answered on a five-point scale ranging from Rarely (1) to Always (5).

      Rarely    Always          Rarely    Always
 1)   1 2 3 4 5          21)    1 2 3 4 5
 2)   1 2 3 4 5          22)    1 2 3 4 5
 3)   1 2 3 4 5          23)    1 2 3 4 5
 4)   1 2 3 4 5          24)    1 2 3 4 5
 5)   1 2 3 4 5          25)    1 2 3 4 5
 6)   1 2 3 4 5          26)    1 2 3 4 5
 7)   1 2 3 4 5          27)    1 2 3 4 5
 8)   1 2 3 4 5          28)    1 2 3 4 5
 9)   1 2 3 4 5          29)    1 2 3 4 5
10)   1 2 3 4 5          30)    1 2 3 4 5
11)   1 2 3 4 5          31)    1 2 3 4 5
12)   1 2 3 4 5          32)    1 2 3 4 5
13)   1 2 3 4 5          33)    1 2 3 4 5
14)   1 2 3 4 5          34)    1 2 3 4 5
15)   1 2 3 4 5          35)    1 2 3 4 5
16)   1 2 3 4 5
17)   1 2 3 4 5
18)   1 2 3 4 5
19)   1 2 3 4 5
20)   1 2 3 4 5

Appendix E (Continued)

(Note: All instruments scored by independent raters.)

SCORING KEY: INTERPERSONAL CONFLICT INVENTORY

A) INTEGRATING
Total your responses to the following questions:
1 ____ + 4 ____ + 6 ____ + 15 ____ + 28 ____ + 29 ____ + 35 ____ = ________
Divide total by 7 = _____

B) AVOIDING
Total your responses to the following questions:
3 ____ + 7 ____ + 22 ____ + 23 ____ + 32 ____ + 33 ____ + 34 ____ = ________
Divide total by 7 = _____

C) COMPETING (Dominating)
Total your responses to the following questions:
10 ____ + 11 ____ + 24 ____ + 27 ____ + 31 ____ = _____
Divide total by 5 = ____

Appendix E (Continued)

D) OBLIGING
Total your responses to the following questions:
2 ____ + 12 ____ + 13 ____ + 17 ____ + 25 ____ + 30 ____ = ______
Divide total by 6 = ______

E) COMPROMISING
Total your responses to the following questions:
9 ____ + 20 ____ + 21 ____ + 26 ____ = _______
Divide total by 4 = _____
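Because the key above is plain averaging, the five style scores can be computed mechanically. The sketch below is a convenience for readers, not part of the original materials; the item groupings are copied directly from the scoring key.

    # Subscale scoring for the conflict inventory (sketch; groupings from the key).
    SUBSCALES = {
        "integrating":  [1, 4, 6, 15, 28, 29, 35],
        "avoiding":     [3, 7, 22, 23, 32, 33, 34],
        "dominating":   [10, 11, 24, 27, 31],
        "obliging":     [2, 12, 13, 17, 25, 30],
        "compromising": [9, 20, 21, 26],
    }

    def score(responses):
        # responses: dict mapping item number (1-35) to a 1-5 rating.
        # Returns the mean rating for each conflict-handling style.
        return {name: sum(responses[i] for i in items) / len(items)
                for name, items in SUBSCALES.items()}

    # Example with a flat response of 3 on every item: each subscale averages 3.0.
    print(score({i: 3 for i in range(1, 36)}))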


Appendix F
Questionnaire Study 1: Satisfaction Scale

Satisfaction is a positive rather than negative affective response to an individual's job or job-related experience. (Adapted from Venkatesh, A. and Vitalari, N. An emerging distributed work arrangement: an investigation of computer-based supplemental work at home, Management Science (38:12), 1992, pp. 1687-1706, and Watson-Fritz, M. B., Narasimham, S., and Rhee, H. K. The impact of remote work on informal organizational communication, Proceedings of the Telecommuting Conference, April 25-26, 1996, Jacksonville, FL.)

Answered on a 7-point Likert scale, with Strongly Disagree (SD = 1) to Strongly Agree (SA = 7) anchors.

                                                                    SD             SA
I am satisfied with the pair programming work setting.              1 2 3 4 5 6 7
The pair programming work setting allows me to get help from
my partner when needed.                                             1 2 3 4 5 6 7
The pair programming work setting allows me to feel like I
belong to the development team.                                     1 2 3 4 5 6 7
I am not satisfied with pair programming.                           1 2 3 4 5 6 7
I do not believe that the pair programming setting allows me
to get help when I need it.                                         1 2 3 4 5 6 7
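The appendix does not state how the five items are combined into a single satisfaction score. A common approach, shown in the sketch below purely as an assumption, is to reverse-code the two negatively worded items (the fourth and fifth) and average all five.

    # Composite satisfaction score (sketch; reverse-coding of items 4 and 5
    # is an assumption, since the appendix does not state the scoring rule).
    REVERSED = {4, 5}  # the two negatively worded items

    def satisfaction_score(ratings):
        # ratings: list of five 1-7 responses, in the order the items appear.
        adjusted = [8 - r if i + 1 in REVERSED else r
                    for i, r in enumerate(ratings)]
        return sum(adjusted) / len(adjusted)

    print(satisfaction_score([6, 7, 5, 2, 1]))  # -> 6.2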


Appendix G
Study 1, Phase 1 Template

PAIR PROGRAMMING OVERALL EVALUATION

TEAM _____  TASK _____  DRIVER, SUBJ # ____  NAVIGATOR, SUBJ # _____

                                     NOT            VERY
FAITHFULNESS TO PAIR PROGRAMMING     1  2  3  4  5
Comments:

EQUAL INFLUENCE OR DOMINANCE OF ONE OR THE OTHER?
Comments:

WORK PATTERN:                                                   LEAST       MOST
READ TASK FIRST, THEN PLANNED AND WORKED TOGETHER THROUGHOUT    1  2  3  4  5
READ TASK AND DO PRELIMINARY WORK ALONE, THEN COMBINE           1  2  3  4  5
DIVIDE THE TASKS AND WORK SEPARATELY                            1  2  3  4  5

Comments / Transcriptions:

Appendix G (Continued)
Study 1, Phase 1 Template

PAIR PROGRAMMING CONFLICT EPISODE FORM

TEAM _____  TASK _____  DRIVER, SUBJ # ____  NAVIGATOR, SUBJ # _____
TAPE TIME IN _____  TIME OUT _____

CONFLICT OVER WHAT? (Check one)  TASK _____  INTERPERSONAL _____
Comments:

HOW WAS CONFLICT RESOLVED? (Check all that apply, indicate sequence)
                DRIVER    NAVIGATOR
INTEGRATING     ______    ______
OBLIGING        ______    ______
DOMINATING      ______    ______
AVOIDING        ______    ______
COMPROMISING    ______    ______
Comments:

WAS CONFLICT RESOLVED? (Check one)  YES _____  NO _____
If so, how? Comments:

Input from facilitator? (If so, indicate number of times.)

Appendix H
Initial Questionnaire Study 2

(Note: Questions 1-6 and 73-105 used in dissertation)

Instructions: Please respond to the following items as accurately as possible. This data will be kept CONFIDENTIAL. Place all answers on the optical scanning sheet (Scantron) provided. Please DO NOT MARK on the question sheets. Thank you.

General Information:
Name: please indicate your instructor, time of class and term (leaving no spaces)
Sex: please indicate the following: Male or Female
Grade or Education: SKIP THIS ITEM
Birth date: please indicate the following: Month, Day, Year
Identification number: please put your SUBJECT ID in this space
Special Code: SKIP THIS ITEM

Work Background Information:

1) How many years of full-time work experience do you have?
0) None  1) Less than 1 year  2) 1-4 years  3) 5-7 years  4) More than 7

2) How many years of IT work experience do you have?
0) None  1) Less than 1 year  2) 1-4 years  3) 5-7 years  4) More than 7

3) How many years of programming experience do you have?
0) None  1) Less than 1 year  2) 1-4 years  3) 5-7 years  4) More than 7

4) In which of the following languages do you have THE MOST experience?
0) C/C++  1) Visual Basic  2) Java  3) COBOL  4) Fortran  5) Other

Appendix H (Continued)

5) How many IT jobs have you held?
0) None  1) 1  2) 2-3  3) 5-7  4) More than 7

6) If working, what is your current IT job title?
0) Help Desk  1) Developer  2) Tech Support  3) Analyst  4) Consultant  5) Other  6) Not applicable, not working

Appendix H (Continued)

Information about YOUR Communication with Others:

The phrases listed below describe people's behaviors relative to communication with others in work situations. If you have not worked, think about a team project that you have worked on in school when answering these questions. Please describe yourself as you generally are now, not as you wish to be in the future. Describe yourself as you honestly see yourself, in relation to other people you know and interact with. Use the rating scale below to describe how accurately each statement describes YOU. Please note that zero (0) is NOT an appropriate answer.

Disagree Strongly (1), Disagree Moderately (2), Disagree Slightly (3), Agree Slightly (4), Agree Moderately (5), Agree Strongly (6)

7) Finds it easy to get along with others
8) Can adapt to changing situations
9) Treats people as individuals
10) Interrupts others who talk too much
11) Is rewarding to talk to
12) Can deal with others effectively
13) Is a good listener
14) Work relations are cold and distant
15) Have some nervous mannerisms in my speech
16) Is a very relaxed communicator
17) When I disagree with somebody, is very quick to challenge them
18) Can always repeat back to a person exactly what was meant
19) Is a very precise communicator
20) Leaves a definite impression on people
21) Rhythm or flow of my speech is sometimes affected by my nervousness
22) Under pressure, comes across as a relaxed speaker
23) My eyes reflect exactly what I am feeling when I communicate
24) Dramatizes a lot
25) Finds it very easy to communicate on a one-to-one basis with strangers
26) Usually, deliberately reacts in such a way that people know that I am listening to them
27) Usually does not tell people much about me until I get to know them well
28) Regularly tells jokes, anecdotes and stories when I communicate
29) Tends to constantly gesture when I communicate
30) Is an extremely open communicator
31) In a small group of strangers is a very good communicator
32) In arguments I insist upon very precise definitions
33) In most work situations I generally speak very frequently
34) Finds it extremely easy to maintain a conversation with coworkers I have just met
35) Likes to be strictly accurate when I communicate
36) Often I physically and vocally act out what I want to communicate
37) Readily reveals personal things about myself at work

Appendix H (Continued)

Information about YOUR Communication with Others, continued:

Use the rating scale below to describe how accurately each statement describes YOU. Please note that zero (0) is NOT an appropriate answer.

Disagree Strongly (1), Disagree Moderately (2), Disagree Slightly (3), Agree Slightly (4), Agree Moderately (5), Agree Strongly (6)

38) Is dominant in work situations
39) Is very argumentative at work
40) Once I get wound up in a heated discussion at work, I have a hard time stopping myself
41) Is always an extremely friendly communicator
42) Really likes to listen very carefully to people
43) Very often insists that other people document or present some kind of proof for what they are arguing
44) Tries to take charge of things when I am with people
45) It bothers me to drop an argument that is not resolved
46) In most work situations I tend to come on strong
47) Is very expressive nonverbally in work situations
48) The way I say something usually leaves an impression on people
49) Whenever I communicate, I tend to be very encouraging to people
50) Actively uses a lot of facial expressions when I communicate
51) Very frequently verbally exaggerates to emphasize a point
52) Is an extremely attentive communicator
53) As a rule, I openly express my feelings and emotions

Appendix H (Continued)

Information about how you perceive how YOU work:

Use the rating scale below to describe how accurately each statement describes you. Please note that zero (0) is NOT an appropriate answer. NOTE THE SCALE IS FROM 1-7 FOR THIS SERIES OF QUESTIONS.

Unlikely to Enjoy (1) to Likely to Enjoy (7)

54) Adhering to the commonly established rules of my work area
55) Following well-trodden ways and generally accepted methods for solving problems
56) Being methodical and consistent in the way I tackle problems
57) Paying strict regard to the sequence of steps needed for the completion of a job
58) Adhering to the well-known techniques, methods and procedures of my area
59) Being strict on the production of results, as and when required
60) Accepting readily the usual generally proven methods of solution
61) Being precise and exact about production of results and reports
62) Adhering carefully to the standards of my work area
63) Being fully aware beforehand of the sequence of steps required in solving problems
64) Being confronted with a maze of ideas which may, or may not, lead me somewhere
65) Pursuing a problem, particularly if it takes me into areas I don't know much about
66) Linking ideas which stem from more than one area of investigation
67) Being fully occupied with what appear to be novel methods of solution
68) Making unusual connections about ideas even if they are trivial
69) Searching for novel approaches not required at the time
70) Struggling to make connections between apparently unrelated ideas
71) Spending time tracing relationships between disparate areas of work
72) Being caught up by more than one concept, method or solution

Appendix H (Continued)

Information about how YOU interact with others:

Use the rating scale below to describe how accurately each statement describes you. Please note that zero (0) is NOT an appropriate answer. NOTE THIS SCALE IS FROM 1-5.

Rarely (1) to Always (5)

73) I try to investigate an issue with my boss to find a solution acceptable to us
74) I generally try to satisfy the needs of my peers
75) I attempt to avoid being put on the spot and try to keep conflict with my peers to myself
76) I try to integrate my ideas with the ideas of others to come up with a joint decision
77) I give up something in order to get something else
78) I try to work with others to find solutions to problems that satisfy both our expectations
79) I usually avoid open discussion of differences or disagreements with others
80) I usually hold on to my solution to a problem
81) I try to find a middle course to resolve an impasse
82) I use my influence to get my ideas accepted
83) If possible, I use authority to make a decision go in my favor
84) I usually accommodate the wishes of others
85) I give in to the wishes of my boss.
86) I win some and I lose some.
87) I exchange accurate information with others in order to solve a problem together
88) I sometimes will help a decision to be made in favor of others
89) I usually make concessions to others.
90) I argue my case in order to show the merits of my position
91) I try to play down our differences to reach a compromise with others
92) I usually propose a middle ground for breaking deadlocks
93) I negotiate with my boss so a compromise can be reached
94) I try to stay away from disagreeing with my boss
95) I avoid unpleasant encounters with others.
96) I use my expertise to make decisions in my favor
97) I often go along with the suggestions of others
98) I use give and take so that a compromise can be made
99) When I disagree with my boss, I am generally firm in pursuing my side of the issue
100) I try to bring everyone's concerns out in the open so that the issues can be resolved in the best possible way.
101) I collaborate with others to come up with decisions acceptable to us.
102) I try to satisfy the expectations of others
103) I sometimes use my power to win a competitive situation
104) I try to keep my disagreement with my boss to myself in order to avoid hard feelings

Appendix H (Continued)

105) I try to avoid unpleasant exchanges with others
106) I generally avoid an argument with my boss
107) I try to work with others to get a proper understanding of the problem

END OF SURVEY

Appendix I
Questionnaire II and Final Questionnaire Study 2

(Note: Questions 3-12 and 33-36 used in dissertation)

Instructions: Please respond to the following items as accurately as possible. This data will be kept CONFIDENTIAL. Place all answers on the optical scanning sheet (Scantron) provided. Please DO NOT MARK on the question sheets. Thank you.

General Information:
Name: please indicate your instructor and time of class (leaving no spaces)
Sex: please indicate the following: Male or Female
Grade or Education: SKIP THIS ITEM
Birth date: please indicate the following: Month, Day, Year
Identification number: please put your SUBJECT ID in this space
Special Code: SKIP THIS ITEM

General Information on today's Session:

1. In today's session, I / we worked on...
0) Task II (printed on white paper)  1) Task III (printed on yellow paper)  2) Task IV (printed on pink paper)  3) Task V (printed on blue paper)

2. In today's session my role was that of the...
0) Not applicable, I did not have a partner  1) Driver  2) Navigator

Appendix I (Continued)

General evaluation about today's Pair Programming session:

Use the rating scale to describe how accurately each statement describes your situation. Please note that zero (0) is an appropriate answer ONLY IF YOU DID NOT HAVE A PARTNER.

Not Applicable, I did not have a partner (0); Not Faithful (1) to Very Faithful (5)

3. How faithful were you and your partner in following the pair programming technique during today's session?

Use the rating scale to describe how accurately each statement describes your situation. Please note that zero (0) is an appropriate answer ONLY IF YOU DID NOT HAVE A PARTNER.

Not Applicable, I did not have a partner (0); Least (1) to Most (5)

4. During today's session, my partner and I exerted equal influence in completing the task.
5. During today's session, my partner was more dominant in completing the task.
6. During today's session, I was more dominant in completing the task.

Appendix I (Continued)

General observations about your Work Patterns in today's Pair Programming session:

Use the rating scale to describe how accurately each statement describes your situation. Please note that zero (0) is an appropriate answer ONLY IF YOU DID NOT HAVE A PARTNER.

Not Applicable, I did not have a partner (0); Least (1) to Most (5)

7. We read the task first, then planned and worked together throughout.
8. We read the task and did the preliminary work alone, and then combined our results.
9. We divided the tasks and worked separately.

General observations about Conflict during this session:

Conflict is commonly viewed as a behavior such as ARGUMENTS OR OPPOSING PREFERENCES.

10. The number of times my partner and I experienced episodes of disagreement or conflict during today's session was...
0) None  1) 1 to 3  2) 3 to 5  3) 5 to 7  4) 7 to 9  5) More than 9  6) Not applicable, I did not have a partner

11. If you and your partner experienced episodes of conflict or disagreement during the session, it generally concerned...
0) Not applicable, no conflict  1) The task(s) to be done  2) Interpersonal in nature  3) Not applicable, I did not have a partner

Appendix I (Continued)

12. If you and your partner experienced episodes of conflict or disagreement during the session, it was generally...
0) Not applicable, no conflict  1) Resolved  2) Not resolved  3) Not applicable, I did not have a partner

General questions about your Partner and the Pair Programming environment:

13. I knew my partner before today...
0) Not at all  1) Only slightly  2) Somewhat  3) As a casual acquaintance  4) Very well  5) Extremely well  6) Not applicable, I did not have a partner

14. I have worked with my partner in pair programming before today...
0) Not at all  1) Once or twice  2) Occasionally  3) Every month  4) Every week  5) Every day  6) Not applicable, I did not have a partner

15. I have done pair programming before...
0) Not at all  1) Once or twice  2) Occasionally  3) Every month  4) Every week  5) Every day  6) Not applicable, I did not have a partner

16. If you worked in a virtual setting today: I have worked in a virtual setting before...
0) Not at all  1) Once or twice  2) Occasionally  3) Every month  4) Every week  5) Every day  6) Not applicable, I did not have a partner

Appendix I (Continued)

General questions about how you perceive YOURSELF:

Use the rating scale to describe how accurately each statement describes your situation. Please note that zero (0) may be an appropriate answer, in some instances, ONLY if you did NOT do pair programming.

Not Applicable, I did not do pair programming (0); Strongly Disagree (1) to Strongly Agree (7)

17. I am capable of dealing with most problems that come up at work.
18. When I set important goals for myself, I achieve them.
19. If something looks too complicated, I avoid it.
20. When trying to learn something new, I soon give up if I am not initially successful.
21. I am a self-reliant person.
22. Initial failure at programming in pairs makes me try harder.
23. I feel confident about my ability to do pair programming.
24. I am capable of doing programming in pairs at work.
25. If I have failures in pair programming, I will try harder.

General questions about how you perceived the pair programming experience:

Use the rating scale to describe how accurately each statement describes your situation. Even if YOU DID NOT HAVE A PARTNER, please answer these questions relative to how you perceived your programming experience on today's task.

Very Low (1) to Very High (7)

Please indicate your self-assessment of the following demands placed on you by the pair programming setting:
26. Mental Demand
27. Physical Demand
28. Temporal Demand
29. Effort
30. Frustration
31. Performance

Appendix I (Continued)

Use the rating scale to describe how accurately each statement describes your situation. Please note that zero (0) is an appropriate answer ONLY IF YOU DID NOT HAVE A PARTNER.

Not Applicable, I did not have a partner (0); Strongly Disagree (1) to Strongly Agree (7)

32. I am satisfied with the pair programming work setting.
33. The pair programming work setting allows me to get help from my partner when needed.
34. The pair programming work setting allows me to feel like I belong to the development team.
35. I am not satisfied with pair programming.
36. I do not believe that the pair programming setting allows me to get help when I need it.

Information about how YOUR PARTNER interacts with others:

Use the rating scale below to describe how accurately each statement describes your partner. Please note that zero (0) is an appropriate answer ONLY IF YOU DID NOT HAVE A PARTNER.

Not Applicable, I did not have a partner (0); Rarely (1) to Always (5)

37. my partner generally tries to satisfy the needs of my peers
38. my partner attempts to avoid being put on the spot and tries to keep conflict with my peers to him / herself
39. my partner tries to integrate his / her ideas with the ideas of others to come up with a joint decision
40. my partner gives up something in order to get something else
41. my partner tries to work with others to find solutions to problems that satisfy both our expectations
42. my partner usually avoids open discussion of differences or disagreements with others
43. my partner usually holds on to his / her solution to a problem
44. my partner tries to find a middle course to resolve an impasse
45. my partner uses his / her influence to get his / her ideas accepted
46. If possible, my partner uses authority to make a decision go in his / her favor
47. my partner usually accommodates the wishes of others
48. my partner wins some and loses some.

Appendix I (Continued)

Information about how YOUR PARTNER interacts with others, continued:

Use the rating scale below to describe how accurately each statement describes your partner. Please note that zero (0) is an appropriate answer ONLY IF YOU DID NOT HAVE A PARTNER.

Not Applicable, I did not have a partner (0); Rarely (1) to Always (5)

49. my partner exchanges accurate information with others in order to solve a problem together
50. my partner sometimes will help a decision to be made in favor of others
51. my partner usually makes concessions to others.
52. my partner argues his / her case in order to show the merits of his / her position
53. my partner tries to play down our differences to reach a compromise with others
54. my partner usually proposes a middle ground for breaking deadlocks
55. my partner avoids unpleasant encounters with others.
56. my partner uses his / her expertise to make decisions in my favor
57. my partner often goes along with the suggestions of others
58. my partner uses give and take so that a compromise can be made
59. my partner tries to bring everyone's concerns out in the open so that the issues can be resolved in the best possible way
60. my partner collaborates with others to come up with decisions acceptable to him / her
61. my partner tries to satisfy the expectations of others
62. my partner sometimes uses his / her power to win a competitive situation
63. my partner tries to avoid unpleasant exchanges with others
64. my partner tries to work with others to get a proper understanding of the problem

Appendix I (Continued)

General questions about the tasks:

Use the rating scale to describe how accurately each statement describes your situation. Please note that zero (0) may be an appropriate answer, in some instances, ONLY if you did NOT have a partner.

Not Applicable, I did not have a partner (0); Very Small Extent (1) to Very Large Extent (7)

To what extent did you perform the following?
65. I prepared test data sets for modules.
66. I prepared pseudocode for modules.
67. I checked pseudocode prepared.
68. I checked test data sets prepared.
69. I used test data sets prepared when checking pseudocode.
70. I made extensive use of my knowledge of programming and testing concepts and techniques.
71. I learned a great deal about the system by mentally processing parts of the design specification.
72. I frequently consulted the documentation provided.
73. I obtained information about the system from comments in the design specification.
74. I added new functionality to the pseudocode prepared during the task.
75. I modified test cases prepared during the task.
76. I asked a colleague for technical information on techniques for developing pseudocode.
77. I asked a colleague for technical information on testing techniques.

To what extent did you perform the following?
78. I had to keep my partner informed of my work so as to keep my work consistent with other task steps.
79. I was required to share my work for review with someone else.
80. I made an effort to ensure that the changes I made in these tasks would not interfere with other work being done at the same time by others.
81. I needed input from others in order to complete my work.
82. I was required to review the work of others.

203 Appendix I (Continued) General Questions about the task : Use the rating scale to describe how a ccurately each statement describes your situation. Please note that zero (0) is NOT an appropriate answer. Very Small Extent Very Large Extent 1 2 3 4 5 6 7 To what extent does the pair programmi ng technique available to you supply the following functions? 83. Prepare a test data set to meet design specifications. 84. Write pseudocode for a module to meet design specifications. 85. Check pseudocode for errors. 86. Check unit test cases for errors. 87. Describe test records in te rms of number and content. To what extent does the pair programmi ng technique available to you supply the following functions? 88. Share task data or information with other individuals. 89. Exchange information relating to th e task with other individuals. 90. Maintain task management status and information. 91. Track schedule and/or progress for the task completion. 92. To what extent did you use the pair progr amming technique in completion of the task? To what extent did you agree with the following? 93. I know my partner will consider my concerns when making decisions. 94. The quality of our communi cation is extremely good. 95. We confront issues effectively. 96. Our goals are the same. 97. We view the world in the same way. 98. I understand my partners primary problems. 99. My partner understands my primary problems. 100.We have many shared activities. 101.I frequently think of my partner as a member of the same unit or team. NOTE: A nswer questions 79 thru 93 only if you used the Groove Software Tool in todays session. If you did not use Groove, you are finished!!


204 Appendix I (Continued)

General Questions about the Groove Software tool: Use the rating scale to describe how accurately each statement describes your situation. Please note that zero (0) is NOT an appropriate answer.

1 = Very Small Extent through 7 = Very Large Extent

To what extent does the Groove software tool enable you to perform the following functions?
102. Prepare a test data set to meet design specifications.
103. Write pseudocode for a module to meet design specifications.
104. Check pseudocode for errors.
105. Check unit test cases for errors.
106. Describe test records in terms of number and content.

To what extent does the Groove software tool enable you to supply the following functions?
107. Share task data or information with other individuals.
108. Exchange information relating to the task with other individuals.
109. Maintain task management status and information.
110. Track schedule and/or progress for the task completion.
111. To what extent did you use the Groove software tool in completion of the task?

- END OF SURVEY - Thank you for your participation!!!


205 Appendix J

Final Questionnaire - Study 2 (Note: Questions 2 and 3 used in dissertation)

Instructions: Please respond to the following items as accurately as possible. This data will be kept CONFIDENTIAL. Place all answers on the optical scanning sheet (Scantron) provided. Please DO NOT MARK on the question sheets. Thank you.

General Information:
Name - please indicate your instructor and time of class (leaving no spaces)
Sex - please indicate the following: Male or Female
Grade or Education - SKIP THIS ITEM
Birth date - please indicate the following: Month, Day, Year
Identification number - please put your SUBJECT ID in this space
Special Code - SKIP THIS ITEM

General Information on today's Session:
2. In today's session, we worked on...
0) Task II (printed on white paper)
1) Task III (printed on yellow paper)
2) Task IV (printed on pink paper)
3) Task V (printed on blue paper)
3. In today's session my role was that of the...
0) Not applicable, I did not have a partner
1) Driver
2) Navigator
IF YOU DID NOT HAVE A PARTNER, DO NOT COMPLETE THE REST OF THIS SURVEY.


206 Appendix J (Continued)

Information about how your PARTNER communicates with you. The phrases listed below describe people's behaviors relative to communication with others in work situations. Use the rating scale below to describe how accurately each statement describes YOUR PARTNER. Please note that zero (0) may be an appropriate answer ONLY IF YOU WORKED IN A VIRTUAL SETTING and are therefore unable to respond to the item.

0 = Not Applicable (I worked in a virtual setting); 1 = Strongly Disagree; 2 = Moderately Disagree; 3 = Slightly Disagree; 4 = Slightly Agree; 5 = Moderately Agree; 6 = Strongly Agree

4. my partner finds it easy to get along with others
5. my partner can adapt to changing situations
6. my partner treats people as individuals
7. my partner interrupts others who talk too much
8. my partner is rewarding to talk to
9. my partner can deal with others effectively
10. my partner is a good listener
11. my partner's work relations are cold and distant
12. my partner has some nervous mannerisms in his / her speech
13. my partner is a very relaxed communicator
14. When he / she disagrees with somebody, my partner is very quick to challenge them
15. my partner can always repeat back to a person exactly what was meant
16. my partner is a very precise communicator
17. my partner leaves a definite impression on people
18. my partner's rhythm or flow of speech is sometimes affected by his / her nervousness
19. Under pressure, my partner comes across as a relaxed speaker
20. my partner's eyes reflect exactly what he / she is feeling when he / she communicates
21. my partner dramatizes a lot
22. my partner finds it very easy to communicate on a one-to-one basis with strangers
23. Usually, my partner deliberately reacts in such a way that people know that he / she is listening to them
24. Usually my partner does not tell people much about him / her self until he / she gets to know them well
25. my partner regularly tells jokes, anecdotes and stories when he / she communicates
25. my partner tends to constantly gesture when he / she communicates


207 Appendix J (Continued)

Information about how your PARTNER communicates with you, continued. The phrases listed below describe people's behaviors relative to communication with others in work situations. Use the rating scale below to describe how accurately each statement describes YOUR PARTNER. Please note that zero (0) may be an appropriate answer ONLY IF YOU WORKED IN A VIRTUAL SETTING and are therefore unable to respond to the item.

0 = Not Applicable (I worked in a virtual setting); 1 = Strongly Disagree; 2 = Moderately Disagree; 3 = Slightly Disagree; 4 = Slightly Agree; 5 = Moderately Agree; 6 = Strongly Agree

26. my partner is an extremely open communicator
27. in a small group of strangers my partner is a very good communicator
28. in arguments my partner insists upon very precise definitions
29. in most work situations my partner generally speaks very frequently
30. my partner finds it extremely easy to maintain a conversation with coworkers he / she has just met
31. my partner likes to be strictly accurate when he / she communicates
32. often my partner physically and vocally acts out what he / she wants to communicate
33. my partner readily reveals personal things about him / her self at work
34. my partner is dominant in work situations
35. my partner is very argumentative at work
36. once he / she gets wound up in a heated discussion at work, my partner has a hard time stopping him / her self
37. my partner is always an extremely friendly communicator
38. my partner really likes to listen very carefully to people
39. very often my partner insists that other people document or present some kind of proof for what they are arguing
40. my partner tries to take charge of things when he / she is with people
41. it bothers my partner to drop an argument that is not resolved
42. in most work situations my partner tends to come on strong
43. my partner is very expressive nonverbally in work situations


208 Appendix J (Continued)

Information about how your PARTNER communicates with you, continued. The phrases listed below describe people's behaviors relative to communication with others in work situations. Use the rating scale below to describe how accurately each statement describes YOUR PARTNER. Please note that zero (0) may be an appropriate answer ONLY IF YOU WORKED IN A VIRTUAL SETTING and are therefore unable to respond to the item.

0 = Not Applicable (I worked in a virtual setting); 1 = Strongly Disagree; 2 = Moderately Disagree; 3 = Slightly Disagree; 4 = Slightly Agree; 5 = Moderately Agree; 6 = Strongly Agree

44. The way my partner says something usually leaves an impression on people
45. Whenever he / she communicates, my partner tends to be very encouraging to people
46. my partner actively uses a lot of facial expressions when he / she communicates
47. my partner very frequently verbally exaggerates to emphasize a point
48. my partner is an extremely attentive communicator
49. As a rule, my partner openly expresses his / her feelings and emotions

-- END OF SURVEY -- Thank you for your participation!!


209 Appendix K

Questionnaire - Study 3 (Note: This questionnaire was adapted for each treatment group, as appropriate. Collaborative Unstructured Problem Solving Questionnaire shown)

OVERVIEW, p. 1 of 1
COLLABORATIVE PROGRAMMING BTAC OVERVIEW

Thank you for participating in this survey about today's study. In completing this questionnaire, we ask that you answer each question carefully. There is no need to deliberate too much over any particular question. Remember, there are no right or wrong answers; we just want a truthful response. All responses will be kept anonymous. You will complete this survey at various intervals throughout today's session, as instructed by the researcher. (DO NOT PROCEED in completing the survey, or in opening the envelopes, once you see the written instructions to stop.) You will begin to complete the survey again at the direction of the researcher. Are there any questions? You will now be asked to complete the first page ONLY of this survey. In doing so, please respond only to this portion of the survey in the spaces below.

Demographic Information
Your identification number (Subject ID) ______________
Special Code (Team Code) ___________
Highest education level in College:
Undergraduate: Degree completed in __________ (year) Or Current class status __________ (e.g., junior, senior)
Graduate: Degree completed in _________ (year) And / Or Current number of graduate hrs. completed _______
Programming languages known: For each language you list, please describe your level of knowledge (indicate with check):
_______________ __ learned __ used in development __ highly proficient
_______________ __ learned __ used in development __ highly proficient
_______________ __ learned __ used in development __ highly proficient
_______________ __ learned __ used in development __ highly proficient


210 Appendix K (Continued)

If you are working, or have worked, please indicate highest position (check one): Staff _____ Supervisor _____ Manager _____
List all IT positions that you have held or currently hold: __________________

PLEASE STOP. DO NOT CONTINUE UNTIL INSTRUCTED TO DO SO. DO NOT PROCEED.

SECTION A, p. 1 of 3
COLLABORATIVE PROGRAMMING BTAC, continued

Instructions: For this portion of the survey you will answer questions 1 through 32, only. DO NOT PROCEED in completing the survey once you see the written instructions to stop. You will begin to complete the survey again at the direction of the researcher. For questions 1 through 32, please record your responses on the Scantron form provided to you. In most instances you will be asked to complete a question based on a number assigned to a scale, such as 1 for strongly disagree or 7 for strongly agree. For example, if you strongly disagree with the question, bubble in 1 (under letter B). Please use a #2 pencil and complete each circle completely. All responses will be kept anonymous. If you have any questions, please ask them now. Please begin when instructed to do so.

General Information:
NAME: FOLLOW INSTRUCTIONS PROVIDED BY RESEARCHER.
SEX: Fill in your sex, M (Male) or F (Female).
IDENTIFICATION: Fill in your subject ID.
DATE OF BIRTH: Please complete month, day and year.
SPECIAL CODES: Fill in your special code.

Please provide the following information about how YOU interact with others. Use the rating scale below to describe how accurately each statement describes you. Please note that zero (0) is NOT an appropriate answer. NOTE THIS SCALE IS FROM 1 TO 5.

1 = Rarely; 3 = Neither; 5 = Always

1. I try to investigate an issue with others to find a solution acceptable to us.
2. I generally try to satisfy the needs of others.
3. I attempt to avoid being put on the spot and try to keep conflict with others to myself.
4. I try to integrate my ideas with those of others to come up with a decision jointly.
5. I try to work with others to find solutions to problems that satisfy both our expectations.


211 Appendix K (Continued)

6. I usually avoid open discussion of differences or disagreements with others.
7. I try to find a middle course to resolve an impasse.
8. I use my influence to get my ideas accepted.
9. If possible, I use authority to make a decision go in my favor.
10. I usually accommodate the wishes of others.
11. I give in to the wishes of others.

SECTION A, p. 2 of 3
COLLABORATIVE PROGRAMMING BTAC, continued

Please provide the following information about how YOU interact with others. Use the rating scale below to describe how accurately each statement describes you. Please note that zero (0) is NOT an appropriate answer. NOTE THIS SCALE IS FROM 1 TO 5.

1 = Rarely; 3 = Neither; 5 = Always

12. I exchange accurate information with others in order to solve a problem together.
13. I usually make concessions to others.
14. I usually propose a middle ground for breaking deadlocks.
15. I negotiate with others so a compromise can be reached.
16. I try to stay away from disagreeing with others.
17. I avoid unpleasant encounters with others.
18. I use my expertise to make decisions in my favor.
19. I often go along with the suggestions of others.
20. I use give and take so that a compromise can be made.
21. I am generally firm in pursuing my side of the issue.
22. I try to bring all our concerns out in the open so that the issues can be resolved in the best possible way.


212 Appendix K (Continued)

23. I collaborate with others to come up with decisions acceptable to us.
24. I try to satisfy the expectations of others.
25. I sometimes use my power to win a competitive situation.
26. I try to keep my disagreement with others to myself in order to avoid hard feelings.
27. I try to avoid unpleasant exchanges with my peers.
28. I try to work with my peers for a proper understanding of the problem.

SECTION A, p. 3 of 3
COLLABORATIVE PROGRAMMING BTAC, continued

Please note that for questions 29 through 32 the answer key has changed.

29. I have the following years of general work experience (to the nearest year):
0) None 1) One 2) Two 3) Three 4) Four 5) Five 6) Six 7) Seven 8) Eight 9) Nine or more

30. I have the following years of IT work experience (to the nearest year):
0) None 1) One 2) Two 3) Three 4) Four 5) Five 6) Six 7) Seven 8) Eight 9) Nine or more


213 Appendix K (Continued)

31. I have the following years of programming experience (to the nearest year):
0) None 1) One 2) Two 3) Three 4) Four 5) Five 6) Six 7) Seven 8) Eight 9) Nine or more

32. I have had training in the following number of programming languages:
0) None 1) One 2) Two 3) Three 4) Four 5) Five 6) Six 7) Seven

PLEASE STOP. DO NOT CONTINUE UNTIL INSTRUCTED TO DO SO. DO NOT PROCEED.

SECTION B, p. 1 of 5
COLLABORATIVE PROGRAMMING BTAC, continued
QUESTIONS ABOUT THE TASK YOU JUST COMPLETED

Questions 33 through 71 relate to the experimental task that you just completed. Please provide the following information based on the programming task that you just completed. All answers should be recorded on the Scantron form provided.

Part a.
33. In today's session I / we worked on...
0) Task II - PINK paper
1) Task III - YELLOW paper

34. Compared to other programming assignments, I found the task that I just completed...
0) Not difficult at all
1) Quite difficult
2) Slightly difficult
3) Neither difficult nor complex
4) Slightly complex
5) Quite complex
6) Extremely complex


214 Appendix K (Continued)

35. If this is the first task you have completed today, please bubble in 7 on the Scantron sheet. If this is NOT the first task you completed today, answer this question: Compared to the programming task completed in the first session, I found this task...
0) Not difficult at all
1) Quite difficult
2) Slightly difficult
3) Neither difficult nor complex
4) Slightly complex
5) Quite complex
6) Extremely complex
7) Not applicable, this was the first task completed in today's session

36. I have worked on this programming task before.
0) Yes
1) No

SECTION B, p. 2 of 5
COLLABORATIVE PROGRAMMING BTAC, continued
QUESTIONS ABOUT THE TASK YOU JUST COMPLETED

Part b. Today you were instructed to brainstorm first before writing the pseudocode alone. Please respond to the following questions regarding how well you followed these instructions. Use the rating scale below to describe how accurately each statement describes you. Please note that zero (0) is NOT an appropriate answer. NOTE THIS SCALE IS FROM 1 TO 7.

1 = Strongly Disagree; 2 = Quite Disagree; 3 = Slightly Disagree; 4 = Neither; 5 = Slightly Agree; 6 = Quite Agree; 7 = Strongly Agree

37. We were able to reach consensus on how to apply brainstorming for the programming task, i.e. before writing pseudocode alone.
38. We always agreed on how brainstorming should be used for our programming task, i.e. before writing pseudocode alone.
39. There was some disagreement between us on how to utilize brainstorming in order to perform our programming task, i.e. before writing pseudocode alone.
40. We were not able to reach consensus, or a mutual understanding, on how to make use of brainstorming to perform our programming task, i.e. before writing pseudocode alone.
41. Overall, we agreed on how we should brainstorm today for our programming assignment, i.e. before writing pseudocode alone.


215 Appendix K (Continued)

42. There was no conflict between us regarding how we should use brainstorming in our work on the programming assignment.
43. We had difficulty agreeing about how we should incorporate brainstorming into our work on the programming assignment.
44. We reached mutual understanding on how we should incorporate brainstorming into our work on the programming assignment.
45. We differed (argued) about how we should incorporate brainstorming into our work on the programming assignment.
46. We were able to reach consensus on how we should incorporate brainstorming into our work on the programming assignment.

SECTION B, p. 3 of 5
COLLABORATIVE PROGRAMMING BTAC, continued
QUESTIONS ABOUT THE TASK YOU JUST COMPLETED

Part c. Today you were instructed to brainstorm first before writing the pseudocode. Please respond to the following questions regarding the programming task you just completed. Use the rating scale below to describe how accurately each statement describes you. Please note that zero (0) is NOT an appropriate answer. NOTE THIS SCALE IS FROM 1 TO 7.

1 = Strongly Disagree; 2 = Quite Disagree; 3 = Slightly Disagree; 4 = Neither; 5 = Slightly Agree; 6 = Quite Agree; 7 = Strongly Agree

47. We were faithful in doing brainstorming first before writing the pseudocode alone for the programming assignment.
48. My partner and I exerted equal influence in doing brainstorming first before writing the pseudocode alone for our programming assignment.
49. We read the task first, then planned and worked together throughout, in doing brainstorming first before writing the pseudocode alone for our programming assignment.
50. We followed the instructions that were given to us in doing brainstorming first before writing the pseudocode alone for our programming assignment.
51. There was constant interaction between us in doing brainstorming first before writing the pseudocode alone for our programming assignment.


216 Appendix K (Continued)

SECTION B, p. 4 of 5
COLLABORATIVE PROGRAMMING BTAC, continued
QUESTIONS ABOUT THE TASK YOU JUST COMPLETED

Part d. Use the rating scale below to describe how accurately each statement describes YOU. Please note that zero (0) is NOT an appropriate answer. NOTE THIS SCALE IS FROM 1 TO 7.

1 = Strongly Disagree; 2 = Quite Disagree; 3 = Slightly Disagree; 4 = Neither; 5 = Slightly Agree; 6 = Quite Agree; 7 = Strongly Agree

52. I am capable of dealing with most problems that come up at work.
53. When I set important goals for myself, I achieve them.
54. If something looks too complicated, I avoid it.
55. When trying to learn something new, I soon give up if I am not initially successful.
56. I am a self-reliant person.
57. Initial failure at the kind of task I did today makes me try harder.
58. I feel confident about my ability to do the kind of task I did today.
59. I am capable of doing the kind of task I did today.
60. If I have failures in doing the kind of task I did today, I will try harder.


217 Appendix K (Continued)

SECTION B, p. 5 of 5
COLLABORATIVE PROGRAMMING BTAC, continued
QUESTIONS ABOUT THE TASK YOU JUST COMPLETED

Part e. Today you were instructed to brainstorm together before writing the pseudocode alone. Use the rating scale below to describe how accurately each statement describes your situation. Please note that zero (0) is NOT an appropriate answer. NOTE THIS SCALE IS FROM 1 TO 7.

1 = Strongly Disagree; 2 = Quite Disagree; 3 = Slightly Disagree; 4 = Neither; 5 = Slightly Agree; 6 = Quite Agree; 7 = Strongly Agree

61. I am satisfied working together on brainstorming and then writing code alone.
62. I am satisfied with the brainstorming outputs we generated on this assignment.
63. I am satisfied with the pseudocode outputs I generated on this assignment.
64. I am satisfied with the assumptions we made while working on brainstorming together for this assignment.
65. I would like to continue to work together on brainstorming for this assignment.
66. We were very successful working together on brainstorming for this assignment.
67. We were very successful in accomplishing the desired outcomes required of us by doing brainstorming together for this assignment.
68. I like working together on brainstorming for this assignment.
69. I believe that my code is of better quality because we worked together brainstorming for this assignment.
70. I liked working together on brainstorming, because it helped me write better code alone.
71. I caught more defects in my code since we did brainstorming together for this assignment.

PLEASE STOP. DO NOT CONTINUE UNTIL INSTRUCTED TO DO SO.


218 Appendix K (Continued)

SECTION C, p. 1 of 5
COLLABORATIVE PROGRAMMING BTAC, continued
QUESTIONS ABOUT THE FINAL TASK YOU JUST COMPLETED

You have just completed THE FINAL experimental task in today's session. Questions 72 through 110 relate to the experimental task that you just completed. Please provide the following information based on the final programming task that you just completed. All answers should be recorded on the Scantron form provided.

Part f.
72. In today's session I / we worked on...
2) Task II - PINK paper
3) Task III - YELLOW paper

73. Compared to other programming assignments, I found this final task...
0) Not difficult at all
1) Quite difficult
2) Slightly difficult
3) Neither difficult nor complex
4) Slightly complex
5) Quite complex
6) Extremely complex

74. Compared to the other programming task I completed previously for this experiment, I found this final task...
0) Not difficult at all
1) Quite difficult
2) Slightly difficult
3) Neither difficult nor complex
4) Slightly complex
5) Quite complex
6) Extremely complex

75. I have worked on this final programming task before.
1) Yes
2) No

- Please continue -


219 Appendix K (Continued)

SECTION C, p. 2 of 5
COLLABORATIVE PROGRAMMING BTAC, continued
QUESTIONS ABOUT THE FINAL TASK YOU JUST COMPLETED

Part g. Today you were instructed to brainstorm first before writing the pseudocode alone. Please respond to the following questions regarding how well you followed these instructions. Use the rating scale below to describe how accurately each statement describes you. Please note that zero (0) is NOT an appropriate answer. NOTE THIS SCALE IS FROM 1 TO 7.

1 = Strongly Disagree; 2 = Quite Disagree; 3 = Slightly Disagree; 4 = Neither; 5 = Slightly Agree; 6 = Quite Agree; 7 = Strongly Agree

76. We were able to reach consensus on how to apply brainstorming for the programming task, i.e. before writing pseudocode alone.
77. We always agreed on how brainstorming should be used for our programming task, i.e. before writing pseudocode alone.
78. There was some disagreement between us on how to utilize brainstorming in order to perform our programming task, i.e. before writing pseudocode alone.
79. We were not able to reach consensus, or a mutual understanding, on how to make use of brainstorming to perform our programming task, i.e. before writing pseudocode alone.
80. Overall, we agreed on how we should brainstorm today for our programming assignment, i.e. before writing pseudocode alone.
81. There was no conflict between us regarding how we should brainstorm in our work on the programming assignment.
82. We had difficulty agreeing about how we should incorporate brainstorming into our work on the programming assignment.
83. We reached mutual understanding on how we should incorporate brainstorming into our work on the programming assignment.
84. We differed (argued) about how we should incorporate brainstorming into our work on the programming assignment.
85. We were able to reach consensus on how we should incorporate brainstorming into our work on the programming assignment.


220 Appendix K (Continued)

SECTION C, p. 3 of 5
COLLABORATIVE PROGRAMMING BTAC, continued
QUESTIONS ABOUT THE FINAL TASK YOU JUST COMPLETED

Part h. Today you were instructed to do brainstorming first before writing the pseudocode. Please respond to the following questions regarding how well you followed these instructions. Use the rating scale below to describe how accurately each statement describes you. Please note that zero (0) is NOT an appropriate answer. NOTE THIS SCALE IS FROM 1 TO 7.

1 = Strongly Disagree; 2 = Quite Disagree; 3 = Slightly Disagree; 4 = Neither; 5 = Slightly Agree; 6 = Quite Agree; 7 = Strongly Agree

86. We were faithful in doing brainstorming first before writing the pseudocode alone for the programming assignment.
87. My partner and I exerted equal influence in doing brainstorming first before writing the pseudocode alone for our programming assignment.
88. We read the task first, then planned and worked together throughout, in doing brainstorming first before writing the pseudocode alone for our programming assignment.
89. We followed the instructions that were given to us in doing brainstorming first before writing the pseudocode alone for our programming assignment.
90. There was constant interaction between us in doing brainstorming first before writing the pseudocode alone for our programming assignment.

- Please continue -


221 Appendix K (Continued)

SECTION C, p. 4 of 5
COLLABORATIVE PROGRAMMING BTAC, continued
QUESTIONS ABOUT THE FINAL TASK YOU JUST COMPLETED

Part i. Use the rating scale below to describe how accurately each statement describes YOU. Please note that zero (0) is NOT an appropriate answer. NOTE THIS SCALE IS FROM 1 TO 7.

1 = Strongly Disagree; 2 = Quite Disagree; 3 = Slightly Disagree; 4 = Neither; 5 = Slightly Agree; 6 = Quite Agree; 7 = Strongly Agree

91. I am capable of dealing with most problems that come up at work.
92. When I set important goals for myself, I achieve them.
93. If something looks too complicated, I avoid it.
94. When trying to learn something new, I soon give up if I am not initially successful.
95. I am a self-reliant person.
96. Initial failure at the kind of task I did today makes me try harder.
97. I feel confident about my ability to do the kind of task I did today.
98. I am capable of doing the kind of task I did today.
99. If I have failures in doing the kind of task I did today, I will try harder.


222 Appendix K (Continued)

SECTION C, p. 5 of 5
COLLABORATIVE PROGRAMMING BTAC, continued
QUESTIONS ABOUT THE FINAL TASK YOU JUST COMPLETED

Part j. Today you were instructed to brainstorm together before writing the pseudocode alone. Use the rating scale below to describe how accurately each statement describes your situation. Please note that zero (0) is NOT an appropriate answer. NOTE THIS SCALE IS FROM 1 TO 7.

1 = Strongly Disagree; 2 = Quite Disagree; 3 = Slightly Disagree; 4 = Neither; 5 = Slightly Agree; 6 = Quite Agree; 7 = Strongly Agree

100. I am satisfied working together on brainstorming and then writing code alone.
101. I am satisfied with the brainstorming outputs we generated on this assignment.
102. I am satisfied with the pseudocode outputs I generated on this assignment.
103. I am satisfied with the assumptions we made while working on brainstorming together for this assignment.
104. I would like to continue to work together on brainstorming for this assignment.
105. We were very successful working together on brainstorming for this assignment.
106. We were very successful in accomplishing the desired outcomes required of us by doing brainstorming together for this assignment.
107. I like working together on brainstorming for this assignment.
108. I believe that my code is of better quality because we worked together brainstorming for this assignment.
109. I liked working together on brainstorming, because it helped me write better code alone.
110. I caught more defects in my code since we did brainstorming together for this assignment.

PLEASE STOP. END OF SESSION. THANK YOU FOR YOUR PARTICIPATION TODAY. You will now receive further instructions from the researcher.


223 About the Author

Madeline Ann Domino earned a doctorate in Business Administration from the University of South Florida, with a major in Management Information Systems and a minor in Accounting. She also holds a Master of Business Administration and a Master of Public Health from the University of South Florida, as well as a Bachelor of Science in Accounting from Florida State University. Her areas of research interest include systems development, human factors and distributed work. Prior to entering the academy, Dr. Domino held senior management positions at Bank of America, where she served as the principal statewide finance officer for a $25 billion affiliate, as a member of the Senior Management Team and on the Audit Committee. She led several nationwide systems projects. Other significant professional experience includes Sun Microsystems, PricewaterhouseCoopers and Deloitte & Touche. Domino is a Certified Public Accountant and serves on the Board of the Employers Health Coalition.

