<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<record xmlns="http://www.loc.gov/MARC21/slim"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd">
  <leader>nam 2200397Ka 4500</leader>
  <controlfield tag="001">002046177</controlfield>
  <controlfield tag="005">20100104091005.0</controlfield>
  <controlfield tag="007">cr mnuuuuuu</controlfield>
  <controlfield tag="008">100104s2008 flu s 000 0 eng d</controlfield>
  <datafield tag="024" ind1="8" ind2=" ">
    <subfield code="a">E14SFE0002735</subfield>
  </datafield>
  <datafield tag="035" ind1=" " ind2=" ">
    <subfield code="a">(OCoLC)495698886</subfield>
  </datafield>
  <datafield tag="040" ind1=" " ind2=" ">
    <subfield code="a">FHM</subfield>
    <subfield code="c">FHM</subfield>
  </datafield>
  <datafield tag="049" ind1=" " ind2=" ">
    <subfield code="a">FHMM</subfield>
  </datafield>
  <datafield tag="090" ind1=" " ind2=" ">
    <subfield code="a">LB3051 (Online)</subfield>
  </datafield>
  <datafield tag="100" ind1="1" ind2=" ">
    <subfield code="a">Phan, Ha T.</subfield>
  </datafield>
  <datafield tag="245" ind1="0" ind2=" ">
    <subfield code="a">Correlates of mathematics achievement in developed and developing countries :</subfield>
    <subfield code="b">an HLM analysis of TIMSS 2003 eighth-grade mathematics scores</subfield>
    <subfield code="h">[electronic resource] /</subfield>
    <subfield code="c">by Ha T. Phan.</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="a">[Tampa, Fla] :</subfield>
    <subfield code="b">University of South Florida,</subfield>
    <subfield code="c">2008.</subfield>
  </datafield>
  <datafield tag="500" ind1=" " ind2=" ">
    <subfield code="a">Title from PDF of title page.</subfield>
  </datafield>
  <datafield tag="500" ind1=" " ind2=" ">
    <subfield code="a">Document formatted into pages; contains 322 pages.</subfield>
  </datafield>
  <datafield tag="500" ind1=" " ind2=" ">
    <subfield code="a">Includes vita.</subfield>
  </datafield>
  <datafield tag="502" ind1=" " ind2=" ">
    <subfield code="a">Dissertation (Ph.D.)--University of South Florida, 2008.</subfield>
  </datafield>
  <datafield tag="504" ind1=" " ind2=" ">
    <subfield code="a">Includes bibliographical references.</subfield>
  </datafield>
  <datafield tag="516" ind1=" " ind2=" ">
    <subfield code="a">Text (Electronic dissertation) in PDF format.</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
    <subfield code="a">ABSTRACT: Using eighth-grade mathematics scores from TIMSS 2003, a large-scale international achievement assessment database, this study investigated correlates of math achievement in two developed countries, Canada and the United States, and two developing countries, Egypt and South Africa. Variation in math achievement within and between schools for individual countries was accounted for by a series of two-level HLM models. Specifically, there were five sets of HLM models representing student background, home resources, instructional practices, teacher background, and school background related factors. In addition, a final model was built by including all the statistically significant predictors from the earlier models to predict math achievement. Findings from this study suggested that whereas the instructional practices model worked best for the United States, and the teacher background model served as the most efficient and parsimonious model for predicting math achievement in Egypt, the final model served as the best model for predicting math achievement in Canada and South Africa. These findings provide empirical evidence that different models are needed to account for factors related to achievement in different countries. This study therefore highlights that policy makers and educators in developing countries should not base their educational decisions and educational reform projects solely on research findings from developed countries; rather, they need to use country-specific findings to support their educational decisions. This study also provides a methodological framework for applied researchers to evaluate the effects of background and contextual factors on students' math achievement.</subfield>
  </datafield>
  <datafield tag="538" ind1=" " ind2=" ">
    <subfield code="a">Mode of access: World Wide Web.</subfield>
  </datafield>
  <datafield tag="538" ind1=" " ind2=" ">
    <subfield code="a">System requirements: World Wide Web browser and PDF reader.</subfield>
  </datafield>
  <datafield tag="590" ind1=" " ind2=" ">
    <subfield code="a">Advisor: Jeffrey D. Kromrey, Ph.D.</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Secondary data</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Math performance</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Multilevel analysis</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Large-scale assessment</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">International research</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2=" ">
    <subfield code="a">Dissertations, Academic</subfield>
    <subfield code="z">USF</subfield>
    <subfield code="x">Educational Measurement and Evaluation</subfield>
    <subfield code="x">Doctoral.</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="t">USF Electronic Theses and Dissertations.</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="u">http://digital.lib.usf.edu/?e14.2735</subfield>
  </datafield>
</record>
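The abstract above describes a two-level HLM approach: an unconditional model first partitions achievement variance within and between schools, and conditional models then add predictors. The steps can be sketched with statsmodels' MixedLM. This is an illustrative sketch only, not the dissertation's actual analysis: the data are simulated, and the level-1 predictor name `self_conf` is a hypothetical stand-in for a student background variable.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated TIMSS-style data: students (level 1) nested within schools (level 2).
rng = np.random.default_rng(0)
n_schools, n_students = 40, 25
school = np.repeat(np.arange(n_schools), n_students)
school_effect = rng.normal(0, 20, n_schools)[school]   # between-school variation
self_conf = rng.normal(0, 1, n_schools * n_students)   # hypothetical level-1 predictor
score = 500 + 15 * self_conf + school_effect + rng.normal(0, 60, n_schools * n_students)
df = pd.DataFrame({"score": score, "self_conf": self_conf, "school": school})

# Unconditional model (Model 1): random school intercept, no predictors.
m0 = smf.mixedlm("score ~ 1", df, groups=df["school"]).fit()
# Intraclass correlation: share of variance lying between schools.
icc = m0.cov_re.iloc[0, 0] / (m0.cov_re.iloc[0, 0] + m0.scale)

# Conditional model: add a level-1 (student background) predictor.
m1 = smf.mixedlm("score ~ self_conf", df, groups=df["school"]).fit()
print(f"ICC = {icc:.2f}; self_conf slope = {m1.params['self_conf']:.1f}")
```

In the study's design, each of the five factor domains would contribute its own set of such conditional models, and a final model would combine the statistically significant predictors.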
Correlates of Mathematics Achievement in Developed and Developing Countries: An HLM Analysis of TIMSS 2003 Eighth-Grade Mathematics Scores

by

Ha T. Phan

A dissertation submitted in partial fulfillment of the requirements for the degree of
Doctor of Philosophy
Department of Educational Measurement and Evaluation
College of Education
University of South Florida

Major Professor: Jeffrey D. Kromrey, Ph.D.
Robert F. Dedrick, Ph.D.
John M. Ferron, Ph.D.
Christina Sentovich, Ph.D.

Date of Approval: October 10, 2008

Keywords: secondary data, math performance, multilevel analysis, large-scale assessment, international research

Copyright 2008, Ha T. Phan
DEDICATION

This manuscript is dearly dedicated to my family. To my father and mother, whose positive influences on my life, my education, and the work presented here have been tremendous. To my wonderful husband, Son, and my two beautiful children, Thao and Namson, whose endless love, caring, and countless support have made my educational dream possible.
ACKNOWLEDGEMENTS

I would like to thank the following individuals for their significant contributions toward my successful completion of this degree. First, I would like to thank Jeffrey Kromrey, my major professor, for his wonderful guidance, support, and wisdom that benefited me in many ways throughout this dissertation journey. I am also grateful to my committee, John Ferron, Robert Dedrick, and Christina Sentovich, for their insightful feedback and efforts in the preparation of this manuscript. I am grateful to Gladis Kersaint and Denisse Thompson from the Math Education Program for their valuable consultations and recommendations related to the work presented here. Also, I am grateful to Lou Carey and Cynthia Parshall, who first inspired my interest in measurement and research. Finally, I would like to acknowledge the National Center for Education Statistics for their training in large-scale data analysis as well as their provision of the data for this research.
TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER ONE: INTRODUCTION
    STATEMENT OF PROBLEM
    PURPOSE OF THE STUDY
    RESEARCH QUESTIONS
    RATIONALES FOR THE STUDY
    THEORETICAL FRAMEWORK
    LIMITATIONS
    DEFINITIONS

CHAPTER TWO: LITERATURE REVIEW
    INTRODUCTION
    HISTORY OF INTERNATIONAL MATHEMATICS ACHIEVEMENT ASSESSMENTS
    IMPORTANCE OF INTERNATIONAL MATHEMATICS ACHIEVEMENT ASSESSMENTS
    THEORETICAL FRAMEWORK
    STUDENT-RELATED FACTORS AND STUDENT ACHIEVEMENT
        Gender
        Self-confidence in Learning
        Valuing of Learning
        Family Background
        Time on Homework
        Academic Tutoring
    INSTRUCTIONAL PRACTICES-RELATED FACTORS AND STUDENT ACHIEVEMENT
        Opportunity to Learn
        Homework Assignment
        Classroom Activities
        Instructional Time
    TEACHER-RELATED FACTORS AND STUDENT ACHIEVEMENT
        Preparation to Teach
        Readiness to Teach
        Professional Development
    SCHOOL-RELATED FACTORS AND STUDENT ACHIEVEMENT
        Class Size
        School Resources
        Instructional Limitations
    SUMMARY

CHAPTER THREE: METHOD
    PURPOSE OF THE STUDY
    RESEARCH QUESTIONS
    RESEARCH DESIGN
        Data Source
        Sampling Procedures
        Data Collection
        Sample
        Country Profiles
            Canada
            The United States
            Egypt
            South Africa
        Instruments
            Eighth-grade Mathematics Assessment Survey
                Test booklet
                Subject content areas
                Item writing and development
                Item types
                Translation, cultural adaptation, and verification
                Reliability estimates
                Reported achievement scores
                Raw scores
                Standardized raw scores
                National Rasch scores
                Plausible values
            Background Surveys
                Eighth-grade mathematics student background survey
                Eighth-grade mathematics teacher survey
                School survey
        Variables
        Reliability of Composite Predictor Variables
        Content Experts' Validation of the Selected Variables
            Interview with Content Expert One
            Interview with Content Expert Two
            Follow-up Interviews with Content Experts
    DATA ANALYSIS
        Secondary Data Analysis
            Advantages
            Disadvantages
        Hierarchical Linear Modeling
            Advantages of Multilevel Models
            Assumptions of Multilevel Models
        Analyses of TIMSS 2003 Database
            Sampling Weights
            Managing Multiple Databases
            Treatment of Missing Data
            Univariate Analysis
            Bivariate Analysis
            Hierarchical Linear Modeling Analysis
            Recoding Predictor Variables for HLM Analyses
            Models of the Study
            Power Analysis
            Summary

CHAPTER FOUR: RESULTS
    RESULTS FOR THE UNITED STATES
        Evaluation of Missing Data
        Univariate Analysis
        Bivariate Analysis
        Evaluation of HLM Assumptions
        HLM Analysis
            Unconditional Model (Model 1)
            Research Question 1
            Research Question 2
            Research Question 3
            Research Question 4
            Research Question 5
            Final Model
    RESULTS FOR CANADA
        Evaluation of Missing Data
        Univariate Analysis
        Bivariate Analysis
        Evaluation of HLM Assumptions
        HLM Analysis
            Unconditional Model (Model 1)
            Research Question 1
            Research Question 2
            Research Question 3
            Research Question 4
            Research Question 5
            Final Model
    RESULTS FOR EGYPT
        Evaluation of Missing Data
        Univariate Analysis
        Bivariate Analysis
        Evaluation of HLM Assumptions
        HLM Analysis
            Unconditional Model (Model 1)
            Research Question 1
            Research Question 2
            Research Question 3
            Research Question 4
            Research Question 5
            Final Model
    RESULTS FOR SOUTH AFRICA
        Evaluation of Missing Data
        Univariate Analysis
        Bivariate Analysis
        Evaluation of HLM Assumptions
        HLM Analysis
            Unconditional Model (Model 1)
            Research Question 1
            Research Question 2
            Research Question 3
            Research Question 4
            Research Question 5
            Final Model
    SUMMARY OF RESULTS
        Missing Data
        Univariate Analysis
        Bivariate Analysis
        Evaluation of HLM Assumptions
        HLM Analysis
            Unconditional Model
            Research Question 1
            Research Question 2
            Research Question 3
            Research Question 4
            Research Question 5
            Final Model

CHAPTER FIVE: DISCUSSION
    PURPOSE
    REVIEW OF METHOD
    RESULTS
        Home Resources Model
        Instructional Practices Model
        Teacher Background Model
        School Background Model
        Final Model
    LIMITATIONS
    IMPLICATIONS
    FUTURE RESEARCH

REFERENCES

APPENDICES
    APPENDIX A – LIST OF COUNTRIES
    APPENDIX B – ITEMS USED TO CREATE COMPOSITE VARIABLE OPPORTUNITY TO LEARN
    APPENDIX C – ITEMS USED TO CREATE COMPOSITE VARIABLE READY TO TEACH MATH TOPICS
    APPENDIX D – RELIABILITIES OF COMPOSITE VARIABLES
    APPENDIX E – WEIGHTED CORRELATION OF LEVEL-1 VARIABLES FOR USA
    APPENDIX F – UNWEIGHTED CORRELATION OF LEVEL-2 VARIABLES FOR USA
    APPENDIX G – WEIGHTED CORRELATION OF LEVEL-1 VARIABLES FOR CANADA
    APPENDIX H – UNWEIGHTED CORRELATION OF LEVEL-2 VARIABLES FOR CANADA
    APPENDIX I – WEIGHTED CORRELATION OF LEVEL-1 VARIABLES FOR EGYPT
    APPENDIX K – UNWEIGHTED CORRELATION OF LEVEL-2 VARIABLES FOR EGYPT
    APPENDIX L – WEIGHTED CORRELATION OF LEVEL-1 VARIABLES FOR SOUTH AFRICA
    APPENDIX M – UNWEIGHTED CORRELATION OF LEVEL-2 VARIABLES FOR SOUTH AFRICA

ABOUT THE AUTHOR
PAGE 8
LIST OF TABLES
Table 1. Summary of the Samples Included in the Study .......... 85
Table 2. TIMSS 2003 Eighth-Grade Math Assessment Booklet Assembling Matrix .......... 92
Table 3. Number of Items by Domain and Booklet in TIMSS 2003 Eighth-Grade Math Assessment .......... 93
Table 4. Maximum Number of Score Points in TIMSS 2003 Eighth-Grade Math Assessment .......... 98
Table 5. Mapping of Variables in Carroll's Model With Variables in the Proposed Study .......... 102
Table 6. Description of Contextual and Background Variables .......... 103
Table 7. Weighted Descriptive Statistics for Level-1 Variables for USA (N = 4,414) .......... 133
Table 8. Unweighted Descriptive Statistics for Level-1 Variables for USA (N = 4,414) .......... 133
Table 9. Unweighted Descriptive Statistics for Level-2 Variables for USA (N = 153) .......... 135
Table 10. Parameter Estimates for Unconditional Model for USA .......... 142
Table 11. Parameter Estimates for Models 2-6 (Level-1 Student Background) for USA .......... 143
Table 12. Parameter Estimates for Model 7 (Level-1 Student Background) for USA .......... 145
Table 13. Comparison of R² between Model 7 and Previously Constructed Models for USA .......... 146
Table 14. Parameter Estimates for Level-1 Home Resources Model for USA .......... 147
Table 15. Parameter Estimates for Combined Level-1 Predictors Model for USA .......... 148
Table 16. Parameter Estimates for Level-2 Instructional Practices Models for USA .......... 150
Table 17. Comparison of R² between Level-2 Instructional Practice Models and Foundational Level-1 Model for USA .......... 152
Table 18. Parameter Estimates for the Combined Level-2 Instructional Practices Model for USA .......... 153
Table 19. Comparison of R² between Model 14 and Previously Constructed Models 9-13 for USA .......... 154
Table 20. Parameter Estimates for Teacher Background Models for USA .......... 158
Table 21. Comparison of R² between Level-2 Teacher Background and Foundational Level-1 Model for USA .......... 160
Table 22. Parameter Estimates for the Combined Teacher Background Model for USA .......... 160
Table 23. Comparison of R² between Model 18 and Previously Constructed Models 9 and 15-17 .......... 162
Table 24. Parameter Estimates for School Background Models for USA .......... 164
Table 25. Comparison of R² between Level-2 Teacher Background and Foundational Level-1 Model for USA .......... 166
Table 26. Parameter Estimates for the Combined School Background Model for USA .......... 166
Table 27. Comparison of R² between Model 22 and Previously Constructed Models 9 and 19-21 for USA .......... 167
Table 28. Parameter Estimates for Full Model for USA .......... 170
Table 29. Comparison of R² between Model 23 and Previously Constructed Models 14, 18 and 22 for USA .......... 171
Table 30. Weighted Descriptive Statistics for Level-1 Variables for Canada (N = 6,248) .......... 175
Table 31. Unweighted Descriptive Statistics for Level-1 Variables for Canada (N = 6,248) .......... 175
Table 32. Unweighted Descriptive Statistics for Level-2 Variables for Canada (N = 271) .......... 177
Table 33. Parameter Estimates for Unconditional Model for Canada .......... 184
Table 34. Parameter Estimates for Models 2-6 (Level-1 Student Background) for Canada .......... 185
Table 35. Parameter Estimates for Model 7 (Level-1 Student Background) for Canada .......... 187
Table 36. Comparison of R² between Model 7 and Previously Constructed Models for Canada .......... 188
Table 37. Parameter Estimates for Level-1 Home Resources Model for Canada .......... 189
Table 38. Parameter Estimates for Combined Level-1 Predictors Model for Canada .......... 190
Table 39. Parameter Estimates for Level-2 Instructional Practices Models for Canada .......... 192
Table 40. Comparison of R² between Level-2 Instructional Practice Models and Foundational Level-1 Model for Canada .......... 195
Table 41. Parameter Estimates for the Combined Level-2 Instructional Practices Model for Canada .......... 195
Table 42. Comparison of R² between Model 14 and Previously Constructed Models 9-13 for Canada .......... 197
Table 43. Parameter Estimates for Teacher Background Models for Canada .......... 203
Table 44. Comparison of R² between Level-2 Teacher Background and Foundational Level-1 Model for Canada .......... 205
Table 45. Parameter Estimates for the Combined Teacher Background Model for Canada .......... 205
Table 46. Comparison of R² between Model 18 and Previously Constructed Models 9 and 15-17 .......... 206
Table 47. Parameter Estimates for School Background Models for Canada .......... 209
Table 48. Comparison of R² between Level-2 Teacher Background and Foundational Level-1 Model for Canada .......... 210
Table 49. Parameter Estimates for the Combined School Background Model for Canada .......... 211
Table 50. Comparison of R² between Model 22 and Previously Constructed Models 9 and 19-21 for Canada .......... 212
Table 51. Parameter Estimates for Full Model for Canada .......... 213
Table 52. Comparison of R² between Model 23 and Previously Constructed Models 14, 18 and 22 for Canada .......... 215
Table 53. Weighted Descriptive Statistics for Level-1 Variables for Egypt (N = 1,876) .......... 223
Table 54. Unweighted Descriptive Statistics for Level-1 Variables for Egypt (N = 1,876) .......... 223
Table 55. Unweighted Descriptive Statistics for Level-2 Variables for Egypt (N = 69) .......... 225
Table 56. Parameter Estimates for Unconditional Model for Egypt .......... 229
Table 57. Parameter Estimates for Models 2-6 (Level-1 Student Background) for Egypt .......... 230
Table 58. Parameter Estimates for Model 7 (Level-1 Student Background) for Egypt .......... 232
Table 59. Comparison of R² between Model 7 and Previously Constructed Models for Egypt .......... 233
Table 60. Parameter Estimates for Level-1 Home Resources Model for Egypt .......... 234
Table 61. Parameter Estimates for Combined Level-1 Predictors Model for Egypt .......... 235
Table 62. Parameter Estimates for Level-2 Instructional Practices Models for Egypt .......... 237
Table 63. Comparison of R² between Level-2 Instructional Practice Models and Foundational Level-1 Model for Egypt .......... 238
Table 64. Parameter Estimates for the Combined Level-2 Instructional Practices Model for Egypt .......... 238
Table 65. Comparison of R² between Model 14 and Previously Constructed Models 9-13 for Egypt .......... 239
Table 66. Parameter Estimates for Teacher Background Models for Egypt .......... 241
Table 67. Parameter Estimates for the Combined Teacher Background Model for Egypt .......... 242
Table 68. Comparison of R² between Model 18 and Previously Constructed Models 9 and 16-17 for Egypt .......... 242
Table 69. Parameter Estimates for School Background Models for Egypt .......... 243
Table 70. Comparison of R² between Level-2 Teacher Background and Foundational Level-1 Model for Egypt .......... 244
Table 71. Parameter Estimates for the Combined School Background Model for Egypt .......... 245
Table 72. Comparison of R² between Model 22 and Previously Constructed Models 9 and 19-21 for Egypt .......... 245
Table 73. Parameter Estimates for Full Model for Egypt .......... 246
Table 74. Comparison of R² between Model 23 and Previously Constructed Models 14, 18 and 22 for Egypt .......... 247
Table 75. Weighted Descriptive Statistics for Level-1 Variables for South Africa (N = 1,564) .......... 249
Table 76. Unweighted Descriptive Statistics for Level-1 Variables for South Africa (N = 1,564) .......... 249
Table 77. Unweighted Descriptive Statistics for Level-2 Variables for South Africa (N = 52) .......... 250
Table 78. Comparisons of Results for South Africa .......... 255
Table 79. Parameter Estimates for Unconditional Model for South Africa .......... 259
Table 80. Parameter Estimates for Models 2-6 (Level-1 Student Background) for South Africa .......... 260
Table 81. Parameter Estimates for Model 7 (Level-1 Student Background) for South Africa .......... 262
Table 82. Comparison of R² between Model 7 and Previously Constructed Models for South Africa .......... 263
Table 83. Parameter Estimates for Level-1 Home Resources Model for South Africa .......... 264
Table 84. Parameter Estimates for Combined Level-1 Predictors Model for South Africa .......... 265
Table 85. Parameter Estimates for Level-2 Instructional Practices Models for South Africa .......... 267
Table 86. Comparison of R² between Level-2 Instructional Practice Models and Foundational Level-1 Model .......... 269
Table 87. Parameter Estimates for the Combined Level-2 Instructional Practices Model for South Africa .......... 271
Table 88. Comparison of R² between Model 14 and Previously Constructed Models 9-13 for South Africa .......... 272
Table 89. Parameter Estimates for Teacher Background Models for South Africa .......... 273
Table 90. Parameter Estimates for the Combined Teacher Background Model for South Africa .......... 276
Table 91. Comparison of R² between Model 18 and Previously Constructed Models 9 and 15-17 for South Africa .......... 277
Table 92. Parameter Estimates for School Background Models for South Africa .......... 277
Table 93. Comparison of R² between Level-2 Teacher Background and Foundational Level-1 Model for South Africa .......... 279
Table 94. Parameter Estimates for the Combined School Background Model for South Africa .......... 280
Table 95. Comparison of R² between Model 22 and Previously Constructed Models 9 and 19-21 for South Africa .......... 280
Table 96. Parameter Estimates for Full Model for South Africa .......... 282
Table 97. Comparison of R² between Model 22 and Previously Constructed Models 9 and 19-21 for South Africa .......... 282
LIST OF FIGURES
Figure 1. Flowchart for managing multiple databases from TIMSS 2003 .......... 123
Figure 2. Histogram for level-1 residuals for USA .......... 137
Figure 4. Histogram for level-2 intercept residuals for USA .......... 138
Figure 5. Level-2 intercept residuals by predicted intercept for USA .......... 139
Figure 6. Histogram for level-2 slope (valuing math) residuals for USA .......... 139
Figure 7. Level-2 slope (valuing math) residuals by predicted math achievement for USA .......... 140
Figure 8. Histogram for level-2 slope (time on homework) residuals for USA .......... 140
Figure 9. Level-2 slope (time on homework) residuals by predicted math achievement for USA .......... 141
Figure 10. Interaction between valuing of math and opportunity to learn measurement for USA .......... 155
Figure 11. Interaction between time student spent on homework and opportunity to learn geometry for USA .......... 156
Figure 12. Interaction between self-confidence in learning math and opportunity to learn data for USA .......... 157
Figure 13. Interaction between time student spent on homework and teacher reported readiness to teach number for USA .......... 163
Figure 14. Interaction between class size for math instruction and self-confidence in learning math for USA .......... 168
Figure 15. Interaction between class size for math instruction and valuing of math for USA .......... 169
Figure 16. Interaction between opportunity to learn geometry by time student spent on homework for USA .......... 172
Figure 17. Interaction between teacher reported readiness to teach number by time student spent on homework for USA .......... 173
Figure 18. Histogram for level-1 residuals for Canada .......... 179
Figure 19. Level-1 residuals by predicted math achievement for Canada .......... 179
Figure 20. Histogram for level-2 intercept residuals for Canada .......... 180
Figure 21. Level-2 intercept residuals by predicted intercept for Canada .......... 180
Figure 22. Histogram for level-2 slope (gender) residuals for Canada .......... 181
Figure 23. Level-2 slope (gender) residuals by predicted math achievement for Canada .......... 181
Figure 24. Histogram for level-2 slope (extra lessons) residuals for Canada .......... 182
Figure 25. Level-2 slope (extra lessons) residuals by predicted math achievement for Canada .......... 182
Figure 26. Histogram for level-2 slope (self-confidence) residuals for Canada .......... 183
Figure 27. Level-2 slope (self-confidence) residuals by predicted math achievement for Canada .......... 183
Figure 28. Interaction between average math instructional hours per year and gender for Canada .......... 198
Figure 29. Interaction between opportunity to learn algebra and extra math lessons for Canada .......... 199
Figure 30. Interaction between opportunity to learn geometry and extra math lessons for Canada .......... 200
Figure 31. Interaction between opportunity to learn data and self-confidence for Canada .......... 201
Figure 32. Interaction between opportunity to learn measurement and self-confidence for Canada .......... 202
Figure 33. Interaction between teacher reported preparation to teach math and student self-confidence in learning math for Canada .......... 207
Figure 34. Interaction between types of math-related professional development and student self-confidence in learning math for Canada .......... 208
Figure 35. Interaction between average math instructional hours per year and gender for Canada .......... 216
Figure 36. Interaction between teacher reported preparation to teach math content and student self-confidence in learning math for Canada .......... 217
Figure 37. Interaction between opportunity to learn data and gender for Canada .......... 218
Figure 38. Interaction between opportunity to learn data and self-confidence in learning math for Canada .......... 219
Figure 39. Interaction between opportunity to learn geometry and extra math lessons for Canada .......... 220
Figure 40. Interaction between opportunity to learn algebra and extra math lessons for Canada .......... 221
Figure 41. Histogram for level-1 residuals for Egypt .......... 227
Figure 42. Level-1 residuals by predicted math achievement for Egypt .......... 227
Figure 43. Histogram for level-2 intercept residuals for Egypt .......... 228
Figure 44. Level-2 intercept residuals by predicted intercept for Egypt .......... 228
Figure 45. Histogram for level-1 residuals for South Africa .......... 253
Figure 46. Level-1 residuals by predicted math achievement for South Africa .......... 253
Figure 47. Histogram for level-2 intercept residuals for South Africa .......... 257
Figure 48. Level-2 intercept residuals by predicted intercept for South Africa .......... 257
Figure 49. Histogram for level-2 slope (extra lessons) residuals for South Africa .......... 258
Figure 50. Level-2 slope (extra lessons) residuals by predicted math achievement for South Africa .......... 258
Figure 51. Interaction between opportunity to learn data and student self-confidence in learning math in South Africa .......... 270
Correlates of Mathematics Achievement in Developed and Developing Countries: An HLM Analysis of TIMSS 2003 Eighth-Grade Mathematics Scores

Ha T. Phan

ABSTRACT

Using eighth-grade mathematics scores from TIMSS 2003, a large-scale international achievement assessment database, this study investigated correlates of math achievement in two developed countries, Canada and the United States, and two developing countries, Egypt and South Africa. Variation in math achievement within and between schools in each country was accounted for by a series of two-level HLM models. Specifically, there were five sets of HLM models representing factors related to student background, home resources, instructional practices, teacher background, and school background. In addition, a final model was built by including all the statistically significant predictors from the earlier models to predict math achievement. Findings from this study suggested that whereas the instructional practices model worked best for the United States and the teacher background model was the most efficient and parsimonious model for predicting math achievement in Egypt, the final model was the best model for predicting math achievement in Canada and South Africa. These findings provide empirical evidence that different models are needed to account for factors related to achievement in different countries. This study therefore highlights the point that policy makers and educators in developing countries should not base their educational decisions and educational reform projects solely on the research findings of developed countries; rather, they need to use country-specific findings to support their educational decisions. This study also provides a methodological framework for applied researchers to evaluate the effects of background and contextual factors on students' math achievement.
CHAPTER ONE

INTRODUCTION

Statement of Problem

Students' mathematics achievement is often associated with the future economic power of a country (Baker & LeTendre, 2005; Bush, 2001; Heyneman & Loxley, 1982, 1983; Wobmann, 2003). Thus, the desire to understand and identify factors that may have meaningful and consistent relationships with math achievement is widely shared among national leaders, policy makers, and educators around the world. For example, in 2007 more than 60 countries participated in the Trends in International Mathematics and Science Study (TIMSS) (TIMSS, 2007). By collaboratively supporting and participating in a large-scale international achievement study such as TIMSS, it was hoped that the rich data (achievement and other contextual data) collected from such a study could illuminate important correlates of math achievement both within and between countries that would "otherwise escape detection" (Wagemaker, 2003, p. 1).

Unfortunately, despite the fact that data from these international achievement studies have been made publicly available for all participating countries [National Center for Education Statistics (NCES), 2007], only a small number of these countries have been included in subsequent research studies. A review of the existing literature suggested that low-income countries, as well as those that performed poorly in international achievement studies, such as South Africa, Chile, and Egypt, were rarely included in international research studies (details of these studies are provided in Chapter Two). In contrast, researchers tended to focus on a small group of developed and high-performing countries such as Japan, Korea, Hong Kong, Singapore, Germany, Canada, and the United States. Such bias in international achievement research meant that recent research findings related to students' math achievement were based mostly on students in developed countries and lacked representation from developing countries. As a consequence, this lack of research findings related to students' math achievement in developing countries has led many of these countries to base their educational policy decisions, or even to implement educational reform projects, on the research findings and educational models of developed countries (Riddell, 1997). Such bases were problematic because countries differ in their characteristics, and a model that worked in a developed country might not work in a developing country (Bryan et al., 2007; Delaney, 2000; Watkins & Biggs, 2001).

Given this problem, it is very important for research studies related to international achievement to include a more diverse sample of countries (i.e., both developing and developed countries) and to utilize analytic models that yield country-specific research findings. In doing so, policy makers and educators from the developing countries included in these studies can use the research findings pertaining to their own countries to support their educational decisions.

Purpose of the Study

The purpose of this study was to investigate correlates of math achievement in both developed and developing countries. Specifically, two developed countries and two developing countries that participated in the TIMSS 2003 eighth-grade math assessment were selected for this study. For each country, a series of two-level models was constructed using background and contextual factors at both the student and the classroom/teacher/school levels to account for the variance in eighth-grade students' math achievement within and between schools. Ultimately, this study aimed to produce country-specific research findings related to eighth-grade students' math achievement that can be used directly by national leaders, policy makers, and educators from these countries, especially developing countries, to support their educational decisions. Finally, by visually and descriptively examining patterns of relationships between eighth-grade math achievement and contextual factors, this study hoped to identify important trends in these relationships among developed and developing countries, as well as differences between the two groups.

Research Questions

The study aimed to address the following set of research questions:

1) To what extent are student background variables (i.e., gender, self-confidence in learning math, valuing of math, time on math homework, and tutoring in math) associated with TIMSS 2003 eighth-grade math scores in each country?

2) To what extent are home resources variables (i.e., availability of a calculator, computer, and desk for student use) associated with TIMSS 2003 eighth-grade math scores in each country?

3) To what extent are instructional variables (i.e., opportunity to learn, activities in math lessons, amount of homework assigned, and instructional time) associated with TIMSS 2003 eighth-grade math scores in each country?

4) To what extent are teacher-related variables (i.e., preparation to teach, readiness to teach, and professional development) associated with TIMSS 2003 eighth-grade math scores in each country?

5) To what extent are school-related variables (i.e., class size, school resources for math instruction, and math instructional limitations) associated with TIMSS 2003 eighth-grade math scores in each country?

Rationale for the Study

Several factors influenced the decision to use the TIMSS 2003 eighth-grade math data from four countries to investigate the relationships between student math achievement and contextual as well as background factors across countries. First, mathematics was the subject of choice because of the increasing national and international focus on math education (Baker & LeTendre, 2005; Bush, 2001; Heyneman & Loxley, 1982, 1983; Wobmann, 2003). In the United States, the topic of how to improve student achievement in math has been hotly debated for decades. The No Child Left Behind Act signed by President George W. Bush in 2001 [No Child Left Behind (NCLB), 2001] was one of many examples highlighting the importance of math as a school subject that has attracted attention from the country's top national leader.

Second, at the time this study was conducted, the TIMSS 2003 database provided the largest and most ambitious set of international achievement and background data related to students' math and science achievement at fourth grade, eighth grade, and twelfth grade (TIMSS, 2003). In addition, the influence of previous TIMSS findings on international education, including U.S. education, has been widely acknowledged. As indicated in the Benchmarking Introduction of the Trends in International Math and Science Study (TIMSS, 1999), "TIMSS results, which were first reported in 1996, have stirred debate, spurred reform efforts, and provided important information to educators and decision makers around the world" (p. 16). In fact, the United States Department of Education (USDOE), through its federal entity, the National Center for Education Statistics (NCES), strongly encouraged educational researchers to use large-scale existing secondary data such as TIMSS 2003 for research because these data "provide consistent, reliable, complete, and accurate indicators of education status and trends" (Stigler et al., 1999, p. 1).

Third, eighth-grade math data were selected because of the importance of the transition period from elementary school to high school, where curriculum differentiation in math knowledge and skills is greatest (Rodriguez, 2004). Also, as Reynolds (1991) asserted, the middle-school years are a critical period for students in terms of learning math. How well students perform in math during their middle-school years is likely to determine their choices and enrollment in high school math courses. This is because courses in math are often sequential, and therefore access to advanced math courses in high school depends on students' success in lower-level math courses in middle school (Singh, Granville, & Dika, 2002). For example, if a student performs poorly in algebra at eighth grade, he or she is much less likely to enroll in the various math courses offered in high school. These curricular opportunities and choices, in turn, further influence students' decisions to enter mathematics-related fields of study at the postsecondary and occupational levels. Thus, eighth grade is an important time point at which to study the complex interaction of contextual factors that are potentially related to students' math achievement.
6 Last but not least, four countries (two developing countries and two developed countries) were sufficient for this study because this selection allowed the study to be conducted within a reasonable amount of time and resources while satisfying the important inclusion criterion of sufficient sample size and representative samples from each category of country (i.e., deve loping and developed countries). Theoretical Framework This research study was guided by the th eoretical framework of CarrollÂ’s (1963) Model of School Learning which was proposed to explain why students succeed or fail in their learning at school (Carroll, 1963). The model postulated five important factors that were theoretically related to studentsÂ’ succes s in learning: (1) Aptitude Â– the amount of time needed to learn the task under optimal instructional conditions, (2) Ability to understand instruction, (3) Perseverance Â– the amount of time the le arner is willing to engage actively in learning, (4 ) Opportunity to learn Â– time allowed for learning, and (5) Quality of instruction Â– the extent to which in struction is presented so that no additional time is required for mastery beyond that required in regard to aptitude. Of these five factors, aptitude, ability to understand instruction, and pe rseverance are related to the students; whereas opportunity to learn and qua lity of instruction are concerned with external conditions. A thorough examination of CarollÂ’s (1963) model of school learning and a comprehensive review of existing literature related to each of th e five factors of the model are presented in Chapter Two. Limitations The potential threats to the internal and external validity of this study are present at various research stages: instrument deve lopment, data collection, data analysis, and
data interpretation. At the instrument development stage, the process of test adaptation and translation from the source language (i.e., English) to the other target languages of the test could have made the assessment unintentionally harder or easier (Hambleton, Merenda, & Spielberger, 2005). With regard to test item format, Wang (2001) raised the concern that students from some countries might be more familiar with test items in the constructed-response format, whereas their peers in other countries might be more familiar with test items in the multiple-choice format. Thus, such instrument-related issues could negatively influence the fairness of student mathematics test scores. At the data collection stage, any discrepancies in the process of collecting TIMSS 2003 data across countries could affect the validity of the data. Because TIMSS 2003 collected data on a large scale (i.e., in 48 countries), from multiple sources (i.e., from students, teachers, and school principals), and on different time schedules (i.e., in October and November for Southern Hemisphere countries and in April, May, and June for Northern Hemisphere countries), the process of monitoring data quality could be challenging. In addition, due to country differences, some countries opted not to administer certain test or questionnaire items to their participants, resulting in some countries having no data for a set of variables (TIMSS, 2003). At the data analysis stage, the massive amount of missing data due to sampling procedures (i.e., multistage, stratified, and unequal probability), assessment design (each student took only one test booklet, a subset of the entire set of test items), and nonresponses from participants could negatively affect the accuracy of statistical results, regardless of the missing data treatment method (imputation of missing data or deletion of all missing data).
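To make concrete how the two missing-data treatments just mentioned can behave differently, consider the following small sketch. The data, function names, and scores are invented for illustration: mean imputation leaves the sample mean unchanged but shrinks the variance relative to listwise deletion, which is one way downstream statistical results can be distorted either way.

```python
def observed(values):
    """Keep only the non-missing observations."""
    return [v for v in values if v is not None]

def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Hypothetical math scores with two missing responses.
scores = [400, 500, None, 600, None, 450]

# Treatment 1: listwise deletion analyzes only the complete cases.
deleted = observed(scores)

# Treatment 2: mean imputation fills gaps with the observed mean.
# The mean is preserved, but the variance is artificially reduced.
fill = sum(deleted) / len(deleted)
imputed = [fill if v is None else v for v in scores]
```

Either choice alters the analyzed sample, which is why the text notes that accuracy can suffer regardless of the treatment chosen.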
Finally, the threats to the validity of the study at the data interpretation stage could stem from variation in the operationalization of the same constructs among participants and across countries. For example, for some teachers, classroom activities included working on homework, whereas for other teachers, classroom activities were restricted to only school work. Similarly, instructional time could be defined as actual teaching time in some countries and as teaching time plus class management in other countries. Last but not least, the results of this study are based on the relationship between student mathematics test data and contextual data which were self-reported by students, teachers, and school principals. Self-reported data, according to Rosenberg, Greenfield, and Dimick (2006), have several potential sources of bias, such as selective memory (remembering or not remembering experiences or events that occurred sometime in the past), telescoping (recalling events that occurred at one time as if they had occurred at another time), and social desirability (reporting behaviors that tend to be widely accepted by certain social groups rather than the behaviors actually exhibited by the respondents). Thus, it is important to interpret the findings of this study in light of these limitations.

Definitions

Developed country: According to the World Bank's (2007) world development indicators, developed countries are countries with high-income economies. The use of the term developed country, however, is not intended to imply that developed economies have reached a preferred or final stage of development (The World Bank, 2007). Developing country: According to the World Bank's (2007) world development indicators, developing countries are countries with low-income and middle-income
economies. The use of the term developing country, however, is not intended to imply that all economies in the group are experiencing similar development (The World Bank, 2007). Math achievement: For this study, math achievement is defined as the overall mathematics scores of eighth-grade students who participated in the TIMSS 2003 assessment. The overall mathematics scores can be computed by averaging students' scores on five mathematics content domains: algebra, number, data, geometry, and measurement. Eighth-grade students: In the TIMSS study, eighth-grade students are defined as all students enrolled in the upper of the two adjacent grades that contained the largest proportion of 13-year-old students at the time of testing (TIMSS, 2003).
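The averaging in the math achievement definition above is a simple mean across the five content domains. A minimal sketch (the function name and the example scores are invented; TIMSS itself reports scaled scores, so this only illustrates the arithmetic):

```python
def overall_math_score(domains):
    """Overall math score as the mean of the five TIMSS content domain
    scores: algebra, number, data, geometry, and measurement."""
    assert len(domains) == 5, "expected one score per content domain"
    return sum(domains.values()) / len(domains)

# Hypothetical domain scores for one student.
student = {"algebra": 510, "number": 495, "data": 530,
           "geometry": 480, "measurement": 505}
# overall_math_score(student) -> 504.0
```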
CHAPTER TWO

LITERATURE REVIEW

Introduction

In this chapter, the literature review is presented in six major sections: history of international mathematics achievement assessments, theoretical framework, student-related factors and student achievement, instructional practices-related factors and student achievement, teacher background-related factors and student achievement, and school background-related factors and student achievement. Finally, the chapter concludes with a summary of significant findings synthesized from this comprehensive literature review. To broaden the scope of this literature review, the literature search was open to various student achievement outcomes such as math, science, reading, literacy, and civics. Also, empirical studies that examined student achievement outcomes at different grade levels (e.g., kindergarten to grade 12) and across countries were included in the review. It is important to note, however, that although the current literature search allowed for a broad inclusion of empirical studies related to student achievement, where possible, this synthesis of literature focused more on student mathematics achievement at the middle school grades within the United States and across countries. The rationale for this selection focus was specified in Chapter One.

History of International Mathematics Achievement Assessments

Although the beginnings of internationalism in education might be traced to "ancient times," the idea of conducting an official, large-scale international achievement
study did not emerge until after World War II (Encyclopedia of Educational Research [EER], 1960, p. 618). In fact, the first large-scale international achievement assessment, the Pilot Twelve-Country Study, was developed in 1959 with extensive support from the United Nations Educational, Scientific and Cultural Organization (UNESCO) (EER, 1960). With the Pilot Twelve-Country Study, UNESCO aimed to promote the conviction that "educational systems cannot be transferred from one country to another, but ideas, practices, and devices developed under one set of conditions can always prove suggestive for improvement even where the conditions are somewhat different" (EER, 1960, p. 621). The Pilot Twelve-Country Study was originally constructed in French, English, and German and then translated into eight languages by the individual participating countries. The test was administered in 1961 to representative samples of 13-year-old students across the 12 countries: Belgium, England, Finland, France, the Federal Republic of Germany, Israel, Poland, Scotland, Sweden, Switzerland, the United States, and Yugoslavia. This study assessed students' achievement in five subject areas: mathematics, reading comprehension, geography, science, and nonverbal ability. In addition, the test had two specific aims: (1) to investigate whether some indications of intellectual functioning could be deduced from the patterns of student responses across countries; and (2) to discover the possibilities and the difficulties attending a large-scale international study (Forshay et al., 1962). The success of the first large-scale international achievement assessment shed new light on international education. As Forshay (1962) put it, "If custom and law define what is educationally allowable within a nation, the educational systems beyond one's national boundaries suggest what is educationally possible" (p. 7). Within approximately
50 years of development, 29 large-scale international achievement assessments were conducted, covering a vast array of subject areas including Math, Science, Reading, English, Literature Education, English as a Foreign Language, French as a Foreign Language Education, Writing, and Civic Education (International Association for the Evaluation of Educational Achievement [IEA], 2007; International Assessment of Education Progress [IAEP]; Organization for Economic Cooperation and Development [OECD], 2007). The target populations of these assessments were also expanded to students of the fourth, eighth, and twelfth grades in all countries. The popularity of international achievement assessments is also reflected in the increasing number of participating countries over the years. By decade, the largest number of countries participating in an international achievement assessment increased from 12 in the 1960s, to 19 in the 1970s, to 24 in the 1980s, to 46 in the 1990s, and to 60 in the 2000s (IEA, 2007). These numbers suggest that international achievement assessments have quickly gained special attention in education across countries.

Importance of International Mathematics Achievement Assessments

A review of the history of international achievement assessments yielded an interesting finding. Of the 29 international achievement assessments conducted by the IEA, 13 were mathematics assessments (IEA, 2007). In fact, since 1995, the Trends in International Mathematics and Science Study (TIMSS) has been implemented regularly, on a four-year cycle (TIMSS, 2007). It is worth noting that the number of countries participating in the TIMSS has also grown significantly over time. In the most recent administration of the TIMSS, in 2007, more than 60 countries participated in the study,
making it the largest and most ambitious international achievement study in the history of international achievement assessments. Why have international mathematics achievement assessments attracted so much attention from countries around the world? The chief reason behind the importance of international mathematics achievement assessment is not new and has been discussed for decades. Much research has linked student mathematics achievement with the future economic power as well as the security of a country (Akiba, LeTendre, & Scribner, 2007; Baker & LeTendre, 2005; Carter & O'Neill, 1995; Heyneman & Loxley, 1982, 1983; Wobmann, 2003). For this reason, differences in student mathematics achievement across countries were often interpreted as a national issue rather than a mere comparison of student achievement. For example, in the United States, it was not uncommon for national leaders to address the issue of students' poor performance in international mathematics assessments in national agendas (see the nation's response to the Sputnik crisis in the 1960s [EER, 1960], to A Nation at Risk in 1983 [National Commission on Educational Excellence, 1983], and then to Goals 2000 in 1994 [Goals 2000, 1994]). Recently, President George W. Bush, after taking office in 2001, stated: "Quality education is a cornerstone of America's future and my Administration, and the knowledge-based workplace of the 21st century requires that our students excel at the highest levels in math and science" (Bush, 2001, p. 1). As a result of such national addresses, a series of educational policies were issued in order to improve students' performance in mathematics. Under the Bush administration, the national act of No Child Left Behind was implemented as a primary solution for the improvement of educational quality in the United States.
Across countries in the world, concerns about what students know and can do in math, as well as what can be done to improve student math ability, have also been addressed at the national level (Beaton, 1998). As highlighted in the TIMSS 1999 Benchmarking report, the differences in students' performance in international mathematics assessments were taken seriously by many countries: "TIMSS results, which were first reported in 1996, have stirred debate, spurred reform efforts, and provided important information to educators and decision makers around the world" (TIMSS 1999 Benchmarking Introduction, 1999, p. 16). Results from international mathematics achievement assessments were used for many purposes, including making changes in educational policies, setting performance standards for students, comparing with and validating national mathematics assessments, and conducting various educational research studies (Baker & LeTendre, 2005; O'Leary, 2002; Rodriguez, 2004; TIMSS, 2003). Given the importance and profound impact of student mathematics achievement on national economic growth and security (Akiba, LeTendre, & Scribner, 2007; Baker & LeTendre, 2005; Carter & O'Neill, 1995; Heyneman & Loxley, 1982, 1983; Wobmann, 2003), and the foreseen rapid changes within and across countries in the 21st century, it is important that educational researchers across the world, collaboratively and separately, continuously conduct empirical research to identify factors associated with student mathematics achievement so as to maximize student learning in mathematics.

Theoretical Framework

In an attempt to explain why students succeed or fail in their learning at school, John B. Carroll developed A Model of School Learning in 1963 (Carroll, 1963). This
model proposes that a student will succeed in learning a given task to the extent that he/she actually spends the amount of time he/she needs to learn the task, with time defined as the time during which the student is actively engaged in his/her learning (Carroll, 1963). According to Carroll (1963), there are five categories of variables associated with a student's success in learning: (1) Aptitude – the amount of time needed to learn the task under optimal instructional conditions, (2) Ability to understand instruction, (3) Perseverance – the amount of time the learner is willing to engage actively in learning, (4) Opportunity to learn – time allowed for learning, and (5) Quality of instruction – the extent to which instruction is presented so that no additional time is required for mastery beyond that required in regard to aptitude (Carroll, 1963). The five categories of variables, which can be expressed in terms of time, can be worked into a formula with degree of learning as a function of the ratio of the amount of time a student actually spends on the learning task to the total amount of time the student needs to learn the task. Thus:

Degree of learning = f(Time actually spent / Time needed)

The numerator of this fraction is equal to the smallest of three quantities: (1) opportunity to learn, (2) perseverance, and (3) aptitude after adjustment for quality of instruction and ability to understand instruction. The last quantity, aptitude, is also the denominator of the fraction (Carroll, 1963). Inferring from the Model of School Learning, the first three categories of variables (i.e., aptitude, ability to understand instruction, and perseverance) are related to the students, whereas the last two categories of variables are concerned with external
conditions (i.e., opportunity to learn and quality of instruction). It is worthy of note, however, that of these categories of variables, opportunity to learn, quality of instruction, and perseverance are more amenable to intervention and manipulation than aptitude and ability to understand instruction, which tend to be relatively resistant to change (Carroll, 1963).

Student-related Factors and Student Achievement

Students' Gender

Evidence accumulated through multiple research studies suggests that, universally, a gender gap exists in math achievement (Beaton et al., 1996; Mullis et al., 2000; Peterson & Fennema, 1985; Rodriguez, 2004). However, the size and direction of the achievement gap vary across samples of students and tests. For example, Bielinski and Davison (2001) found that the gender gap, albeit small, favors females in elementary school, males in high school, and neither group in middle school. In contrast, Fennema et al. (1998) observed that the gender gap in math achievement increases during middle school and becomes pronounced at the higher educational levels. Generally, research findings in this area support the view that boys tend to perform better than girls on mathematics tasks such as problems that include spatial representation, measurement, and proportions, as well as complex problems, whereas girls tend to score higher on computations, simple problems, and graph reading (Beaton et al., 1996). Similarly, using the Scholastic Aptitude Test (SAT) and classifying the math items into six levels of cognitive complexity (i.e., a zero was assigned to items measuring recall of factual knowledge, and a five to items requiring application of higher mental processes), Harris and Carlton (1993) reported that females outperformed males on the three lowest levels,
whereas males outperformed females on the two highest levels, after the total test scores were controlled. When these SAT math items were grouped into two categories, applied or real-world items and abstract or textbook items, the researchers found that females outscored males on abstract items, whereas males outscored females on applied items (Harris & Carlton, 1993). Much effort has also been devoted to investigating the reasons for the differences in mathematics achievement between boys and girls in schools. For example, Davis and Carr (2001) suggested that differences in the strategies that early elementary school age girls and boys use to tackle math problems are related to their achievement gap. Their study showed that boys are more likely to retrieve information from memory and use covert cognitive strategies, such as decomposition, whereas girls are more likely to use overt strategies, such as counting on fingers, or manipulative strategies to solve mathematics problems. Test item format is another factor that has often been linked with the gender gap in math achievement. Bolger and Kellaghan (1990), for example, showed that boys perform better than girls on multiple-choice items and girls perform relatively better than boys on open-ended items. Findings from a more recent study conducted by Wester and Henriksson (2000), however, did not support this conclusion. In fact, Wester and Henriksson (2000) found that there was no significant change in gender differences when the item format was altered; females seemed to perform slightly better than males when using multiple-choice items. Finally, using three nationally representative achievement databases, Bielinski and Davison (2001) examined test item difficulty as a plausible reason for the gender gap in math achievement. Evidence from this study suggested an association between item difficulty and sex differences.
That is, easy test items tended to be easier for females than males, and hard test items tended to be harder for females than males. Therefore, if a math test consists of more easy items than hard items, then females will outperform males on such a test, and vice versa.

Students' Self-confidence

Many research studies have investigated the relationship between students' self-confidence in learning math and their math achievement. For example, evidence from the study of House (2006) suggested that higher self-confidence in learning math was significantly associated with higher math achievement in adolescent students. Similarly, based on a study of middle school students in Germany, Koller, Baumert and Schnabel (2001) concluded that students with higher initial levels of interest in learning math were more likely to enroll in higher math courses. Likewise, there was also evidence that self-efficacy in learning math was significantly related to math achievement in middle school students (Pajeres & Graham, 1999). It is worth noting, however, that such a clear and positive association between student self-confidence in learning math and math achievement tended to be observed more frequently within countries. At the between-country level, the relationship between student self-confidence in learning math and math achievement appeared to be more complex. For instance, whereas self-confidence in learning math was found to be strongly and positively related to math achievement for students in Norway and Canada, this was not the case for students in the United States (Ercikan, McCreith & Lapointe, 2005). A similar pattern of results was also reported in the study of Mullis, Martin, Gonzalez and Chrostowski (2004), where the four countries with the lowest percentages of students in the high self-confidence category (i.e., Chinese Taipei, Hong Kong SAR, Japan, and Korea) all had high average math achievement. Likewise, in
examining the relationship between math achievement and students' self-perceived competence in learning math across the 38 countries that participated in the TIMSS 1995, Shen and Pedulla (2000) and Shen (2002) showed that a negative relationship between self-perceived competence in learning math and math achievement was present between countries. In an attempt to explain such interesting patterns of relationship, Mullis, Martin, Gonzalez and Chrostowski (2004) suggested that in Asian Pacific countries, students may share cultural traditions that encourage modest self-confidence and thus tended to rate themselves low in self-confidence in learning math while performing high in math assessments. Congruent with this explanation were findings from a study by Leung (2002), in which the researcher observed that Japanese students tended to report more often that they were not doing well in math even though they scored high on mathematics tests. The researcher attributed such patterns of students' responses regarding their level of self-confidence in learning math to the unique culture of this region of the world, where expectations for student achievement in math tend to be high (Leung, 2002).

Students' Valuing of Learning

Students' valuing of learning, as defined by Ma and Kishor (1997), refers to students' affective responses to the ease or difficulty, as well as the importance or unimportance, of a certain school subject. In the existing literature, students' valuing of learning is also referred to as students' attitudes, beliefs, or perceptions towards learning. Thus, statements such as "I enjoy learning math" or "I think learning math will help me in my daily life" can be taken to express students' valuing of math or students' attitudes towards learning math.
Students' valuing of learning has often been viewed as an important determinant of student achievement. As indicated by Ma and Kishor (1997):

Teachers and other mathematics educators generally believe that children learn more effectively when they are interested in what they learn and that they will achieve better in mathematics if they like mathematics. Therefore, continual attention should be directed towards creating, developing, maintaining and reinforcing positive attitudes (p. 27)

Empirical evidence, however, has shown little consensus concerning the relationship between students' attitudes toward learning and student achievement. Abu-Hilal (2000), for example, asserted that students' perceptions regarding the importance of mathematics exerted a significant effect on math achievement. Similarly, findings from the study of Beaton et al. (1996) revealed that eighth-grade students with more positive attitudes had higher average mathematics achievement. In a meta-analysis, Ma and Kishor (1997) examined 113 studies that investigated the effects of students' attitudes on math achievement for the period from 1966 to 1993 and found that approximately 90% of the studies showed a positive relationship between attitudes and achievement. The overall weighted mean effect size obtained from this meta-analysis was 0.12, with a 95% confidence interval from 0.12 to 0.13, suggesting a positive, albeit not strong, relationship between attitudes and math achievement. More evidence supporting significant relationships between the value students attach to math and their achievement in math can also be found in Marsh, Hau, and Kong (2002), Rao, Moely, and Sachs (2000), and Singh, Granville, and Dika (2002).
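The weighted mean effect size reported by Ma and Kishor can be illustrated with standard fixed-effect, inverse-variance pooling, in which each study's effect is weighted by the reciprocal of its sampling variance. Their exact weighting scheme is not specified here, so this is a generic sketch with invented numbers, not a reproduction of their analysis:

```python
import math

def pooled_effect(effects, variances):
    """Inverse-variance weighted mean effect size with a 95% CI.

    A textbook fixed-effect meta-analytic pooling: larger (less variable)
    studies get proportionally more weight, and the pooled standard error
    shrinks as studies accumulate.
    """
    weights = [1.0 / v for v in variances]
    mean = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled mean
    return mean, mean - 1.96 * se, mean + 1.96 * se
```

With many studies, the confidence interval becomes very tight around the pooled mean, which is how a meta-analysis of 113 studies can report an interval as narrow as the one quoted above.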
Opposing this view, Shen (2002) conducted a study on the relationship between eighth-grade students' achievement and their self-perceptions of learning math across the 38 countries that participated in the TIMSS 1995 and concluded that empirical evidence is insufficient to support the claim that attitudes and achievement are strongly related. Although the researcher did find a positive relationship between math achievement and three measures of self-perception (i.e., how much students like the subject, their self-perceived competence in the subject, and their perceived easiness of the subject) for within-country data, the between-country analysis yielded the opposite finding: there was a negative relationship between self-perceptions and achievement. The correlation between math achievement and how much students like mathematics cross-nationally was −.68. The two countries with the highest scores for liking math (i.e., Morocco and South Africa) were also the countries that had the poorest performance in math. When correlating students' perceived easiness of math with math achievement across countries, a stronger negative correlation (−.72) was observed, indicating that in poor-performing countries students were likely to think of math as being easy, whereas in high-performing countries, students were likely to think of math as being difficult. Explaining this negative pattern of relationship between math achievement and students' self-perceptions, Shen (2002) suggested that this pattern might reflect low academic standards and expectations in low-performing countries and high academic standards and expectations in high-performing countries. Evidence from the cross-national studies of Papanastasiou (2000, 2002) also did not support the contention that a positive attitude towards learning is associated with greater student achievement. Of the three countries included in Papanastasiou's (2000) study
(i.e., Cyprus, the United States, and Japan), Cyprus had the highest proportion of students who reported positive or strongly positive attitudes toward learning mathematics (79%, as compared to 70% for the U.S. and 51% for Japan). However, Cyprus students' average mathematics score was the lowest among the three countries (474, as compared to 500 for the U.S. and 605 for Japan). Later, in 2002, using the TIMSS 1999 data, Papanastasiou replicated this study on samples of students from Cyprus, Hong Kong, and the United States. Interestingly, the new study yielded similar results. That is, having the largest proportion of students reporting positive or strongly positive attitudes towards math did not make Cyprus the country with the highest average math score. In a recent international study, House (2006) looked at the relationship between students' attitudes towards math and math achievement in the TIMSS 1995. Fourth-grade students from the United States and Japan who participated in the TIMSS 1995 were included in the study. Three attitude-related variables (i.e., "I enjoy learning math," "math is boring," and "math is an easy subject") were simultaneously entered in a multiple regression model to predict student math achievement in each country. Results from this study indicated that "I enjoy learning math" had a statistically significant positive relationship with math achievement in Japan but not in the United States. Put differently, students in Japan who earned high math scores also tended to indicate that they enjoyed learning math, whereas the same relationship was not significant when tested with the sample of United States students. The researcher, however, noted some similarity between the two countries. That is, in both Japan and the United States, "math is boring" was significantly negatively related to student math scores. Specifically, students who expressed the belief that math was boring also tended to achieve low math test scores.
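Entering several attitude indicators into a single regression model, as House did, can be sketched as follows. The data below are entirely invented, and the variable names are illustrative only; the point is simply the mechanics of fitting one model with all predictors entered simultaneously and reading off the variance explained:

```python
import numpy as np

# Invented data: three attitude indicators (1 = strongly agree ...
# 4 = strongly disagree) and a math score for six hypothetical students.
enjoy  = np.array([1, 2, 1, 3, 4, 2])
boring = np.array([4, 3, 4, 2, 2, 3])
easy   = np.array([2, 2, 1, 3, 3, 2])
score  = np.array([560, 510, 575, 470, 430, 505], dtype=float)

# Design matrix with an intercept column; all three predictors are
# entered simultaneously, as in the multiple regression described above.
X = np.column_stack([np.ones_like(score), enjoy, boring, easy])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)

# Proportion of variance in scores explained by the three predictors.
pred = X @ coef
r_squared = 1 - ((score - pred) ** 2).sum() / ((score - score.mean()) ** 2).sum()
```

Testing whether each individual coefficient differs significantly from zero, as House reported per country, would additionally require standard errors and t statistics, which are omitted here for brevity.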
Although "math is an
easy subject" appeared to have a negative relationship with math achievement in Japan and a positive relationship with math achievement in the United States, these relationships were not statistically significant in either country. In another international study, using a sample of Swedish eighth-grade students (n = 343) participating in the TIMSS 2003, Eklof (2007) examined the relationship of math achievement with students' valuing of math, math self-concept, and test-taking motivation. In this study, value of math was a composite variable, computed as the mean score of six indicators and ranging from 1 (strongly agree) to 4 (strongly disagree). Results of the multiple linear regression analysis suggested that, altogether, the three predictors explained about 31% of the variation in the student math scores. However, when the effects of math self-concept and test-taking motivation were partialled out, the relationship between value of math and math achievement for the sample was weak and negative in direction (Eklof, 2007).

Students' Family Background

Following the Coleman report (1966), which provided strong evidence that home background-related factors had significant effects on student learning, extensive research has been carried out, both in the United States and in other countries, to validate Coleman's findings (Baker, Goesling, & LeTendre, 2002; Comber & Keeves, 1973; Coleman, 1975; Heyneman & Loxley, 1982; Fuller, 1987; Suter, 2000). Home background here refers to a vast array of factors including, but not limited to, parental education level, family socioeconomic status, family size, and home resources. Unfortunately, results from these studies shared little agreement.
Three studies that provided results conflicting with Coleman (1966) are Comber and Keeves (1973), Heyneman and Loxley (1982), and Fuller (1987). For the first two studies, the researchers used the same data source [i.e., IEA's (1971) First International Science Study (FISS)] for 18 countries to examine the relationship between home- and school-related factors and student achievement in science. It is worthy of note that the 18 countries included in the study consisted of both developed and developing countries. In the study of Comber and Keeves (1973), the researchers employed a three-step data reduction process to select variables for the analysis. First, only background variables that had correlation coefficients with achievement larger than twice their standard error were considered. Second, the effects of background variables were partialed out before other teacher- and school-related variables were entered in a regression model, and the resulting standardized regression coefficients were calculated for each background variable. Finally, the standardized regression coefficients were averaged across the 18 countries, and those that exceeded .05 were included in subsequent analyses. Results from this study showed that, across countries, teacher- and school-related variables exerted stronger positive effects on student achievement than family background variables. Arguing that Comber and Keeves' (1973) method for variable reduction was essentially flawed because it assumed that a background variable had to be a strong predictor of science achievement in both developed and developing countries to be included in the final analysis, Heyneman and Loxley (1982) applied a new procedure of submitting each potential variable to the same test of importance, but in each country separately. As a result, the list of variables to be included in Heyneman and Loxley's (1982) stepwise regression analysis varied from one country to another, as opposed to the
same list of selected variables applied uniformly to all countries in Comber and Keeves’ (1973) study. As suggested by this study, the poorer the country in economic status, the more impact teacher- and school-related variables seemed to have on student achievement.
For the third study, Fuller (1987) used data from only developing countries to investigate the association between family background and student achievement. The statistical method employed in Fuller’s (1987) study was multiple regression analysis. Results from this study revealed that in developing countries, the effects of family background on student achievement were nonsignificant relative to the effects of school. In fact, Fuller (1987) found that in India the effects of school explained up to 90% of the variance in student achievement.
Contradicting the conclusions of Comber and Keeves (1973), Heyneman and Loxley (1982), and Fuller (1987) were findings from a more recent study by Baker, Goesling, and LeTendre (2002). Evidence from this study suggested that the relationship between family SES (a composite variable of mother’s and father’s education level and number of books in the home) and student achievement was similar across countries, regardless of national income. For this study, the results were obtained from the analysis of the TIMSS 1995 data for both eighth-grade math and science, using the hierarchical linear modeling (HLM) technique to control for the nesting structure of the data. A total of 36 countries with both low and high economic development status were included in the study.
Most recently, using the TIMSS 2003 eighth-grade mathematics data, Mullis, Martin, Gonzalez, and Chrostowski (2004) studied the association between student home
resources and their math achievement. Once again, their findings did not agree with those of Baker et al. (2002). In particular, Mullis et al. (2004) highlighted that in many countries, students from homes with a range of study aids such as a computer, calculator, desk, and dictionary had higher achievement in math than their peers who did not have access to such resources at home.
Students’ Time on Homework
Time students spend on homework is a key variable in Carroll’s model of school learning (Carroll, 1963) and subsequent homework studies. There is ample evidence that time on homework is positively related to students’ academic performance (Cooper, 1989a; Cooper, Lindsay, Nye, & Greathouse, 1998; Cooper & Valentine, 2001; Keith & Cool, 1992; OECD, 2001; Peterson & Fennema, 1985; Singh, Granville, & Dika, 2002). Of these studies, the work of Cooper (1989a) has been widely cited in the existing literature. Cooper (1989a) reviewed approximately 120 studies on the effects of homework conducted between 1962 and 1987, which tended to fall into one of two types of research design: experimental and quasi-experimental. In the 50 studies that specifically examined the relationship between time on homework and academic achievement, Cooper (1989a) noted that time spent on homework was operationalized as time spent on homework per week. Regarding the achievement measure, the majority of these studies used standardized tests (33 studies), some used class grades (7 studies), and some used other outcome measures such as motivation to learn (10 studies). Statistical methods employed in these studies included structural equation modeling, path analysis, and repeated measures ANOVA. As a result of this comprehensive review, Cooper (1989a) concluded that most research showed a
positive relationship between the amount of time spent on homework and academic achievement. However, the effects of time on homework were larger for middle and high school students and near zero for elementary school students (Cooper, 1989a).
In an attempt to further elucidate the positive relationship between time on homework and achievement, Singh, Granville, and Dika (2002) examined the effects of motivation, attitude, and academic time on math achievement by building a latent variable structural equation model via a confirmatory factor analysis approach. In this study, academic time was measured by the time students spent on math homework and the time students spent watching television on weekdays. The model was fit to sample data from 24,599 eighth graders in the United States who participated in the National Education Longitudinal Study of 1988 (NELS:88). The listwise deletion method was adopted to handle missing data on variables of interest and mathematics test scores. The resultant sample of 3,227 students was used for subsequent analyses. Findings from this study supported the positive effects of the three factors (motivation, attitude, and academic time) on math achievement. Specifically, in examining the measurement part of the model, the researchers found that time on math homework was a better indicator of academic time than time watching TV on weekdays. In terms of structural relations in the model, of all the latent variables (motivation, attitude, and academic time), academic time had the strongest direct effect on math achievement (β = .50, as compared to .23 for attitude and .16 for motivation). Altogether, this model accounted for 46% of the variance in math achievement (Singh, Granville, & Dika, 2002).
Despite the history of homework research that supports a positive relationship between time on homework and math achievement, several researchers have recently argued
that such findings are questionable. In reviewing previous studies, Trautwein and Köller (2003) found two common major pitfalls: (a) confounded operationalization of time on homework and (b) problematic handling of hierarchically ordered data. According to the researchers, time on homework was defined very differently across studies, ranging from time spent on homework per week (which could mean time students spent on homework in all subjects or in a specific subject) to after-school-related activities (Trautwein & Köller, 2003). What is more, in some studies, time on homework was an aggregated variable consisting of homework frequency (i.e., the frequency of homework assigned by the teacher, a class-level variable) and homework length (i.e., the time typically spent on homework per day, a student-level variable). Even when disaggregated, time on homework as reported by individual students might have different meanings (e.g., time to complete the whole homework vs. part of the homework) (Trautwein & Köller, 2003).
Regarding the statistical analytic methods of previous studies, Trautwein and Köller (2003) observed that the two possible effects of time on homework (i.e., the student-level effect and the class-level effect) were often mixed up. For example, in examining homework effects, Trautwein and Köller (2003) found that, in previous studies, “no homework is ever required” (a teacher effect) and “I have homework, but I don’t do it” (a student effect) were collapsed into a single response category (p. 122). As a result, the complex measure of time on homework was often treated exclusively at only one level, either the student level or the class/teacher level (Trautwein & Köller, 2003). Such a conceptual model is problematic because it neglects the nonindependence of individual student data and would likely lead to biased estimates of several statistical parameters, such as fixed and random effects, as well as an inflation of the Type I error rate (Bryk & Raudenbush, 2002).
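The level-confounding problem described above can be made concrete with a small simulation. The sketch below fabricates nested data in which the class-level slope of homework time is positive while the student-level slope is negative (the pattern Trautwein, 2007, later reported for PISA 2000); all numbers are invented for illustration. A naive pooled regression blends the two slopes into a single misleading coefficient, whereas a between-class regression on class means plus a within-class regression on group-mean-centered values recovers each level separately.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated nested data: 40 classes of 25 students each.
n_classes, n_per = 40, 25
cls = np.repeat(np.arange(n_classes), n_per)
class_hw = rng.uniform(0.5, 2.0, n_classes)          # class-average homework time (hours)
hw = class_hw[cls] + rng.normal(0.0, 0.5, cls.size)  # individual homework time

# Scores built with a positive class-level slope (+15) and a negative
# student-level slope (-8).
score = (500 + 15 * class_hw[cls] - 8 * (hw - class_hw[cls])
         + rng.normal(0.0, 10.0, n_classes)[cls]     # class random effect
         + rng.normal(0.0, 30.0, cls.size))          # student residual

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Naive pooled analysis: one slope that blends the two levels together.
pooled = slope(hw, score)

# Disaggregated analysis: a between-class regression on observed class means
# and a within-class regression on group-mean-centered values.
hw_bar = np.array([hw[cls == c].mean() for c in range(n_classes)])
score_bar = np.array([score[cls == c].mean() for c in range(n_classes)])
between = slope(hw_bar, score_bar)
within = slope(hw - hw_bar[cls], score - score_bar[cls])

print(f"pooled = {pooled:.1f}, between-class = {between:.1f}, within-class = {within:.1f}")
```

A full multilevel (HLM) model with random intercepts generalizes this decomposition and also corrects the standard errors for nonindependence; the two-regression version here is only the simplest way to see why the pooled slope answers neither the class-level nor the student-level question.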
According to Trautwein and Köller (2003), the solution for overcoming the identified shortcomings of statistical analyses in previous studies was to differentiate between the student-level effects and the class/teacher effects and to simultaneously conceptualize time on homework at both levels, as is possible in multilevel modeling (Bryk & Raudenbush, 2002).
Recently, Trautwein (2007) reanalyzed the Programme for International Student Assessment (PISA) 2000 data used in the Organisation for Economic Co-operation and Development (OECD, 2001) study that supported the view that longer homework time is associated with higher achievement in math. Homework time in PISA 2000 had four response categories: no time, less than one hour a week, between 1 and 3 hours a week, and 3 hours or more a week. Using multilevel modeling to account for student-level and school-level time-on-homework effects, Trautwein (2007) found that the relationship between homework time and achievement was only moderate at the school level and was negative at the student level. In other words, students who spent more time on math homework than their peers scored lower on the math assessment, whereas a high average homework time at the school level was positively related to achievement.
Congruent with Trautwein’s (2007) results were findings from Rodriguez’s (2004) study, which showed that the amount of time students spent on homework was negatively related to math performance, after holding other variables constant. For this study, a two-level hierarchical linear modeling (HLM) model was employed to analyze sample data that included 328 teachers and 6,963 eighth-grade students from the United States who participated in the TIMSS 1999 study. Time on homework was dummy coded into two variables to capture three levels of student effort: (a) no
homework, (b) more than 0 and up to 1 hour of homework, and (c) more than 1 hour of homework (Rodriguez, 2004). Findings from this study suggested that students who did no homework each day performed slightly higher on average than those students who spent more than 1 hour a day on math homework. Similarly, compared to their peers who spent more than 1 hour on math homework each day, those students who spent about 1 hour doing homework tended to have higher math scores. In explaining this inverse pattern of relationship between time on math homework and math achievement, Rodriguez (2004) argued that the students who spent more than 1 hour each day studying math were likely poor math achievers and thus needed to study more to catch up with their peers.
Academic Tutoring
Academic tutoring has become more common for students around the world. The reason for student engagement in this activity is primarily academic improvement. However, there seems to be insufficient empirical evidence to support the belief that academic tutoring consistently and positively increases student learning. In fact, many researchers are debating whether academic tutoring is beneficial for students of various ability levels. Because academic tutoring takes many shapes and forms, the task of identifying which academic tutoring programs are related to increased student achievement is relatively challenging.
In the United States, recent decades have seen states, local districts, and schools focusing on making more time for teaching and learning (Yair, 2000). This was in response to the report of the National Commission on Excellence in Education (NCEE), A Nation at Risk, which found that American students were repeatedly underperforming in
math and science compared to students of other countries (National Commission on Excellence in Education [NCEE], 1983). Academic tutoring as a result of “innovative manipulations of time” turned out to be one of the plausible solutions that schools could implement in the hope of improving student achievement (Yair, 2000, p. 485). Examples of innovative manipulations of time include using lunch breaks for tutoring programs, rescheduling school bus time, and cutting back sports and other extracurricular activities so as to give students more time to do homework in school (Yair, 2000).
Cosden, Morrison, Albanese, and Macias (2001) reviewed nine studies that examined the effects of two types of at-school tutoring programs, homework assistance and academic enrichment, on student achievement. This review found that students who participated in tutoring programs that offered academic enrichment (e.g., literacy skill building, mathematics adaptive skills training, reading with a specialist) tended to have higher achievement test scores in reading, language, and math than did children in the control groups. However, for tutoring programs that offered homework support, the results were mixed. For example, students in the study of Beck (1999) reported that their participation in a tutoring program where they were provided with time, a structured setting for homework completion, and instructional support had increased their performance at school. In contrast, evidence from the study of Ross et al. (1992) indicated that extended homework time for elementary children at tutoring programs was counterproductive in terms of student performance on standardized tests. It is important to note here that all of the children participating in these tutoring programs were identified as minority students or students at risk for school failure.
Consistent with these findings, Adler (1996) stated that minority students were often provided with additional study hours relative to White and Asian students. Schools implemented homework-at-school programs, enrichment programs, and extra academic or language classes to allow minority students to catch up more easily with their majority peers. In fact, educational and political activists had joined forces to provide more funds, in terms of hours, for these programs because time to learn was viewed as the most valuable asset in reducing ethnic, racial, and social inequalities in education (Adler, 1996).
A similar educational strategy (i.e., the provision of additional academic support to low-achieving students and students with special needs) has also been well supported in England (Muijs & Reynolds, 2003). However, the quasi-experimental study conducted by Muijs and Reynolds (2003) did not yield favorable results. The study found that first-grade and second-grade students who had received at-school academic tutoring did not make more progress in mathematics than those who had not. As such, this study did not provide evidence to support academic tutoring as a way of improving the achievement of low-achieving students.
In addition to on-site academic support, there are other types of academic tutoring that take place outside of the school setting. These programs include private tutoring, training for spelling bees, and preparation for quizzes and exams, to name a few. According to Lee (2007), private tutoring is a preferred form of tutoring in many countries. The focus of private tutoring, however, differs considerably across nations. For example, in Korea, private tutoring serves primarily the enrichment needs of higher-achieving students, whereas private tutoring in the United States is primarily for meeting
the remediation needs of lower-achieving students (Lee, 2007). Still, in other countries such as Mainland China and Hong Kong, private tutoring serves mainly to supplement students with what has been missed in the regular classrooms. As Bryan et al. (2007) pointed out, in Mainland China and Hong Kong, where class sizes were often large, hands-on explorations became difficult and individual care and guidance was often left to after-class hours.
With regard to the impact of these private tutoring programs on student achievement, it appears that the example of Japanese students was often used to argue for the benefits of private tutoring. As indicated in the study of Walberg and Paschal (1995), in 1993, roughly 42% of sixth graders, 53% of seventh graders, and 59% of eighth graders in Japan attended additional lessons in private supplementary schools, where students had the opportunity to do exercises relevant to their schoolwork and to learn mathematical skills. As a result of these programs, Walberg and Paschal (1995) concluded, Japanese students consistently performed at the top of international achievement tests. Such a conclusion, however, should be interpreted with caution because in countries such as Japan and Korea, the majority of the students who participated in private tutoring were of medium to high ability levels and perceived private tutoring as a tool that could help them excel beyond the norm.
In another international study, Papanastasiou (2002) used the TIMSS 1995 data to investigate the effects of extra math lessons on fourth-grade student achievement in Cyprus, Hong Kong, and the United States. Dissimilar to findings from prior research, this study found that among all the students in the study, those who responded that they did not take extra math lessons had the highest math scores, and this pattern of relationship
was consistent across the three countries. According to Papanastasiou (2002), the results were somewhat expected because extra math lessons would most likely be needed by students who are not strong in math.
Instructional Practices-Related Factors and Student Achievement
Around the world, classrooms are the common places where students come to learn and construct new knowledge, develop competences, and prepare themselves for the future. Instructional practices refer to those activities that are designed and implemented by teachers in the classroom in order to maximize student achievement. According to Cogan and Schmidt (1999), instructional activities are one of the most important factors related to student learning in the classroom. The following section presents results from studies that investigated instructional practices in four areas (opportunity to learn mathematics, activities in math lessons, mathematics instructional hours, and amount of math homework) and their association with student achievement in mathematics.
Opportunity to Learn
Although opportunity to learn was identified as one of the five key elements in the model of school learning (Carroll, 1963), it appears that little research attention was paid to elucidating the effect of opportunity to learn on student achievement (Pianta et al., 2007). Not until the 1990s, in response to new legislation related to Goals 2000 and the Elementary and Secondary Education Act (Muthén et al., 1995), did a group of researchers in the United States begin to focus their interest on opportunity to learn as a potential factor that could help enhance student learning and improve teaching (Muthén et al., 1995; Wiley & Yoon, 1995). Research on opportunity to learn appeared to receive more attention when results from various international comparative achievement studies
indicated that students from the United States performed less well in math and science than their peers from other countries such as Japan, Korea, and Singapore (Baker, 1993; Westbury, 1992). There were increased concerns about the fairness of these international comparative studies. Many researchers argued that the fairness of these international comparisons could be compromised by differential learning opportunities across schools and nations (Wiley & Yoon, 1995). It is very important to note, however, that as time progressed, the traditional definition of opportunity to learn (i.e., time allowed for learning) (Carroll, 1963) was reconceptualized to include not only time allowed for student learning but also other educational aspects such as content or curriculum coverage, instructional activities, and instructional time (Schmidt & McKnight, 1995).
In 1992, using the Second International Mathematics Study (SIMS) data, Westbury conducted a study to examine the effects of curriculum implementation on student achievement. To control for potential differences in curricular or intentional emphasis on Grade 7 and 8 algebra and Grade 12 calculus in Japan and in the U.S., Westbury (1992) classified sampled U.S. and Japanese classrooms into course types (i.e., remedial, typical, enriched, and algebra in the U.S. versus math in Japan) and then matched the percentage of topic coverage reported by teachers for each math topic with the number of SIMS items in the same content areas. Next, based on opportunity to learn (i.e., the percentage of topics taught in a course), students in enriched and algebra courses in the U.S. were compared with all Japanese students. Results of this study showed that where the American curriculum was comparable to both the curriculum of the SIMS test and the curriculum of Japan, U.S. achievement was similar to that of Japan. Therefore, Westbury (1992) concluded that, overall, the lower achievement of the United States was
the result of a curriculum that was not as well matched to the SIMS tests as was the curriculum of Japan.
In addressing the same research questions raised in Westbury’s (1992) study, Baker (1993) took a different approach to reanalyze the SIMS data. According to Baker (1993), Westbury’s (1992) analysis, which rested on greatly restricting the American sample to control for curriculum differences, was problematic. In other words, Westbury compared “parts” of the American sample (i.e., students in enriched and algebra courses, who tended to be high-ability students) with the “full” Japanese sample. Thus, to correct for this issue, Baker (1993) compared the achievement of American and Japanese students based on taught curriculum and untaught curriculum, which could be computed from SIMS teacher data.
Not surprisingly, findings from Baker’s (1993) reanalysis were quite dissimilar to those of Westbury’s (1992) study. Baker (1993) showed that substantial differences existed in the effectiveness of the two systems that went beyond curriculum coverage. Specifically, on average, Japanese students learned over 60% of what they were taught in the target grade, whereas their American peers learned only 40%. Additionally, three-fourths of Japanese scores were above the median of the American distribution, and over 9% of Japanese students learned all of the tested material taught to them, as compared to less than 2% of American students. The effect size of the mean difference between the two systems was 0.81, which is large. With regard to untaught material, Japanese students appeared to know more material that had not been taught than American students, with an effect size for the mean difference between the two systems of 0.33. In addition, Baker (1993) also examined the distribution of the gain scores of students in the two countries after
accounting for curriculum taught and untaught during the target grade. It appeared that there was wide variation in the yearly performance of American classrooms, with a few doing better than the best Japanese classrooms and some showing no or even negative gain over the year. In contrast, Japan had relatively small variation in yearly performance between classrooms, with a relatively high minimum level of performance. In summary, Baker (1993) concluded that the Japanese system imparted more math knowledge to more students than the American system.
To control for cross-cultural differences, Wiley and Yoon (1995) used data from the California Learning Assessment System (CLAS) 1993 to investigate the impact of learning opportunity on student math achievement in the United States. For this study, approximately 1,750 math teachers (1,100, 420, and 230 teachers in Grades 4, 8, and 10, respectively) and 30,250 students (17,250, 10,100, and 3,000 students in Grades 4, 8, and 10, respectively) were included. The variable opportunity to learn was created based on mathematics teachers’ responses on whether they actually taught mathematics to almost all of the students who were given the math test. In addition, teacher-related variables such as familiarity with curriculum goals and standards, participation in professional development, and implementation of instructional practices were also used. Students’ test scores on the CLAS 1993 were used as the outcome variable. Results from this study suggested that students’ exposure to different math topics, and the way in which these topics were covered, affected their performance on tests. However, the impact of opportunity to learn differed across grades. In Grade 4, students of teachers who were familiar with mathematics instruction assessment guides and who participated in mathematics curriculum activities performed significantly better than students
of the teachers who were not involved in those activities. Participation in professional conferences formed the notable exception to this pattern. A similar pattern was observed in Grade 8. Grade 10 showed the least impact despite the highest level of teachers’ familiarity with math goals and standards and frequent participation in various instructional activities.
Driven by the desire to further understand how mathematics was taught and learned from a qualitative perspective, Schmidt, McKnight, Valverde, Houang, and Wiley (1997) conducted a large-scale investigation of the mathematics curricular visions and aims of almost 50 countries participating in the TIMSS 1995. For the first time in a study of this scope, all the data came from national curricular documents (i.e., official curriculum guides and textbooks). Findings from this study indicated that curricular data varied substantially across countries, with differences observed in the kinds of learning opportunities provided, in the mathematical content involved, in the expectations for students, and in the organizing and sequencing of the opportunities provided (Schmidt et al., 1997). As an example of mathematics curricular coverage, although the same major mathematics topics (i.e., algebra, number, geometry, measurement, and data) were introduced in all countries at some point within the schooling years, the focus on a particular topic as well as the duration and depth of content coverage for each topic differed from one country to another in myriad ways (Schmidt et al., 1997). At eighth grade, in China, the Czech Republic, Japan, Korea, the Netherlands, Portugal, Romania, Slovakia, and Spain, the largest proportion of textbooks (more than 30%) was devoted to the topic of algebra, whereas in the Philippines, South Africa, and Sweden, little emphasis (less than 5%
of textbooks) was placed on this topic. Similarly, at fourth grade, whereas Scotland, Romania, Belgium, the Russian Federation, Slovenia, Mexico, and Portugal gave considerable attention to the topic of measurement (more than 30% of textbooks), Israel and the Philippines featured little or no coverage of this topic (Schmidt et al., 1997). As a result of such pervasive differences in opportunities to learn across nations, Schmidt et al. (1997) concluded that it was not surprising to observe substantial differences in international student achievement. These differences in student achievement, however, must be carefully interpreted in context (Schmidt et al., 1997).
Homework Assignment
Homework assignments and their effects on student achievement are another topic that has recently drawn considerable attention from educational researchers. Because the effects of homework assignments should be conceptualized as occurring at the class/teacher level (Trautwein, 2007), it is not surprising to observe that most recent studies on homework assignments (De Jong et al., 2000; Rodriguez, 2004; Trautwein et al., 2002; Trautwein & Köller, 2003; Trautwein, 2007) have adopted multilevel modeling methods for analyzing hierarchically structured data. In general, empirical evidence from these studies suggested that the frequency and amount of homework assignments were positively related to student achievement. However, the effect size of homework assignments varied across grades.
In the study conducted by De Jong et al. (2000), the researchers examined the effect of homework assignments (i.e., the amount of homework assigned, or the number of homework tasks, and the frequency of homework assignments) on class achievement in math. A sample of 28 schools, 56 classes, and 1,394 middle school students from the
Netherlands was used in this study. The dependent variable of the study was student math scores from the national standardized test. The independent variables, amount of homework and frequency of homework, came from the teachers’ logbooks during the school year. Data were also collected from students regarding homework time, homework problems, homework study tactics, and the role of parents. In addition, classroom observation data were gathered regarding homework assignment and homework discussion. In terms of data analysis, the relationships between homework assignment aspects and achievement were analyzed using correlations, partial correlations, and the multilevel modeling technique. Results of this study suggested that the amount of homework (i.e., the number of tasks assigned to the class in the respective school year) was significantly related to class achievement (this variable explained 2.4% of the variance in class achievement), whereas the frequency of homework had no effect on math achievement at the class level and time on homework had no effect on student math achievement. According to the researchers, a possible explanation for the nonsignificant effect of frequency of homework was probably the restricted variance in homework frequency. That is, a large majority of teachers in the Netherlands assigned homework to students relatively frequently.
Another recent study using multilevel analyses to investigate the effects of homework assignment on class achievement was Trautwein et al. (2002). Using a subsample of a large-scale national database of adolescents in Germany, Trautwein et al. (2002) looked at the relationship of homework variables to math achievement in school. Specifically, in this study, repeated measurement data were collected from 1,976 seventh-grade students from 125 classes at two time points (i.e., at the beginning of the school year
and at the end of the school year). Regarding key variables of the study, math achievement was measured by students’ standardized test scores for the 1991–1992 school year. Homework variables included the frequency of homework assignment (an aggregated variable of homework frequency and the frequency of teachers’ monitoring of homework completion at the class level) and the time typically spent on homework per day, a student-level variable. Results of the multilevel modeling analysis suggested that the frequency of homework assignment was positively related to math achievement (the explained variance at the class level, after controlling for other variables in the model, was 8%), whereas time on homework had no significant effect on math achievement. Interestingly, Trautwein et al. (2002) also found that, in this study, the positive effect of frequency of homework assignment did not depend on whether the homework assignment was short or long.
In the United States, Cooper (1989a) conducted a comprehensive review of studies on the effect of homework on academic achievement. To give a focus for his study, Cooper defined homework as “tasks assigned to students by school teachers that are meant to be carried out during nonschool hours” (p. 7). This definition excludes (a) in-school guided study, (b) home study courses, and (c) extracurricular activities (Cooper, 1989a). In the area of homework assignment, Cooper (1989a) reviewed 20 experimental studies which compared the achievement of students given homework assignments with that of students given no homework or any other treatment to compensate for their lack of homework. These studies represented all levels of education, from elementary schools to middle and high schools.
As a result of this study, Cooper (1989a) concluded that homework assignment had a positive effect on student academic achievement but that the size of the effect varied across grades (Grades 4–6: d = 0.15; Grades 7–9: d = 0.31; Grades 10–12: d = 0.64). This was based on evidence that 14 of the 20 studies produced effects supporting homework assignments. Interestingly, the researcher noted that there was no clear pattern indicating that homework was more effective in some subjects than in others. For middle school students, homework assignment was significantly related to student achievement. The amount of homework assigned was optimal when it required between 1 and 2 hours per night to complete. For high school students, more homework assignments were associated with better student achievement. As an example, an average high school student in a class with homework assignments tended to outperform 69% of the students in a no-homework class. At the elementary level, however, the effect of homework assignments, regardless of amount, tended to be negligible (Cooper, 1989a).
Classroom Activities
Prior research presents a fairly positive view of the relationship between students’ experiences in academic classrooms and their achievement (Yair, 2000). Yet debates continue among researchers regarding the types of activities and the extent to which those activities impact student achievement (Staub & Stern, 2002). Believing that how much students learn is determined by the time they actually spend on task (Carroll, 1963), Yair (2000) studied student engagement and disengagement with instruction in various academic classrooms in the United States and found that activities such as laboratory work, small-group discussions, and presentations elicited high engagement from students. Activities that were teacher directed, such as teacher lectures, attracted the lowest rate of
engagement from students. Yair (2000), therefore, concluded that more individualized activities or active instructional practices would likely result in higher student achievement. Yair's (2000) study, however, is challenged in that students may learn more not through an activity that they engage in at a higher rate but from lower engagement in an activity that is more conducive to learning. For example, a student may learn more from lower engagement in a lecture as opposed to higher engagement in group work. In addition, it is possible that misconceptions exist in small-group discussion and, as a consequence, higher engagement in this activity is not conducive to knowledge acquisition (Yair, 2000).

For some classrooms, Cooper (1989a) observed that the type of activity that teachers preferred in class was to review, discuss, or even allow students to do some homework-like assignment. In a meta-analysis of homework, Cooper (1989a) reported a set of studies that compared the effect of homework with that of in-class supervised study on student academic achievement. In these studies, students not receiving assignments to complete at home were asked to complete some assignments in class. These activities typically were assigned at the end of each unit or lesson. Results of these studies suggested that the effect of homework was about half that of in-class homework-like assignments (Cooper, 1989a). It is interesting to note, however, that after controlling for grade, the effect of homework assignment tended to be significantly larger than that of in-class homework-like assignments. Such a pattern of findings was observed in studies where junior high and high school students were sampled but not in studies where elementary students were sampled (Cooper, 1989a).
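Cooper's grade-band effect sizes can be translated into percentile terms under a standard normal model: an effect size d places the average student in the homework group at the Φ(d) percentile of the no-homework group. The sketch below shows that conversion (the function name is ours, not Cooper's, and the normal-distribution assumption is an idealization):

```python
from math import erf, sqrt

def percentile_from_d(d: float) -> float:
    """Percent of the comparison group scoring below the average
    treated student, assuming normal distributions with equal SDs."""
    return 100 * 0.5 * (1 + erf(d / sqrt(2)))

# Effect sizes reported by Cooper (1989a) by grade band
for grades, d in [("4-6", 0.15), ("7-9", 0.31), ("10-12", 0.64)]:
    print(f"Grades {grades}: d = {d:.2f} -> {percentile_from_d(d):.0f}th percentile")
```

By this conversion, outperforming 69% of a comparison class corresponds to an effect of roughly d = 0.5.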
Across countries, notable differences have been found in the types of activities to which students are exposed in classrooms. In an observational study of activities in elementary school mathematics classrooms in the United States and Japan, Stigler et al. (1987) revealed that teachers in Japanese classrooms spent significantly more class time asking academic questions of the entire group, whereas teachers in the United States asked significantly more questions of individual students. Later, replicating the study in middle school mathematics classrooms, Stigler et al. (2000) found that students in Japan spent more class time on activities designed for inventing and proving, and less time on practicing routine procedures, than did students in the United States. Similarly, a qualitative study by Bryan et al. (2007) found significant differences in activities in mathematics lessons between countries. Whereas Australian and American teachers tended to use hands-on manipulative activities frequently in math lessons, Mainland Chinese and Hong Kong teachers tended to engage students more in teacher-led whole-class activities or verbal activities where students have opportunities to discuss, question, and answer. Also, there appeared to be significantly more group activities and in-class student collaboration in the United States than in Australia, Mainland China, and Hong Kong (Bryan et al., 2007).

Because Japanese students consistently performed better than American students in various international achievement tests, some researchers tended to attribute Japanese students' academic success to the types of activities they experienced in classrooms. Hiebert and Stigler (2000), for instance, recommended that more effective instructional strategies based on Japanese approaches should be used in academic classrooms. Bryan et al. (2007), however, argued that the types of activities implemented in Eastern schools
were not necessarily more effective; rather, they were strongly influenced by cultural factors. As indicated in their study, even though teachers from Hong Kong and Mainland China recognized the benefits of more individualized and hands-on manipulative activities, they simply could not use these activities due to large class sizes and pressure to cover a heavy load of subject material in the time assigned (Bryan et al., 2007). Stipek, Givvin, Salmon, and MacGyvers (2001), on the other hand, asserted that the type of activities used in the classroom was strongly influenced by teachers' beliefs. As evident in one of the studies that these researchers examined, teachers who believed that children learn mathematics by constructing their understanding in the process of solving problems tended to give students more word problems in instruction and to spend more time developing children's counting strategies before teaching number facts (Stipek, Givvin, Salmon, & MacGyvers, 2001).

Instructional Time

Carroll's (1963) model for school learning suggested that the amount of time devoted to learning is an important determinant of how much is learned. It would be reasonable, therefore, to expect that the more instructional time is provided to students, the greater the achievement likely to result. However, the existing literature suggests that this is not always the case. Cooper (1989b) attributed the unclear relationship between instructional time and student achievement to the various definitions of instructional time used in the literature. For example, in some studies, instructional time was defined as scheduled or allocated time that was set aside by law, school, and/or teacher for a particular activity to take place, whereas in other studies, instructional time was operationalized as the actual amount of time spent on academic material within the allocated
time. Still in other studies, instructional time was measured as engaged time or time-on-task, which excluded time for classroom management and interruptions (Cooper, 1989b). For Schmidt et al. (1997), instructional time should be interpreted as time in which specific educational opportunities are made available to students within any school year. Yet, according to Baker, Fabrega, Galindo, and Mishook (2004), instructional time could be measured in several ways, from the number of days in the school year to the number of hours spent on a single subject.

Results from Evertson's (1980) study indicated that a larger amount of time devoted on-task was associated with greater student achievement. Specifically, the study found that low-achieving junior high school students tended to be engaged about 40% of the time, whereas high-achieving students appeared to be engaged about 85% of the time (Evertson, 1980). These findings were supported by the research of Fredrick and Walberg (1980), who reviewed nine studies that examined the relationship between instructional time, in terms of time-on-task, and student achievement and found that all nine showed a positive relationship (Fredrick & Walberg, 1980). In the studies reviewed, the correlations ranged from .15 to .53. Even after controlling for other variables such as IQ, ability, and readiness, the correlations were still positive, ranging from somewhat weak to moderately strong (r = .09 to .44). The researchers, therefore, proposed that the school day or year should be lengthened in order to increase student achievement (Fredrick & Walberg, 1980).

Empirical evidence supporting the positive effects of instructional time on student achievement can also be found in one of the most influential reports in the history of United States educational reform, A Nation at Risk (NCEE, 1983). By correlating the
number of school days with academic achievement, the report suggested that Asian students performed better than American students in math and science because they had more time to study. On average, Asian students studied up to 240 days a year compared with about 180 days for American students. Therefore, in order for American students to be competitive globally, the NCEE recommended that United States schools add an extra school hour per day and up to 40 extra days per school year (NCEE, 1983).

With international educational data becoming more accessible, many researchers recently have had the opportunity to examine the effects of instructional time on student achievement in much more diverse settings. Using three major international databases, the Programme for International Student Assessment (PISA) 2000, the Trends in International Mathematics and Science Study (TIMSS) 1999, and the International Study of Civic Education (CIVICS) 1999, Baker, Fabrega, Galindo, and Mishook (2004) investigated the relationship between instructional time and student learning across countries. PISA 2000 tested the mathematics, science, and reading skills of 15-year-old students from 32 countries; TIMSS 1999 tested the mathematics and science of eighth-grade students from 38 countries; and CIVICS assessed eighth-grade students' knowledge of civics in 28 countries. In this study, test scores were the outcome variable, and instructional time during the academic year, in terms of hours dedicated to formal educational activities, was the independent variable.

Results from this study indicated that there was no statistically significant relationship between instructional time and achievement scores at the cross-national level. Also, this pattern was observed consistently across the three databases and subjects tested. For example, in the TIMSS data, students attending math
class for 5 hours or more during the week scored 481 on the achievement test, while students who received less than 2 hours of math per week scored an average of 485. About 90% of the students, who received between 2 and 5 hours of math class, got an average of 491 points on the math achievement test.

Interestingly, the evidence that more hours of math class did not result in better achievement scores was also observed within nations. The average statistically significant correlation for the positive effect was 0.09, meaning that the relationship between total instructional time and math achievement within a country accounted for only 0.8% of the variance in achievement scores. There were several cases where a negative effect was observed; in these cases, however, the average magnitude of the effect was also small, about 1.4%. Hungary and Japan were the only two countries where the magnitude of the effect was somewhat noticeable. For tenth grade, Hungarian students who received more than 912 hours of instruction per year tended to score 55 points higher than their Hungarian peers who received less than 810 hours of instruction. Similarly, Japanese students who received more than 1,112 hours of instruction per year scored 25 points higher than their Japanese counterparts who received less than 935 hours of instruction (Baker et al., 2004).

Commenting on the disagreement between findings from this study and those of previous studies, Baker et al. (2004) asserted that differences in achievement as a function of instructional time only emerged when comparing extremely low amounts of time with some threshold amount, and a diminishing return would be seen beyond that point. Therefore, schools should not waste resources on marginal increases in instructional time, as long as the system was within world norms. Baker et al. (2004)
further suggested that if schools had a choice between using resources to increase time versus improving teaching and the curriculum, then the schools should give priority to the latter.

Yair's (2000) study is another that opposed the positive relationship between instructional time and student achievement. Using productive time rather than allocated time as an indicator of instructional time, Yair (2000) estimated the effects of productive time on the probability of students' engagement in instruction. Results from logistic regression analysis showed that students were disengaged a large portion of the time in academic classes, and that the existing instructional methods and strategies produced low rates of productive time, especially for minority students. Specifically, African American students reportedly were disengaged from instruction 51% of the time, and Hispanic students 52% of the time. In contrast, White and Asian American students reported engagement with their lessons 6% to 10% more often than their African American and Hispanic peers. The study also found that student disengagement became more prevalent as students advanced to higher grades.

Not surprisingly, Yair (2000) found that student disengagement was associated with the subjects taught and with instructional methods and strategies. For example, of all the instructional strategies (i.e., laboratory work, presentation, group work, use of TV and video, individualized instruction, and teacher lectures), teacher lectures, the most prevalent strategy used in United States schools, appeared to attract the lowest rates of student-reported engagement. Based on these findings, Yair (2000) concluded that instructional reforms, rather than the simple addition of time, would be more productive in raising student achievement and in bringing about greater social equality in education.
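The variance-explained figures quoted in this section follow from squaring the correlation coefficient: r² is the proportion of variance in one variable accounted for by the other. A quick check of the arithmetic for the values cited for Baker et al. (2004) and Fredrick and Walberg (1980):

```python
from math import sqrt

def variance_explained(r: float) -> float:
    """Proportion of variance accounted for by a correlation r (r squared)."""
    return r * r

# Baker et al. (2004): average significant positive correlation of 0.09
print(f"{variance_explained(0.09):.2%}")  # roughly 0.8% of the variance

# Conversely, the ~1.4% average for the negative effects implies |r| of about 0.12
print(f"{sqrt(0.014):.2f}")
```

The same squaring applies to the time-on-task correlations of .15 to .53: even r = .53 corresponds to only about 28% of the variance in achievement.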
Teacher-related Factors and Student Achievement

What students are expected to learn, how instruction is organized and delivered, and what students have learned are believed to stem from experiences and values embedded in the professional training and development of teachers (Cogan & Schmidt, 1999). Similarly, Wright, Horn, and Sanders (1997) believed that teacher effects are dominant factors affecting students' achievement gains. While it is clear that teacher quality plays an important role in student learning, there is considerably less consensus on which teacher-related characteristics are strongly related to students' higher performance. The following section focuses on research studies that examined the effects of teacher-related factors such as preparation to teach, readiness to teach, and professional development on students' achievement, both in the United States and in international settings.

Preparation to Teach

A variety of research studies have examined the association between how teachers were prepared to teach and their students' achievement (Bankov, Mikova, & Smith, 2006; Darling-Hammond, 2000; Greenberg, Rhodes, Ye, & Stancavage, 2004; Grouws, Smith, & Sztajn, 2004). These studies, however, produced mixed results. For example, Ferguson's (1991) analysis of Texas school districts found that teachers' expertise, including their scores on a licensing examination measuring basic skills and teaching knowledge, master's degrees, and experience, accounted for more of the interdistrict variation in students' reading and mathematics achievement in Grades 1 through 11 than did student socioeconomic status. The effects were so strong, and the variations in teacher expertise so great, that after controlling for socioeconomic status, the large disparities in
achievement between Black and White students were almost entirely accounted for by differences in the qualifications of their teachers (Ferguson, 1991). Similarly, Darling-Hammond (2000) examined a study conducted in 1999 by the Los Angeles County Office of Education on elementary student reading achievement and found that, across all income levels, students' reading achievement was strongly related to the proportion of fully trained and certified teachers, much more so than to the proportion of new teachers in the school. The study concluded that differences in students' test scores were a teacher-training issue and not due to new teachers' lack of classroom experience (Darling-Hammond, 2000).

Likewise, evidence from a study by Grouws, Smith, and Sztajn (2004) suggested that a teacher's undergraduate major in mathematics appeared to influence eighth-grade student mathematics performance on the National Assessment of Educational Progress (NAEP). However, an examination of this effect for fourth-grade students yielded nonsignificant results. The researchers explained that the observed difference in the effect of preparation to teach between eighth grade and fourth grade might be due to the fact that, at the elementary level, teachers were expected to teach different subjects regardless of their undergraduate major field of study, whereas at the middle school level, teachers were expected to teach the subject related to their undergraduate field of study. Therefore, it was reasonable to observe a stronger relationship between teachers' preparation to teach and student achievement at the higher educational level than at the lower one (Grouws, Smith, & Sztajn, 2004).

Also using the data from NAEP 2000 for eighth-grade math, Greenberg, Rhodes, Ye, and Stancavage (2004) investigated the relationship between teacher qualifications
(i.e., certification, academic major or minor, highest degree, total teaching experience, and experience teaching mathematics) and student achievement. Multiple regression was employed as the statistical analytic method in this study. In order to estimate the independent effect of each teacher attribute on math achievement, the model controlled for student gender, ethnicity, eligibility for the free and reduced-price lunch program, number of reading materials at home, and parental education. In addition, this study applied sampling weights, replicate weights, and averaged plausible values as a way to address the complex sample design of NAEP. This research supported the findings from previous studies that teaching certification was positively associated with higher math achievement. With regard to the effect of academic major, students across all math ability levels (i.e., low, medium, and high) who had teachers with a major in math scored higher than their peers whose teachers had a major outside their field of teaching. In terms of teaching experience, students who had teachers with more than five years of experience teaching math tended to perform better in math than their peers who had less experienced math teachers (Greenberg, Rhodes, Ye, & Stancavage, 2004).

Despite widespread evidence suggesting that preparation to teach enhances student learning, a group of researchers argued that such a conclusion is not always true. Using the example of Teach for America (TFA) teachers, Glazerman, Mayer, and Decker (2006) demonstrated that having no preparation to teach (i.e., not having a college degree in math education, math teaching certification, or math teaching experience) did not prevent TFA teachers from contributing positively to the math achievement of their 12th-grade students. In fact, it was observed that TFA teachers tended to produce significantly higher student test scores than the other teachers in the
same schools, not just certified novice teachers but also certified veteran teachers. Glazerman, Mayer, and Decker (2006) concluded that the salient factors of TFA teachers' success in teaching are high academic records in any field of study, motivation, and enthusiasm to teach.

In the international setting, the contention that a positive relationship exists between teachers' preparation to teach and student achievement was also not supported. A study conducted by Bankov, Mikova, and Smith (2006) in Bulgaria is an example. Using HLM to analyze TIMSS 2003 data for eighth-grade math and science, this research suggested that having a teacher with a major or main area of study in the subject taught was not associated with greater math or science achievement. Unexpectedly, students who had a life-science teacher with a degree in biology tended to have lower scores on the life-science assessment than students whose teachers did not have a degree in biology. In explaining the contradictory results observed in this study relative to those of the United States, the researchers stated that "traditional measures of teacher quality, including a match between the subject matter that teachers have studied, their level of experience, and their self-assessment of their readiness to teach their subject material, may not be relevant for Bulgaria at this juncture in its educational reform" (p. 471).

Readiness to Teach

In an effort to define pathways that teachers can follow to become successful in "theory-rich, open-ended, and content-intensive classrooms," Shulman and Shulman (2004) created a model called "Teacher Learning Communities" (Shulman & Shulman, 2004, p. 259). In this model, readiness to teach is highlighted as one of the five important elements (i.e., ready, willing, able, reflective, and communal) that teachers must develop
along their paths to becoming successful teachers. As Shulman and Shulman (2004) put it: "An accomplished teacher is a member of a professional community who is ready, willing, and able to teach and to learn from his or her teaching experiences" (p. 259). According to Shulman and Shulman (2004), a teacher's readiness to teach is determined by the teacher's development of visions of teaching and learning. Specifically, a well-defined vision of teaching and learning (e.g., teaching is a process other than telling, or learning is a process other than repeating or restating) serves as a goal toward which teacher development is directed, as well as a standard against which teachers' thoughts and actions are evaluated (Shulman & Shulman, 2004).

Because a teacher's preparedness to teach, self-confidence, and motivation play important roles in shaping a teacher's vision of teaching and learning, it can be inferred that these factors are related to teacher readiness to teach (Bankov, Mikova, & Smith, 2006; Darling-Hammond, 2000; Shulman & Shulman, 2004). Darling-Hammond (2000) asserted that teachers who lack adequate teaching preparation tend to have poor visions of teaching and learning, which, in turn, negatively affect teacher readiness to teach and their student outcomes. In illustrating this point of view, Darling-Hammond (2000) cited a teacher, a graduate of Yale University, who did not have an undergraduate major in education but believed that with his intelligence and enthusiasm for teaching he would be able to help his students learn. Unfortunately, this was not the case.

I – perhaps like most TFAers [Teach for America teachers] – harbored dreams of liberating my students from public school mediocrity and offering them as good an education as I had received. But I was not ready… As bad as it was for me, it was worse for the students. Many of mine… took long steps on the path toward
dropping out…. I was not a successful teacher, and the loss to the students was real and large. (p. 168)

Interestingly, Glazerman, Mayer, and Decker (2006) recently conducted a randomized experimental study on the impacts of Teach for America teachers on student achievement and other outcomes and found contradictory results. This study suggested that allowing Teach for America teachers to bypass the traditional route to the classroom did not seem to harm students. In fact, there were statistically significant positive effects of Teach for America teachers on 12th-grade students' mathematics achievement. The study, however, found that Teach for America teachers were more likely to report problems with student behavior than regular teachers who had teaching certificates or undergraduate majors in education.

Bankov, Mikova, and Smith (2006), on the other hand, argued that having a teaching certification or a major in education does not necessarily guarantee that a teacher is ready to teach. With schools around the world becoming increasingly diverse, a teacher who knows the subject matter well but lacks essential understanding of culturally diverse classrooms would be unlikely to be ready to teach. This is because, as Hollins (1995) points out, any cultural mismatch between the teacher and students can potentially interfere with instruction and learning.

Building on the similar point of view that teacher readiness to teach greatly depends on an understanding of culturally diverse classrooms, Wiggins and Follo (1999) emphasized the importance of teacher motivation and willingness to learn about others' cultural differences. Wiggins and Follo (1999) contended that despite prior exposure to a culturally diverse environment, a teacher who is not willing to learn of
others' cultural differences is unlikely to achieve the level of understanding needed to foster his or her readiness to teach. It is important to note, however, that although this contention is sound from a theoretical perspective, there is insufficient empirical evidence to support it. Future studies, therefore, should pay more attention to this line of research wherever possible.

Professional Development

Teacher professional development is instrumental in educational reform efforts to improve student learning (Borko, 2004). According to Jacob and Lefgren (2004), professional development is a common practice in United States public schools. A study by Parsad et al. (2001) suggested that approximately 72% of teachers reported having participated in training related to the subject area of their main teaching assignment during the previous 12 months. However, despite the widespread implementation of teacher training programs across the country, research linking teacher professional development with student performance is inconclusive (Jacob & Lefgren, 2004; Johnson, Kahle, & Fargo, 2007; Ross, Bruce, & Hogaboam-Gray, 2006).

In a meta-analysis, Kennedy (1998) reviewed 93 studies but found positive effects of teacher development on student achievement in only 12. In line with these findings, Corcoran (1995) and Little (1993) claimed that teacher professional development programs are typically low-intensity activities that lack continuity and accountability. More than half of the teachers surveyed reported engaging in only eight hours or less of training per content area per year (Corcoran, 1995). Holding a similar view, Borko (2004) criticized existing professional development programs for their failure to take into account how teachers learn. For example, in a seminar on community learning, teachers
were expected to create a community of learners among their students, but the teachers themselves were not provided a parallel community in the training to nourish their own growth. As a consequence, upon completion of these programs, many teachers still felt they were not ready to teach. Borko (2004), therefore, concluded that these programs were "woefully inadequate" and "intellectually superficial" (p. 3). Likewise, Ross, Bruce, and Hogaboam-Gray (2006) questioned the findings of several studies on professional development, arguing that all of them were deficient in some way. For example, with the study of Hamilton et al. (2003), it was impossible to extract the unique contribution of professional development to student outcomes because the study did not control for the provision of innovative curriculum materials, which could account for the student achievement (Ross, Bruce, & Hogaboam-Gray, 2006). Similarly, the findings of Reys et al. (1997) were biased because they were based on an unrepresentative sample of teachers (i.e., 80% had master's degrees and 40% were members of the National Council of Teachers of Mathematics) (Ross, Bruce, & Hogaboam-Gray, 2006).

Opposing the assertion that teacher professional development programs do not yield improved instructional practices and student learning, Smith and Neale (1991) conducted a study to examine the impact of the Cognitively Guided Instruction (CGI) project. The CGI project was a four-week summer workshop that aimed to increase teachers' ability to explore student thinking and to plan ways to build on students' knowledge in math instruction. In this project, participating teachers were randomly assigned to two groups, treatment and control. The project showed that, by the end of the workshop, teachers in the treatment group reported an increased awareness of the role that children's thinking plays in the learning process, and of the importance of listening
carefully to students in order to build on their understanding and misconceptions (Smith & Neale, 1991). Specifically, in comparison with the teachers in the control group, CGI teachers appeared to know more about the strategies that children use to solve problems, the kinds of problems they find difficult, and different ways to pose problems to students (Smith & Neale, 1991). In regard to student learning, the study found that, during the year following the summer workshop, students in the CGI classrooms solved a wider variety of math problems, used more problem-solving strategies, and were more confident in their math ability than were students in control classrooms (Carpenter & Fennema, 1992).

Consistent with this finding, Johnson, Kahle, and Fargo (2007) recently conducted a quasi-experimental study to examine the effect of sustained, whole-school professional development on sixth- to eighth-grade student achievement in science. In this three-year (2002–2005) longitudinal study, the science achievement of students of 11 science teachers from a treatment school was compared with that of students of six science teachers from a control school. Each school had approximately 750–900 students. At the treatment school, science teachers were offered an intensive 80-hour professional development program during the summer of the first year, followed by 36 hours across each of the three academic years, for a total of 198 hours. The training emphasized standards-based instructional practices (i.e., instructional strategies focusing on inquiry as the central mode for teaching science). For this study, a cross-sectional multiple regression analysis that adjusted for cluster sampling was conducted for each year. Results of the study showed that there was a positive relationship between student science achievement and teacher participation in the professional development program. Specifically, students'
repeated involvement in improved instruction resulted in significant achievement gains by both majority and minority students in Years 2 and 3. For Year 1, there was no significant difference in achievement scores between the two groups of students. This study suggests that the duration of professional development is linked to increased student achievement scores. This is somewhat expected because the more opportunities teachers have to practice their newly learned skills, the deeper and more sustained their experiences become, which, in turn, positively influences student learning (Johnson, Kahle, & Fargo, 2007).

According to Loucks-Horsley, Hewson, Love, and Stiles (1998), there are five types of teacher professional development: immersion, examining practice, curriculum development, curriculum implementation, and collaborative work. Immersion strategies involve having teachers actually "do" science or mathematics and gain the experience of working with a scientist or mathematician. Examining practice includes case discussion of classroom scenarios or examination of real classroom instruction. Curriculum development involves having teachers help create new instructional materials to better meet the needs of students. Curriculum implementation involves having teachers use and refine instructional materials in the classroom. Finally, collaborative work includes study groups, peer coaching, mentoring, and classroom observation and feedback.

Interested in how eighth-grade science and math achievement was associated with these types of professional development (i.e., immersion, examining practice, curriculum development, curriculum implementation, and collaborative work), Huffman, Thomas,
and Lawrenz (2003) analyzed data that were collected from 94 science teachers and 104 mathematics teachers in 46 schools across a southern state of the United States. The dependent variables were student achievement scores from the state standardized tests in math and science, and the independent variables were the five types of professional development. Results from regression analyses suggested that there was only a weak relationship between these types of professional development and student achievement on state exams. Specifically, only curriculum development for math teachers was found to relate to student math achievement; however, the relationship was negative. None of the different types of professional development were significantly related to student science achievement. Mathematics teachers with students who had lower achievement were found to engage in more long-term curriculum development. In this study, curriculum development for math teachers accounted for 16% of the variance of student achievement.

Using data from the California Learning Assessment System (CLAS) 1993, Wiley and Yoon (1995) examined the relationships between student math achievement and teacher-related variables such as familiarity with curriculum goals and standards, participation in professional development, and implementation of instructional practices. For this study, approximately 1,750 math teachers (1,100, 420, and 230 teachers in Grades 4, 8, and 10, respectively) and 30,250 students (17,250, 10,100, and 3,000 students in Grades 4, 8, and 10, respectively) were included. Findings from this study suggested that, in Grade 4, students of teachers who were familiar with mathematics instruction assessment guides and who participated in mathematics curriculum activities performed significantly better than students of teachers who were not
involved in those activities. Interestingly, participation in professional conferences formed an exception to this pattern. A similar pattern was also observed in Grade 8. Grade 10, however, showed the least impact despite the highest level of teachers' familiarity with math goals and standards and frequent participation in various instructional activities.

Similarly, Cohen and Hill (1998) conducted an experimental study to examine the extent to which student math achievement was associated with teacher participation in professional development programs that focused on teaching mathematics content in the state of California. Findings of this study showed that, after adjusting for student background variables, experimental schools where teachers participated in professional development programs had significantly higher average math achievement than control schools where teachers did not participate in this type of professional development. Kennedy (1998), however, found that students whose teachers participated in specific content-related professional development showed better conceptual understanding in math and science than their peers whose teachers only participated in general professional development programs.

School-related Factors and Student Achievement

School systems around the world differ in many respects. The existing literature has identified many potential factors that can bear upon the differences in student achievement between and within schools. Examples of these factors include availability of school resources, differences in student backgrounds and characteristics, teacher quality, class size, and instructional time, to name a few. Given these differences, many researchers have argued that findings from one school system should not be compared
with or generalized to other school systems, and that researchers should therefore focus their attention on research issues within individual school systems. This point of view, however, has not gone without criticism. A group of researchers strongly believed that, despite sizeable differences across school systems, every school system can benefit greatly from other systems by carefully examining and interpreting their findings. In so doing, each school system would learn more about itself, where it stands relative to other school systems, and, most important of all, what opportunities as well as threats it should consider in the quest to improve student achievement. Thus, it would be very worthwhile for educational researchers to pursue this line of research. In this particular section, the focus of the literature review is on research studies that investigated the effects of school-related factors such as class size, availability of school resources, and instructional limitations on student achievement across countries. These research topics were selected because they have been hotly debated in the United States for many years, and the debates have recently expanded to other countries (Luyten et al., 2005). In addition, a better understanding of these topics would likely result in better application of research findings to school policy and practices.

Class Size

There is ample literature on the relationship between class size and student learning. The results of this work, however, are varied. In 1978, Glass and Smith published results from their meta-analysis of 77 studies that investigated the effects of class size on student achievement (Glass & Smith, 1978). Several important findings were drawn from this study: (a) overall, small class sizes were associated with higher
student achievement, (b) the effects of class size appeared to grow as size was reduced, meaning a reduction from 10 to 5 students had a greater impact than a reduction from 30 to 25 students, and (c) the relation between class size and achievement was similar across students of different ages and ability levels (Glass & Smith, 1978).

Interestingly, in the same year, the Educational Research Service (ERS, 1978) conducted a review of 41 studies to examine the relationship of class size and student achievement across grade levels. The major conclusions from this study were quite different: (a) the relationship between class size and student achievement is highly complex, (b) the effects of class size are a product of many variables, including subject areas, student characteristics, learning objectives, class and school resources, and teacher qualities, (c) within the midrange of 25 to 34 students, class size appeared to have little impact on achievement of students in the primary grades, and (d) small class size appeared to be most beneficial for students with either lower academic ability or economically or socially disadvantaged backgrounds (Cooper, 1989b). In response to the dissimilarities between the results of the two studies, ERS (1980) criticized Glass and Smith (1978) for obscuring important distinctions in class size research and for overgeneralizing major findings that were based on too few studies. In conclusion, ERS (1980) expressed a critical need for further research in this area.

Disagreeing with both sets of results produced by Glass and Smith (1978) and ERS (1978, 1980) because "neither adequately considers the quality of the critical evidence" (Slavin, 1989, p. 102), Slavin (1989) conducted another meta-analysis of the effects of class size on achievement. Unlike prior research, Slavin imposed several restrictions on his study: (a) achievement scores had to be standardized scores; (b)
large classes had to be compared to classes that were at least 30% smaller and contained no more than 20 students, and (c) the study had to use random assignment to alternative class sizes. As a result, this research suggested that reducing class size would not in itself make a substantial difference in student achievement, even at the lower grades. Similarly, reducing class size was not likely to solve the achievement problems of at-risk students unless the class size was reduced to one student per class (Slavin, 1989).

As one of the most ambitious experiments ever attempted in United States education, Project STAR, a longitudinal study (1985-1989), investigated the effects of class size on math and reading achievement of 6,829 kindergarten through third-grade students in Tennessee who were randomly assigned to small classes (13-17 students) and large classes (22-26 students) (Pong & Pallas, 2001). Converging with the results of ERS (1978), findings from Project STAR suggested that small classes tended to increase student math performance in the early grades by about one third of a standard deviation (Pong & Pallas, 2001). Also, a follow-up study of the students participating in Project STAR revealed that the benefits of small classes persisted significantly for six years after the students returned to regular-sized classes, through ninth grade (Nye, Hedges, & Konstantopoulos, 2001). Additionally, in examining whether the effects of class size functioned differently across students of different backgrounds, Nye, Hedges, and Konstantopoulos (2001) found that the lasting effects of smaller classes were greater for minority students than for White students.

Recently, with international educational data becoming more accessible, many researchers have taken the opportunity to investigate the effects of class size on achievement across countries. Not surprisingly, results from these studies were also
mixed. As indicated in the study of Pong and Pallas (2001), class sizes in Asian countries tended to be quite large by United States standards. For example, data from TIMSS 1995 suggested that about 60% of Hong Kong's classes had an average of 39-42 students. Similarly, in Korea, the majority of classes (64%) had an average of 50-54 students. However, students in these countries consistently scored at the top in international math achievement tests. An examination of class sizes within countries also yielded similar results. That is, high-performance classes tended to be larger than average classes. After adjusting for spurious factors in HLM models for individual countries, Australia and Canada were the only non-Asian countries where larger classes led to better performance in math than did smaller classes. By contrast, in the United States, small classes with fewer than 19 students outperformed their large-class counterparts (Pong & Pallas, 2001).

Also using the TIMSS 1995 data for math and science, Wößmann and West (2006) estimated the effects of class size on seventh- and eighth-grade student performance in 11 countries (i.e., Belgium, Canada, Czech Republic, France, Greece, Iceland, Portugal, Romania, Singapore, Slovenia, and Spain). This study found sizeable beneficial effects of small classes in Greece and Iceland, but not in the other countries. It is important to note, however, that in both Greece and Iceland, students tended to perform below the international average, whereas in the remaining countries, where the effects of class size were statistically nonsignificant, students tended to perform above the international average. These results were interpreted by Wößmann and West (2006) to mean that, within their own educational systems (i.e., Greece and Iceland), class-size reduction seemed to be associated with improvement in student achievement. However,
in consideration of the larger picture, students in small classes in Greece and Iceland did not perform as well as students in large classes of the other countries included in the study (Wößmann & West, 2006).

School Resources

Research on school effects on student achievement has a fairly long history, dating back to 1966, when the seminal work of Coleman and associates on the relationship of school effects relative to family effects on student achievement was published. In this report, Coleman et al. (1966) showed that, in the United States, compared with family-related factors, school-related factors had only modest effects on student achievement (Suter, 2000; Baker et al., 2002). Later, in 1972, Mosteller and Moynihan conducted a reanalysis of Coleman's study and found similar results. That is, school variance had little influence on student achievement (Mosteller & Moynihan, 1972).

Challenging these findings, Comber and Keeves (1973) conducted a much larger study that included 19 countries, among them the United States. The data came from the IEA's (1971) First International Science Study (FISS). Evidence from this study suggested that school quality (i.e., instructional practices and instructional resources) was directly related to science achievement in middle and high schools in the participating countries. Questioning such contradictory results, Coleman (1975) conducted a reanalysis of Comber and Keeves' (1973) study using a different research design and statistical data analysis method. Despite several differences in the obtained results compared to those of Comber and Keeves (1973), Coleman (1975)
concurred that school variables had significant effects on 10- and 14-year-old students' achievement in science in the six countries he studied, including the U.S.

More empirical evidence of school effects on student achievement was reported by Heyneman and Loxley (1982) when they reanalyzed the same IEA data for science education in 19 countries that had been analyzed earlier by Comber and Keeves (1973) and Coleman (1975). Findings from this reanalysis were important in that they not only confirmed the significance of the effects of schools on student achievement but, more importantly, suggested that in some developing countries, school effects could outweigh the effects of home background. For example, in India, the effects of school and teacher quality could account for up to 90% of the variance in student achievement (Heyneman & Loxley, 1982).

In an attempt to better understand the variation of school effects on student achievement in developing countries, Fuller (1987) examined a series of studies and concluded that, after accounting for the effect of student background, schools exerted a greater influence on achievement of students in developing countries than in developed countries. Three reasons were used to explain the findings. First, due to the lack of available material resources at home and at school in developing countries, the influence of social practices within classrooms may play a greater role than do material inputs, as appeared to be the case in the United States. Second, social class structures in developing countries often are less differentiated than in highly industrialized societies. Thus, advantages rooted in social class and related parenting practices tended to be less influential in developing countries. Lastly, the school institution often operates within communities where any commitment to written literacy or numeracy is a historically recent event.
Therefore, a school of even modest quality may significantly influence academic achievement (Fuller, 1987).

More recently, with an interest in finding out whether the effects of national development on the association among family SES, school resource quality, and achievement found in data from the 1970s were still evident in the mid-1990s, Baker, Goesling, and LeTendre (2002) analyzed the TIMSS 1995 data for 36 countries. In this study, national economic development was defined using the World Bank's (1994) index of gross domestic product (GDP) per capita. Family SES was a composite score of mother's and father's education level and the number of books in the home. School resource quality was a composite score representing the availability of 11 indicators: instructional material, budget for supplies, school building space, heating and lighting, instructional space, computer hardware, computer software, calculators, library materials, audiovisual resources, and library equipment. In analyzing the data, Baker et al. (2002) applied a more advanced statistical analysis method (i.e., hierarchical linear modeling) to account for the dependence of the nested data of the TIMSS 1995. Interestingly, results from this study indicated that the relative effect of school resources and family background on achievement within nations was no longer associated with national income levels in the way originally described in the studies of the 1970s. Specifically, low-income nations did not show stronger school effects than high-income nations. However, across nations, it was evident that low-income countries tended to have low achievement scores and high-income countries tended to have high achievement scores (Baker et al., 2002).

Similarly, Wößmann (2003) used TIMSS 1995 data from 39 countries with 260,000 middle school students to investigate the impact of differences in schooling
resources and educational institutions on student performance. The data were analyzed using hierarchical linear modeling (HLM). Missing data in the study were handled by imputation: if missing values came from a discrete variable, ordinary least squares (OLS) estimation was used; if missing data came from a binary variable, a probit model was employed; and if missing data were from a polytomous variable, an ordered-probit model was applied. The results from student-level estimation suggested that international differences in student performance could not be attributed to school resource differences but were considerably related to institutional differences. In particular, the institutional factors that yielded positive effects on student achievement included centralized examinations and control mechanisms, school autonomy, individual teacher influence over teaching methods, limits to teacher unions' influence on curriculum scope, scrutiny of students' achievement, and competition from private schools (Wößmann, 2003).

Instructional Limitations

Teachers' perception of the extent to which their instruction is limited by student factors, such as unwillingness of students to learn, heterogeneity of student background (e.g., family SES, language, special needs), and differences in student academic levels, is related to teacher efficacy, teacher confidence, and teacher flexibility. According to Tschannen-Moran and Hoy (2001), "A teacher's efficacy belief is a judgment of his or her capabilities to bring about desired outcomes of student engagement and learning, even among those students who may be difficult or unmotivated" (p. 783). Because efficacy affects the effort teachers invest in teaching, the goals they set, and their level of aspiration, teacher efficacy, in theory, is related to
student achievement (Tschannen-Moran & Hoy, 2001). Research studies that lend support to this position are numerous. For example, Guskey (1987) indicated that teachers with a strong sense of efficacy tend to exhibit greater willingness to experiment with new methods to better meet the needs of their students. Similarly, several researchers found that, in the face of setbacks, efficacy beliefs tend to influence teachers' instructional approaches in a myriad of positive ways: being less critical of students when they make errors (Ashton & Webb, 1986), working longer with a student who is struggling (Gibson & Dembo, 1984), exhibiting greater enthusiasm for teaching (Allinder, 1994), and having greater commitment to teaching (Coladarci, 1992).

Believing in the powerful effect of teacher efficacy on student achievement, Glazerman, Mayer, and Decker (2006) conducted a randomized experiment to compare the effects of teacher efficacy with the effects of teaching qualifications and teaching experience. In this study, efficacious teachers were defined as Teach for America (TFA) teachers. TFA teachers were recruited by the TFA program and assigned to teach in schools that serve a disadvantaged, largely minority population of students. In general, TFA teachers were recent graduates of the nation's top colleges with strong academic records (GPA of 3.5 and above). They were described as enthusiastic and committed to teaching even though they did not have education-related majors in college or any student teaching experience. The teachers in the comparison group consisted of regular teachers recruited by schools, who varied in both teaching qualifications (i.e., certified and uncertified) and teaching experience (i.e., beginning teachers and teachers with more than 5 years of teaching experience). Results from this study suggested that students who had efficacious teachers scored significantly higher in 12th grade math than
their peers who had regular teachers. The size of the impact was relatively large, corresponding to about 10 percent of a grade equivalent, or an additional month of instruction. More importantly, this finding was consistent across all the subgroups and regions included in the study (Baltimore, Chicago, Los Angeles, Houston, New Orleans, and the Mississippi Delta). The researchers concluded that the effects of teacher efficacy outweighed the effects of teacher qualifications as well as teaching experience. It is important to note, however, that in this study the researchers did not use any official instruments to assess the level of teacher efficacy of TFA teachers; rather, the researchers assumed TFA teachers were efficacious teachers, based on the definitions of the TFA program.

With regard to teacher self-confidence, substantial evidence also indicates that teachers with a high level of confidence tend to be successful with students (Darling-Hammond, 2000). Teacher self-confidence in this context is not restricted to adept knowledge of the subject matter; rather, it reflects teacher competence in multiple skill areas, such as knowing how to convey the material in different ways that can benefit students with various learning abilities, knowing how to manage the classroom so that a sufficient amount of time can be devoted to instruction, and knowing how to use different methods to assess student learning.

With schools promising to serve a much more diverse group of students to much higher standards, teacher flexibility in teaching has become an important quality for ensuring effective teaching and learning in school (Darling-Hammond, 2000). Flexibility allows teachers to move beyond their own cultural boundaries, to put themselves in the
shoes of students who are quite different from them, and to adapt instruction to students' individual learning needs (Darling-Hammond, 2000).

As educational reform movements in the United States and around the world set ambitious goals for student learning, Borko (2004) believed that flexibility would help teachers better implement the changes in classroom practices demanded by these reforms. To elaborate, Borko (2004) added that because the magnitude of these changes can be large, a great deal of learning would be required on the part of teachers. Therefore, without the flexibility of an open mind, it would be difficult for teachers to make the changes. This position is also supported by evidence from the study of Carpenter et al. (2004) that teachers who had a lower level of flexibility tended to ignore new instructional practices required by the math reform if these practices conflicted with their views of mathematics teaching. Similarly, building on this point of view, Borko and Putnam (1996) posited that, to foster students' conceptual understanding, teachers must have rich and flexible knowledge of the subject they teach. That is, in addition to the essential knowledge of the discipline, teachers must be able to use multiple ways to connect ideas and organize learning processes so as to help students construct new knowledge.

Examining teacher flexibility from an international perspective, Bryan, Wang, Perry, Wong, and Cai (2007) found that, overall, teachers from the four countries studied (Australia, United States, Mainland China, and Hong Kong) shared a similar view regarding the importance of being flexible in teaching in order to meet students' needs. As one teacher from Hong Kong stated, "The teacher should not just blindly follow the lesson plan and let the lesson go on without considering students' response" (p. 336).
Similarly, a teacher from the United States affirmed, "Being able to observe, and judge, and evaluate each student and meeting their individual needs probably is the most difficult and probably one of the most crucial parts for an effective teacher" (p. 336). The teachers from Mainland China and Hong Kong, however, cautioned that in these countries, teachers' flexibility sometimes was compromised due to the large number of students in the class and the amount of content that is required to be covered in a lesson (Bryan et al., 2007).

Based on these research findings, it can be inferred that, given a class of students with various backgrounds and characteristics, those teachers who possess a high level of efficacy, self-confidence, and flexibility are less likely to view the class as a limitation to instruction; rather, they tend to exert a higher commitment to improving student learning through the application of customized instruction to meet their students' needs. Therefore, schools would seem to be in a better position to improve their students' achievement level if they have more teachers with a high level of efficacy, self-confidence, and flexibility. One solution for schools to achieve this goal is to improve teacher efficacy, self-confidence, and flexibility by providing teachers with regular professional development programs. Schools should also emphasize these qualities when recruiting new teachers.

Summary

Student mathematics achievement at the national level has often been associated with the future economic power and security of a country. Thus, the desire to understand and identify factors that are related to increased student mathematics achievement has become a national goal in many countries around the world, including the United States.
Over the years, numerous studies have been conducted across countries to investigate the effects of contextual factors on student mathematics achievement. These contextual factors include, but are not limited to, student background, instructional practices, teacher background, and school background variables. Findings from these studies, however, have shared little consensus.

The purpose of this chapter was to provide a comprehensive review of the existing literature on the relationship between student mathematics achievement in middle school and the aforementioned contextual factors from a national as well as an international perspective. In order to allow for a broader inclusion of empirical studies, this chapter also reviewed research that examined student achievement in subject areas other than mathematics, such as science, reading, literacy, and civics, across different grade levels. Through this comprehensive review of the literature, several important findings can be highlighted.

First, in terms of variable operationalization, it seems common that achievement outcomes were reported in the form of standardized achievement scores. These standardized scores came from various data sources, including international, national, state, and local achievement assessments. With regard to the operationalization of achievement outcomes, measures of student achievement tended to be fairly consistent. For example, there appear to be two common definitions of math achievement: (a) math as an average composite score of subcontent areas such as number, data, algebra, measurement, and geometry, and (b) math as a single subdomain score, such as algebra, measurement, or problem solving. In contrast, for contextual factors, the opposite seems true. Background variables were defined variously from one study to another. In
addition, the majority of the background data came from self-reported questionnaires. For example, in some studies, time on homework referred to the time students spent on homework in all subjects per week, whereas in other studies, this variable was defined as the time students spent on mathematics homework per week. Still, in other studies, time on homework was operationalized as the time students typically spent on homework per day. Yet, in other studies, time on homework was an aggregated variable consisting of homework frequency and homework length. Thus, it is essential for research consumers to interpret research findings with caution due to differences in measure operationalization.

Second, there seems to be little consensus among the studies reviewed in this chapter regarding potential contextual factors that could improve student math achievement. In addition, the strength and direction of the relationships between contextual factors and math achievement appeared to be inconsistent from one study to another. For example, whereas Coleman (1966) suggested that family-related factors exerted stronger positive effects than school-related factors on student achievement, Comber and Keeves (1973), Heyneman and Loxley (1982), and Fuller (1987) argued the opposite was true, especially if the studies were conducted in developing countries. Similarly, the relationship between time on homework and achievement has been hotly debated. Cooper (1989) found that the time students spent on homework was positively related to greater student achievement. In contrast, evidence from Rodriguez's (2004) study suggested that students who did no homework each day performed slightly higher on average than those students who spent more than one hour a day on math homework. In addition, Trautwein (2007) indicated that the relationship between homework time and achievement was only moderate at the
76 school level and was negative at the student level. Likewi se, whereas DarlingHammond (2000) strongly believed that teacher prepara tion to teach was one of the most important determinants of student achievement, Glazerman, Mayer, and Decker (2006) demonstrated that having no preparation to t each did not prevent tale nted and enthusiastic individuals who neither majored in educat ion nor had prior teaching experiences from contributing positively to student achievement. Despite discrepancies in research findings, all of the researchers acknowledged th at their studies were limited to certain extents and thus, their findings should be caref ully interprete d within context. Similarly, all the researchers agreed that further rese arch was needed in order to provide more evidence regarding the relationship between c ontextual factors and student achievement. Third, the majority of studies reviewed in this chapter were guided by a correlational research design because the focus of these studies was the relationships between contextual factors and student ach ievement. However, these studies differed from one to another in several respects, including data sources, samples selection, variables of interest, data management (e.g., treatment of missing data and use of sample weight to account for complex, largescale surv ey design), and methods of data analysis. As an illustration, with regards to variables of interest, different approaches were used to identify the final set of variables for the study. In some studies, the researchers determined potential predictors of student achievement by examining prior research. In other studies, only variables that met certa in statistical sign ificance criteria (e.g., correlation coefficients of the variable with ach ievement had to be la rger than twice their standard errors or average standardized re gression coefficients across samples exceeded .05) could be selected. 
Still, in some studies, variables of interest were only included if
they had sufficient data across samples. Yet, in other studies, the list of variables could differ across samples. In terms of data analysis, several common statistical methods, such as multiple regression analysis, structural equation modeling (SEM), and hierarchical linear modeling (HLM), were employed across studies to examine the effects of contextual factors on student achievement. For studies conducted before the 1990s, multiple regression analysis and SEM tended to be used more frequently, and for studies conducted after the 1990s, HLM appeared to be used more frequently. The reason behind this shift in the method of data analysis over time is that HLM is a newer and more advanced statistical method that allows researchers to conceptualize the effects of contextual factors on student achievement as occurring at multiple levels, reflecting the nested structure of educational data (i.e., students nested within teachers, and teachers nested within schools). Other statistical tests, such as the independent t-test, analysis of variance (ANOVA), and multivariate analysis of variance (MANOVA), were also used to examine differences across subsamples (e.g., gender groups, grade levels, and regions) and across time. There were also several studies that applied qualitative approaches to analyze the data. The areas of research that seemed to attract more qualitative studies include opportunity to learn in terms of curriculum coverage, teacher quality (i.e., preparation to teach, readiness to teach, and professional development), and instructional activities in the classroom. Thus, the data sources used in these studies came from interviews with study participants, classroom observations, instructional goals and curricula, and participants' reflection journals or field notes.
Fourth, there appears to be bias in the inclusion of countries in international research studies that examine the relationships between contextual factors and student achievement. Specifically, these research studies tended to focus more on developed countries than on developing countries. Countries that were frequently included in international research studies include the United States, Canada, Germany, Japan, Korea, and Hong Kong. The lack of representation of developing countries in international research is not desirable because it is very possible that the relationships between contextual factors and student achievement that were significant in these countries may not be significant in developing countries due to substantial differences in countries' economic status. In fact, this contention was well supported by the study of Fuller (1987), in which the researcher showed that the inclusion of developing countries such as India, Chile, or South Africa actually changed the strength and direction of the effects of school-related factors on student achievement from little or nonexistent (Coleman, 1966) to strong and positive (Fuller, 1987). Similarly, as Werf, Creemers, Jong, and Klaver (2000) suggested, in Western countries large differences in student achievement were noted between students from different socioeconomic backgrounds; in developing countries, however, such differences were much smaller. Thus, it might not be realistic for developed countries to set a goal of improving the achievement level of all students across different SES backgrounds, but it is possible for developing countries to aim to improve the achievement level of all the students in their educational systems. Finally, from this examination of the literature, the importance of continuing research in the area of international achievement assessment is clear. Repeatedly, international studies demonstrated that they significantly contributed to the advancement of the field of
educational research by challenging existing beliefs and research findings, by illuminating new ideas and insights into how to improve educational systems from both a theoretical and a methodological perspective, and by offering a host of opportunities for educators, researchers, and policy makers around the world to share and learn from each other's experience and expertise. Theoretically, results from these international studies are important in that they provide insights into the extent to which the effects of contextual factors such as family resources and school resources on student achievement can change over time or function differently across educational systems (Baker et al., 2002; Coleman, 1966; Heyneman & Loxley, 1982, 1983; Woessmann, 2003). This is because the effects of many of the contextual factors are influenced by national economic status, which, in turn, is subject to change across time. Methodologically, the large scale of international data, as well as the level of variance among countries, provides excellent opportunities for new and improved statistical methodologies to be developed and tested.
CHAPTER THREE

METHOD

Chapter Three is organized into the following major sections: the purpose, research questions, research design, and data analysis.

Purpose of the Study

The purpose of this study was to investigate correlates of eighth-grade students' math achievement in TIMSS 2003 in four countries. Specifically, within each of the countries included in the study, a series of two-level models was constructed using contextual and background factors at both the student and the classroom/teacher/school levels to account for the variance in eighth-grade students' math achievement within and between schools.

Research Questions

This study was driven by the following set of research questions:

1) To what extent are student background variables (i.e., gender, self-confidence in learning math, valuing of math, time on math homework, and tutoring in math) associated with TIMSS 2003 eighth-grade math scores in each country?

2) To what extent are home resources variables (i.e., availability of a calculator, dictionary, computer, and desk for student use) associated with TIMSS 2003 eighth-grade math scores in each country?
3) To what extent are instructional variables (i.e., opportunity to learn, activities in math lessons, amount of homework assigned, and instructional time) associated with TIMSS 2003 eighth-grade math scores in each country?

4) To what extent are teacher-related variables (i.e., preparation to teach, readiness to teach, and professional development) associated with TIMSS 2003 eighth-grade math scores in each country?

5) To what extent are school-related variables (i.e., class size, school resources for math instruction, and math instructional limitations) associated with TIMSS 2003 eighth-grade math scores in each country?

Research Design

Data Source

Data from the Trends in International Mathematics and Science Study (TIMSS) 2003, a study conducted by the International Association for the Evaluation of Educational Achievement (IEA) and maintained by the National Center for Education Statistics (NCES), were used in this study. The TIMSS 2003 database comprised student achievement data in mathematics and science as well as student, teacher, school, and curricular background data for 48 countries at eighth grade and 26 countries at fourth grade (Martin, 2005). For this study, the following databases from TIMSS 2003 for eighth grade were used: student math achievement, student background, math teacher background, and school background.
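For readers less familiar with HLM notation, the kind of two-level model referred to in the Purpose section can be sketched in its generic form. The symbols below follow standard HLM notation; X_ij and W_j stand in for whichever student- and school-level predictors a given model includes, not for specific TIMSS variables:

```latex
\begin{aligned}
\text{Level 1 (student $i$ in school $j$):}\quad
  & Y_{ij} = \beta_{0j} + \beta_{1j} X_{ij} + r_{ij},
  && r_{ij} \sim N(0, \sigma^2) \\
\text{Level 2 (school $j$):}\quad
  & \beta_{0j} = \gamma_{00} + \gamma_{01} W_j + u_{0j},
  && u_{0j} \sim N(0, \tau_{00}) \\
  & \beta_{1j} = \gamma_{10} + u_{1j}
\end{aligned}
```

The within-school variance $\sigma^2$ and the between-school variance $\tau_{00}$ are the components that an unconditional model partitions before any predictors are entered.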
82 2003). TIMSS was originally conducted in 1995 and continued every four years, in 1999, 2003, and most recently in 2007. The ultimate go al of TIMSS was to provide trend data on studentsÂ’ math and science achievement fr om an international perspective (TIMSS, 2003). In 2007, there were more than 60 c ountries participating in the TIMSS study (TIMSS, 2007). Until the 2007 data are rele ased, the TIMSS 2003 database had the largest and most recent international stude nt achievement data in mathematics and science (TIMSS, 2003). In addition, the TI MSS 2003 database for eighthgrade math included rich and timely information about student, curriculum, teacher, and school background that could be used to examine the relationship between contextual and background factors and student math achieve ment within and across countries. Sampling Procedures TIMSS 2003 used a twostage sampling de sign to select representative samples of students in each country. At the first stage, at least 150 schools were randomly sampled. However, because the school samp le was designed to optimize the student sample rather than provide an optimal sample of schools, large schools tended to occur in the sample more frequently than in the school population. This sampling design was also known as probabilities propor tional to size (Martin, 2005). At the second stage, one class was randomly sampled in each school. This resulted in a sample size of at least 4,000 students per country. The selection of teachers and school principals was determined by the selection of students because they were linked to the students (Marti n, 2005). However, countries could, with prior approval, adapt the sampling design to local circum stances. For example, countries could incorporate in their sampling design importa nt reporting variables (e.g., urbanicity or
school type) as stratification variables in their sampling design. At the second stage, countries could also randomly sample one or two classes in each of their schools. Some countries took advantage of this option and sampled more schools and classes, resulting in larger sample sizes (Martin, 2005).

Data Collection

Data collection in TIMSS 2003 ran on two schedules, according to whether participating countries were located in the northern or the southern hemisphere. In countries in the southern hemisphere, where the school year typically ends in November or December, the assessment was conducted in October or November 2002. In countries in the northern hemisphere, where the school year typically ends in June, the assessment was conducted in April, May, or June 2003 (Martin, 2005). Each participating country was in charge of carrying out data collection and maintaining quality control procedures. Training manuals for data collection were created for test administrators, detailing standardized procedures such as test security, timing, and rules for answering students' questions. In addition, each country nominated one or more persons, such as retired school teachers, to serve as quality control monitors. These monitors were provided with two-day training sessions (Martin, 2005).

Sample

For this study, two developed and two developing countries were selected from the TIMSS 2003 eighth-grade math database. Two criteria were applied in the sample selection. First, all the countries in the TIMSS 2003 database were stratified into two categories: developed countries and developing countries. The World Bank's (2003) world development indicators were used to classify countries that participated in the
TIMSS 2003 assessment into two categories: developing countries and developed countries (The World Bank, 2003). In 2003, the World Bank used more than 500 indicators to measure development outcomes in 152 countries. Some examples of the World Bank's (2003) world development indicators include population dynamics, labor force structure, employment, national growth patterns, structure of trade, private sector development, investment climate, business environment, stock markets, financial efficiency, and integration into the global economy, among other specific, quantified targets for reducing poverty and achieving progress in health, education, and the use of environmental resources (The World Bank, 2003). This step was done to ensure that the study included representative samples of both developed and developing countries. A list of TIMSS 2003 countries grouped by country status (i.e., developing and developed countries) can be found in Appendix A. Next, for each category, the two countries with the largest number of schools were selected. This step was done to ensure that the study had sufficient level-two units (i.e., schools) for examining the variance in eighth-grade math achievement within each country. Table 1 provides a list of the countries that met the selection criteria and their sample descriptions. It is important to note, however, that due to the complexity of the TIMSS 2003 database, several decisions regarding data management were made during the sample selection process, resulting in some reduction of both level-1 units (i.e., students) and level-2 units (i.e., schools) in the selected countries. For example, whereas in most of the countries only one math classroom was randomly sampled to participate in TIMSS 2003, in a few countries more than one classroom was selected. Similarly, whereas most of the schools had more than 10 students per classroom that participated in the TIMSS
2003, a few schools had fewer than 10 students. In order to maintain a similar data structure across countries, schools with fewer than 10 students were removed. In addition, for those schools with more than one classroom sampled, only the classroom with the largest number of students was kept for subsequent analyses. Furthermore, because only one class was selected for each school, the number of math teachers was equal to the number of schools.

Table 1. Summary of the Samples Included in the Study

                                  No. of students           No. of schools/   Class size
Country status  Country name      Total    Female   Male    teachers          M    Min   Max
Developed       Canada            8,473    4,287    4,186   354               24   10    38
                U.S.A.            8,008    4,106    3,902   241               33   10    102
Developing      Egypt             7,095    3,329    3,765   217               33   12    36
                South Africa      8,927    4,470    4,355   253               35   10    56

Note: 102 students in South Africa and 1 student in Egypt did not report their gender.

Country Profiles

In an effort to better understand the countries that were selected for this study, a brief profile was created for each of them. The types of information that appeared relevant to this study included geographic location and size, population, ethnic groups, languages, political system, the role of government in schools, economic system, and, finally, educational issues. It is important to note that the information in these profiles was drawn from multiple sources, and priority was given to information collected in 2007-2008. However, in some instances where 2007-2008 data were not available, the most recent information was included.

Canada

Geographically, Canada is located in North America with a total area of 9,984,670 square kilometers. The estimated population of Canada in 2008 was
33,212,696. The Canadian population consists of several ethnic groups: British Isles origin (28%), French origin (23%), other European (15%), Amerindian (2%), other, mostly Asian, African, and Arab (6%), and mixed background (26%). There are two official languages in Canada, with English spoken by 59.3% and French by 23.2% of the people; the remaining 17.5% speak other languages (Central Intelligence Agency, 2008). Canada's political system is a constitutional monarchy that is also a parliamentary democracy and a federation. The chief of state is the Queen, who is represented by the Governor General, and the head of government is the Prime Minister. Canada is an affluent, high-tech industrial society in the trillion-dollar class with a market-oriented economic system. In 2007, GDP per capita for Canada was $38,200 (Central Intelligence Agency, 2008). Education in Canada is provided, funded, and overseen by federal, provincial, and local governments. Education is compulsory up to the age of 16 in every province except Ontario and New Brunswick, where the compulsory age is 18. In general, the educational system in Canada is divided into Elementary (Primary School), followed by Secondary (High School) and Post-Secondary (University and College) (Education in Canada, 2008). In recent years, the Canadian education system has focused its attention on several issues and problems, such as deprofessionalization; the dominance of a political-economic imperative in the formulation of state educational policy (accountability, privatization, markets, choice, and decentralization); multiculturalism and diversity; restructuring and retrenchment; and the demographic changes facing all industrialized nations (Education in Canada, 2008).
The United States

Geographically, the United States is located in North America with a total area of 9,826,630 square kilometers. The estimated population of the United States in 2008 was 303,824,646. The United States is made up of several ethnic groups: Caucasian (81.7%), African American (12.9%), Asian (4.2%), Amerindian and Alaska Native (1%), and Native Hawaiian and other Pacific Islander (0.2%). The official language of the United States is English, which is spoken by more than 82% of the people. Other languages spoken in the United States include Spanish (10.7%), other Indo-European languages (3.8%), Asian and Pacific Island languages (2.7%), and other (0.7%) (Central Intelligence Agency, 2008). The United States' political system is a Constitution-based federal republic with a strong democratic tradition. The chief of state is the President, and the head of government is also the President. In the United States, the President appoints the Cabinet. The United States is known as the largest and most technologically powerful economy in the world, with a market-oriented economic system. In 2007, GDP per capita for the United States was $46,000 (Central Intelligence Agency, 2008). Education in the United States is generally divided into Elementary (Primary School), followed by Junior High and High School and Post-Secondary (University and College). School attendance is mandatory at the elementary, junior high, and high school levels. The ages for compulsory education vary by state, beginning at age five to eight and ending at age fourteen to eighteen. Students are placed in year groups known as grades, beginning with first grade and culminating in twelfth grade. A growing number of states now require school attendance until the age of 18 (Education in the United States, 2008).
Education in the United States is provided mainly by the government, with control and funding coming from three levels: federal, state, and local. In 2005, the United States ranked first in the world in terms of annual spending per student on its public schools (approximately $11,000 per student) (Education in the United States, 2008). In recent years, major educational issues in the United States have centered on curriculum, funding, and control. Of critical importance, because of its enormous implications for education and funding, is the No Child Left Behind Act of 2002. Under this Act, schools are held accountable for meeting the learning standards that are set by the state and school districts in the areas of reading, writing, math, and science (Education in the United States, 2008).

Egypt

Geographically, Egypt is located in North Africa with a total area of 1,001,450 square kilometers. The estimated population of Egypt in 2008 was 81,713,517. Egyptians account for 98% of the population; the remainder consists of Berber, Nubian, Bedouin, and Beja (1%) and Greek, Armenian, and other European, primarily Italian and French (1%). The official language of Egypt is Arabic; however, English and French are also widely understood by the educated classes (Central Intelligence Agency, 2008). Egypt is a republic. The chief of state is the President, and the head of government is the Prime Minister. In Egypt, the Cabinet is appointed by the President. According to the Central Intelligence Agency (2008), Egypt's economy depends mainly on agriculture, media, petroleum exports, and tourism. Recently, the government has
struggled to prepare the economy for the new millennium through economic reform and massive investment in communications and physical infrastructure. In 2007, GDP per capita for Egypt was $5,400 (Central Intelligence Agency, 2008). Education in Egypt is highly centralized and is divided into three stages: Basic Education, Secondary Education, and Post-Secondary Education. Basic education includes six years of primary school and three years of intermediate school. Promotion from primary to intermediate school is determined by examination scores. In 1981, the Egyptian government issued a law stating that basic education is free and compulsory for all students ages 6 through 14. Beyond this stage, education depends on the student's ability (Education in Egypt, 2008). In Egypt, schools are referred to as government schools or private schools. There are two types of government schools: (1) Arabic schools, which deliver the governmental national curriculum in the Arabic language, and (2) experimental language schools, which teach most of the government curriculum in English and add French as a second foreign language. As for private schools, there are three types: (1) ordinary schools, which are quite similar to government schools but pay more attention to students' personal needs and to school facilities; (2) language schools, which teach most of the government curriculum in English and add French or German as a second foreign language; and (3) religious schools, which are religiously oriented and whose curricula differ from those of the other schools. In Egypt, the enrollment rate for girls is significantly lower than for boys. Overall student attendance rates are also low (Education in Egypt, 2008).

South Africa
Geographically, South Africa is located at the southern tip of Africa with a total area of 1,219,912 square kilometers. The estimated population of South Africa in 2008 was 43,786,115. In South Africa, black Africans account for 79% of the population, whites for 9.6%, coloureds for 8.9%, and Indians/Asians for 2.5%. Many languages are currently spoken in South Africa: IsiZulu by 23.8% of the people, IsiXhosa by 17.6%, Afrikaans by 13.3%, Sepedi by 9.4%, English by 8.2%, Setswana by 8.2%, Sesotho by 7.9%, Xitsonga by 4.4%, and others by 7.2% (Central Intelligence Agency, 2008). South Africa is a republic. The chief of state is the President, and the head of government is also the President. In South Africa, the Cabinet is appointed by the President. According to the Central Intelligence Agency (2008), South Africa has a market economy with an abundant supply of natural resources and well-developed financial, legal, communications, energy, and transport sectors. However, economic problems remain from the apartheid era (e.g., poverty and lack of economic empowerment among the disadvantaged groups). In 2007, GDP per capita for South Africa was $10,600 (Central Intelligence Agency, 2008). Education in South Africa has three levels: General Education and Training, from grade 0 through grade 9; Further Education and Training, from grade 10 through grade 12; and Higher Education and Training, for technical schools, colleges, and universities. Under the South African Schools Act of 1996, education is compulsory for all South Africans from age 7 (grade 1) to age 15, or grade 9. In South Africa, the government provides a national framework for school policy and spends approximately 20% of its expenditure on education annually. Students can choose to attend public schools, which
are funded by the government, or private schools, where they have to pay for their education (Education in South Africa, 2006). The major educational issues faced by South Africa in recent years include imbalances in education remaining from the apartheid legacy, increasing dropout rates for girls, and discrepancies in educational opportunities between rural and urban schools. Recent statistics suggested that among the South African population, only 14% of black Africans have an education of high school or higher, whereas 40% of Indians and 65% of whites have an education of high school or higher (Education in South Africa, 2006).

Instruments

Eighth-Grade Mathematics Assessment Survey

The TIMSS 2003 eighth-grade math assessment was a successor of the TIMSS 1995 and 1999 eighth-grade math assessments, and thus the curriculum framework and test booklet design used in 1995 and 1999 were also used in 2003. This was to ensure reliable measurement of trends in math teaching and learning over time. However, because a large number of the items on the TIMSS 1995 and 1999 assessments were released for public use after each cycle of the assessment, new items were developed to replace the retired items in the TIMSS 2003 eighth-grade math assessment (Martin, 2005). According to Martin (2005), of the 426 score points available in the entire 2003 eighth-grade mathematics and science assessment, 47 came from items also used in 1995, 102 from items also used in 1999, and 267 from items used for the first time in 2003.

Test booklet. The TIMSS assessment framework employed a matrix-sampling technique that assigned each assessment item to one of a set of item blocks, and then assembled student test booklets by combining the item blocks according to a balanced
design in order to achieve broad subject matter coverage (Martin, 2005). For the TIMSS 2003 eighth-grade math assessment, a total of 194 items were categorized into 14 blocks, labeled M1 through M14. Blocks 1 through 6 contained secure items from earlier TIMSS assessments (TIMSS 1995 and 1999) to measure trends, and blocks 7 through 14 contained new replacement items. These 14 blocks of items were then distributed across 12 student booklets. Each booklet consisted of two to four blocks of items, resulting in a different number of items in each booklet. Each student was randomly assigned one booklet. A summary of the TIMSS 2003 eighth-grade mathematics assessment booklet matrix is presented in Table 2.

Table 2. TIMSS 2003 Eighth-Grade Math Assessment Booklet Assembly Matrix

            Block
Booklet     M1  M2  M3  M4  M5  M6  M7  M8  M9  M10  M11  M12  M13  M14
1
2
3
4
5
6
7
8
9
10
11
12

As a result of the matrix-sampling design, the number of assessment items by booklet and domain varied considerably, ranging from 26 to 60 items per booklet. Table 3 displays a summary of TIMSS 2003 eighth-grade math item breakdowns by assessment domain and booklet.
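The matrix-sampling mechanism described above can be illustrated with a short sketch. The block-to-booklet map and per-block item counts below are hypothetical placeholders, not the actual TIMSS 2003 assignments; the sketch only shows how random booklet assignment gives each student a small subset of blocks while the sample as a whole covers all of them.

```python
import random

# Hypothetical matrix-sampling setup: the block-to-booklet map and item
# counts are illustrative placeholders, not the real TIMSS 2003 design.
ITEMS_PER_BLOCK = {f"M{i}": 14 for i in range(1, 15)}  # 14 item blocks

# Each (illustrative) booklet combines two to four blocks.
BOOKLETS = {
    1: ["M1", "M2", "M7"],
    2: ["M3", "M4", "M8", "M9"],
    3: ["M5", "M6", "M10"],
}

def assign_booklets(student_ids, seed=2003):
    """Randomly assign one booklet number to each student."""
    rng = random.Random(seed)
    return {s: rng.choice(sorted(BOOKLETS)) for s in student_ids}

def booklet_length(booklet):
    """Number of items a student receiving this booklet answers."""
    return sum(ITEMS_PER_BLOCK[b] for b in BOOKLETS[booklet])

assignments = assign_booklets(range(100))
# No single student sees all blocks, but the sample jointly covers them.
covered = {b for k in assignments.values() for b in BOOKLETS[k]}
```

Because booklets differ in how many blocks they combine, students receive different numbers of items, which is why (as noted under Raw scores below in the original text) raw scores are comparable only within a booklet.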
Table 3. Number of Items by Domain and Booklet in TIMSS 2003 Eighth-Grade Math Assessment

                                  Booklet
Domain         1   2   3   4   5   6   7   8   9  10  11  12
Algebra       11  12  14  17   7  12   3   6   8   6   7   8
Data           9   8   5   8   4   9   6   3   2   5   4   6
Geometry       8  11   9   6  12   9   5   4   5   4   5   5
Measurement   10   7  11  11   9   5   3   6   8   4   5   4
Number        17  22  18  13  23  17  10   9   8   7   8   7
Total Items   55  60  57  55  55  52  27  28  31  26  29  30

Subject content areas. The TIMSS 2003 eighth-grade math assessment contained five content areas: number, algebra, measurement, geometry, and data. The number content domain consisted of understandings and skills related to whole numbers, fractions and decimals, integers, ratio, proportion, and percent. The major topic areas in algebra were patterns, algebraic expressions, equations and formulas, and relationships. The measurement content domain included attributes and units, tools, techniques, and formulas. For geometry, five major topics were included: lines and angles, two- and three-dimensional shapes, congruence and similarity, locations and spatial relationships, and symmetry and transformations. Finally, the data content domain consisted of four topic areas: data collection and organization, data representation, data interpretation, and uncertainty and probability (Martin, 2005).

Item writing and development. The development of the TIMSS 2003 eighth-grade math assessment was a collaborative process spanning more than two years, from September 2000 to March 2003, and involving math educators and development specialists from all participating countries (TIMSS, 2003). The eighth-grade math assessment development was guided by the assessment framework and specifications, which focused on two dimensions: content domains and cognitive domains. There were five content domains: number, algebra, measurement, geometry, and data. There were
four cognitive domains: knowing facts and procedures, using concepts, solving routine problems, and reasoning. With support and training from the TIMSS International Study Center, National Research Coordinators (NRCs) from participating countries contributed a large pool of items for review and field testing. The International Study Center established a math task force to manage the item development process. To help review, select, and revise items for the assessment and to ensure their mathematical accuracy, the International Study Center convened the Math Item Review Committee, an international committee of prominent mathematics experts nominated by participating countries and representing a range of nations and cultures. As a result of this item development process, more than 2,000 draft items covering a wide array of topics and a range of cognitive domains and item types were submitted to the item pool for further review by the Math Item Review Committee. Because the items were developed in English and translated into 34 languages by the participating countries, both the Math Item Review Committee and the National Research Coordinators were important in identifying any items that might prove difficult to translate consistently (TIMSS, 2003). Of the new items developed, 190 were selected for the field test in 41 countries. International item analysis of the results from the field test was used to inform the review and selection of items for the main survey. The final TIMSS 2003 eighth-grade assessment contained a total of 194 items (115 newly developed items and 79 trend items). Of these items, 128 were multiple-choice items and 66 were constructed-response items (TIMSS, 2003).
Item types. Two item types were used in the TIMSS 2003 eighth-grade math assessment: multiple choice and constructed response. For constructed-response items, students were required to construct a written response rather than select a response from a set of options as in multiple-choice items. Correct answers to multiple-choice items were credited one point. Constructed-response items were awarded one, two, or three points, depending on the nature of the task and the skills required to complete it. Up to two-thirds of the total number of points represented by all the items came from multiple-choice items (Martin, 2005).

Translation, cultural adaptation, and verification. The TIMSS 2003 data collection instruments (achievement tests and background questionnaires) were originally developed and prepared in English and subsequently translated by the participating countries into 34 national languages of instruction and cultural contexts. To control the quality of the translated versions, each translation went through a rigorous verification process that included verification by an international translation company, review by the International Study Center, verification of the item translations at the national centers, and a check by International Quality Control Monitors. The goal of this translation, adaptation, and verification process was to ensure that translated instruments were accurate and internationally comparable (Martin et al., 2004).

Reliability estimates. Item analysis and review were conducted internally for the TIMSS 2003 achievement data in order to examine and evaluate the psychometric characteristics of each achievement item in all participating countries. For all items, regardless of item format, multiple statistics were computed to yield information about the reliability of an item. These statistics include: (1) the number of students
that responded in each country, (2) the difficulty level (the percentage of students that answered the item correctly), (3) the Rasch one-parameter IRT item difficulty index, (4) the discrimination index (the point-biserial correlation between success on the item and the total score), (5) the distracter index (the percentage of students that selected each of the distracters), (6) the reliability score (the percentage of exact agreement between two independent scorers), (7) the item-by-country interaction index (i.e., when a high-scoring country performs poorly on an item on which other countries are doing well, there is said to be an item-by-country interaction), (8) the scoring reliability (i.e., a particular student response should receive the same score, regardless of scorer), (9) the within-country scoring reliability (i.e., a random sample of at least 200 student responses to each item per country was selected to be scored independently by two scorers), (10) the trend-item scoring reliability (i.e., the percentage of exact agreement between scorers across years for the same item), and (11) the cross-country scoring reliability (the percentage of exact agreement among scorers in 20 English-speaking countries) (TIMSS, 2003).

Of these statistics, the international means of the item difficulties and item discriminations served as guides to the overall statistical properties of the items. For the TIMSS 2003 eighth-grade math assessment, the international mean of the item difficulties was .66, the mean of the Rasch difficulties was 1.46, and the mean of the item discriminations was .44, which indicates appropriately reliable assessment items. In addition, the international average of exact percent agreement across items was high, 99%. Similarly, scorer reliability across English-speaking countries was high, with the percent exact agreement averaging 96% across the 20 math items (TIMSS, 2003).
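The classical statistics at the heart of this list, the difficulty level and the point-biserial discrimination index, can be computed directly from a scored 0/1 response matrix. The sketch below is a generic illustration (the function name and toy data are the author's of this edit, not TIMSS code), using only the Python standard library:

```python
from statistics import mean, pstdev

def item_statistics(responses):
    """Classical item statistics from a scored 0/1 response matrix
    (rows = students, columns = items).  Returns two lists:
    difficulty[j]     = proportion of students answering item j correctly;
    discrimination[j] = point-biserial correlation between success on
                        item j and the total score, as described above."""
    n_items = len(responses[0])
    totals = [sum(row) for row in responses]
    difficulty, discrimination = [], []
    for j in range(n_items):
        item = [row[j] for row in responses]
        difficulty.append(mean(item))
        # The point-biserial is simply the Pearson correlation between
        # the dichotomous item score and the continuous total score.
        mi, mt = mean(item), mean(totals)
        cov = mean((x - mi) * (y - mt) for x, y in zip(item, totals))
        discrimination.append(cov / (pstdev(item) * pstdev(totals)))
    return difficulty, discrimination
```

A difficulty of .66 in this convention means 66% of students answered correctly; a higher point-biserial means the item separates high and low scorers more sharply.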
Reported achievement scores. TIMSS 2003 reported trends in student achievement both in the general area of math and in the major subject-matter content areas. Because each student responded to only part of the assessment, these parts had to be combined for an overall picture of the assessment results for each country. Using item response theory (IRT) methods, individual student responses to math items were placed on common scales that link to TIMSS results from 1995 and 1999, making it possible to track progress in math achievement since then. A three-parameter IRT model was applied to multiple-choice items, which were dichotomously scored (correct or incorrect). For constructed-response items with 0, 1, or 2 available score points, a generalized partial credit model was used. The IRT scaling method produced a score by averaging the responses of each student to the items that he or she took in a way that takes into account the difficulty and discriminating power of each item (Martin, 2005).

Raw scores. For the TIMSS 2003 eighth-grade math assessment, raw scores were computed by adding the number of points obtained by each student over all the items in the student's test booklet. Multiple-choice items were scored 1 for correct answers and 0 for incorrect answers, and constructed-response items were scored 0 for incorrect answers, 1 for partially correct answers, and 2 for correct answers. Because the raw score depended on the number of items in the student's test booklet, and because the number of items varied from booklet to booklet, the raw scores can be used only to compare students' performance on the same booklet in the same year (Martin, 2005). Table 4 shows the maximum number of score points for eighth-grade math by booklet and by content domain.
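The three-parameter model mentioned above for multiple-choice items can be sketched as an item response function. This is a minimal illustration, not the TIMSS estimation code; the 1.7 scaling constant is the conventional choice that brings the logistic close to the normal ogive and is an assumption here, not a value quoted in the text:

```python
import math

def p_3pl(theta, a, b, c):
    """Three-parameter logistic (3PL) IRT model: the probability that a
    student of ability theta answers a dichotomously scored item
    correctly, given discrimination a, difficulty b, and
    pseudo-guessing parameter c."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))
```

At theta = b the probability is halfway between the guessing floor c and 1, which is why b is read as the item's difficulty on the ability scale.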
Table 4. Maximum Number of Score Points in TIMSS 2003 Eighth-Grade Math Assessment

  Booklet:       1   2   3   4   5   6   7   8   9  10  11  12
  Number        19  22  19  14  25  17  11  10   8   7   8   7
  Algebra       12  12  15  22   7  15   4   6   9   6   7   9
  Measurement   13   7  11  13  11   5   4   8   8   4   5   4
  Geometry       8  11  11   7  14  10   5   4   5   4   5   5
  Data          11   8   6   8   4  12   8   3   2   8   5   6
  Total scores  63  60  62  64  61  59  32  31  32  29  30  31

Standardized raw scores. In order to improve the utility of students' achievement scores, raw scores were standardized by booklet to provide a simple score that could be used in comparisons across booklets in the same year. The standardized score had a weighted mean of 50 and a weighted standard deviation of 10 within each booklet in a country (Martin, 2005).

National Rasch scores. Based on the one-parameter Rasch model with maximum likelihood (ML) estimation, the national Rasch scores were standardized to have a mean of 150 and a standard deviation of 10 within each country. The main purpose of the national Rasch scores was to provide a preliminary measure of overall math achievement that could be used as a criterion variable in studies of item discrimination prior to the TIMSS 2003 IRT scaling. Because each country has the same mean score and dispersion, these scores should not be used for international comparison (Martin, 2005).

Plausible values. Due to the TIMSS 2003 assessment design, each student responded only to the items in one test booklet, a subset of the item pool. In order to derive, for each student, an estimate of the overall score he or she would have achieved had he or she completed the entire assessment, TIMSS 2003 used a sophisticated psychometric scaling technique (known as item response theory scaling with conditioning and multiple imputation) to generate imputed scores for those items that were not administered to the
student. Because there was some error inherent in the imputation process, the TIMSS database provided five separate plausible values for each student on each of the scales. In other words, each student had five plausible values for his or her achievement in overall math and five plausible values for each content domain. Overall plausible values were standardized with a mean of 500 and a standard deviation of 100 and may be compared across test administrations. Content-domain plausible values, however, should not be compared across test administrations because, when standardized, they have different means and standard deviations (Martin, 2005).

Background Surveys

The background questionnaires are based on the TIMSS 2003 Contextual Framework, which specifies the major characteristics of the educational and social contexts to be studied and identifies the areas to be addressed in the background questionnaires. The background questionnaires were developed by an expert committee composed of international educators and measurement specialists with much input from the National Research Coordinators (Martin, 2005). The student questionnaires were administered at participating schools by test administrators, who also administered the student test booklets. As for the teacher and school questionnaires, the school coordinators distributed these background questionnaires to the corresponding teachers and school principals and made sure that the questionnaires were returned completed.

Eighth-grade mathematics student background survey. The student mathematics questionnaire had a total of 18 forced-choice questions that sought information about the students' demographic background, home resources, their experiences in learning
mathematics, and their perceptions of the school environment. It is worthy of note, however, that due to cultural differences, some questions or question options in the student questionnaire were adapted or even removed to fit the national contexts. For example, the question "Do you have any of these items at home?" had a total of 16 options. Of these options, four were fixed (i.e., calculator, computer, study desk, and dictionary), and the remaining options were adapted to include common country-specific items. Some countries (e.g., Australia, Bulgaria, Chile) might also opt to include fewer than 16 options in this question.

Eighth-grade mathematics teacher survey. The teacher questionnaire included 28 forced-choice questions intended to gather information about the teachers' preparation and professional development, their pedagogical activities, and the implemented curriculum. As in the student background questionnaire, due to cultural differences, some questions or question options in this questionnaire were adapted or even removed to fit the national contexts. For example, the question "What requirements did you have to satisfy in order to become a mathematics teacher at grade 8?" was not administered in the U.K., and thus data for this variable were not available for the U.K.

School survey. The school questionnaire contained 18 questions asking school principals or headmasters to provide information about the school contexts for teaching and learning. As with the previous background questionnaires, not all of the questions in the school questionnaire were administered in all of the countries that participated in TIMSS 2003, resulting in a lack of data for some school-related variables in some countries.
Variables

The selection of variables for this study was guided by the conceptual model (Carroll, 1963), the review of existing literature on contextual factors related to student math achievement (see Chapter Two), and the practical implications of the variables for policy issues. The dependent variable of the study is Overall Mathematics Score, an IRT-based score, which was calculated by averaging five plausible subtopic scores: algebra, number, geometry, measurement, and data.

The independent variables are five groups of factors: (1) student background, (2) home resources, (3) instructional practices, (4) teacher background, and (5) school background. Each of these groups of factors was precisely defined using existing variables in the TIMSS 2003 database. For example, student background was measured by five variables: gender, self-confidence in math, valuing of math, time on math homework, and extra math lessons. Home resources was represented by three variables indicating the availability of a calculator, a computer, and a desk for the student's use at home. Instructional practices had nine indicators: opportunity to learn number, opportunity to learn algebra, opportunity to learn measurement, opportunity to learn geometry, opportunity to learn data, amount of homework assigned, content-related activities in math lessons, instructional practice-related activities in math lessons, and instructional time. Teacher background was represented by preparation to teach, ready to teach number, ready to teach algebra, ready to teach measurement, ready to teach geometry, ready to teach data, and math-related professional development. Finally, school background was measured by class size, school resources for math instruction, and teacher's perception of math instructional limitations due to student factors.
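The construction of the dependent variable described above is a simple average of the five subtopic scores. The sketch below shows that computation; the function name and the sample values are illustrative only, not taken from the TIMSS database:

```python
from statistics import mean

def overall_math_score(algebra, number, geometry, measurement, data):
    """Dependent variable of the study: Overall Mathematics Score,
    computed by averaging the five plausible subtopic scores, as
    described in the text.  Argument names mirror the five content
    domains."""
    return mean([algebra, number, geometry, measurement, data])
```

For example, a student with subtopic scores of 500, 510, 490, 505, and 495 would receive an overall score of 500 on the same 500/100 reporting metric.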
Table 5 presents a mapping between the variables selected for this study and the variables identified in Carroll's (1963) conceptual model of school learning. It is worthy of note that because the variables used in this study were selected from an existing secondary database, as opposed to being created for primary research, some loose connections between these variables and those from the model were anticipated.

Table 5. Mapping of Variables in Carroll's Model With Variables in the Study

Carroll's model: 1) Aptitude (the amount of time needed to learn the task under optimal instructional conditions)
  Study variables: 1) Student background (self-confidence in learning math, valuing math, time on math homework, tutoring in math, and gender); 2) Home resources (availability of calculator, dictionary, computer, and desk at home for the student's use)

Carroll's model: 2) Ability to understand instruction; 3) Perseverance (the amount of time the learner is willing to engage actively in learning); 4) Opportunity to learn (time allowed for learning)
  Study variables: 3) Instructional practices (opportunity to learn in terms of topic coverage before the time of the test, amount of homework assigned, activities in math lessons, and average math instructional hours per year)

Carroll's model: 5) Quality of instruction (the extent to which instruction is presented so that no additional time is required for mastery beyond that required in regard to aptitude)
  Study variables: 4) Teacher background (preparation to teach, ready to teach, and professional development); 5) School background (class size for math instruction, school resources for math instruction, and teachers' perceptions of math instructional limitations due to student factors)

Table 6 presents an explicit description of the study's contextual and background variables and their respective indicators.
In this table, a composite variable was marked as "TIMSS derived variable" if it was provided in the TIMSS database and as "computed by researcher" if it was created by the researcher. One exception occurred when the researcher renamed the TIMSS derived variable from "Math classes with few or no instructional limitation due to student factor" to "teacher's perception of math
instructional limitations due to student factors" to improve the meaningfulness of the variable name.

Table 6. Description of Contextual and Background Variables

Student Background

Gender of student
  Are you a girl or a boy? (1 = girl, 2 = boy)

Student self-confidence in learning math (TIMSS derived variable)
  Composite variable ranging from 1-3 (high, medium, low), created from four items. How much do you agree with these statements about learning mathematics? (4-point scale: agree a lot, agree a little, disagree a little, disagree a lot)
  1) I usually do well in math
  2) Math is more difficult for me than for many of my classmates
  3) Math is not one of my strengths
  4) I learn things quickly in math

Student valuing math (TIMSS derived variable)
  Composite variable ranging from 1-3 (high, medium, low), created from seven items. How much do you agree with these statements about learning mathematics? (4-point scale: agree a lot, agree a little, disagree a little, disagree a lot)
  1) I would like to take more math in school
  2) I enjoy learning math
  3) I think learning math will help me in my daily life
  4) I need math to learn other school subjects
  5) I need to do well in math to get into the university of my choice
  6) I would like a job that involved using math
  7) I need to do well in math to get the job I want

Time on math homework
  On a normal school day, how much time do you spend before or after school doing mathematics homework? (5-point scale: 1 = no time, 2 = less than one hour, 3 = 1-2 hours, 4 = more than 2 but less than 4 hours, 5 = 4 or more hours)

Tutoring/Extra math lessons
  During this school year, how often have you had extra lessons or tutoring in mathematics that is not part of your regular class?
  (4-point scale: 1 = every or almost every day, 2 = once or twice a week, 3 = sometimes, 4 = never or almost never)

Home Resources

Home resources for learning (computed by researcher)
  Composite variable, a sum of students' responses to three variables. Do you have any of these items at home? (2-point scale: 1 = yes, 2 = no)
  1) Calculator
  2) Computer (excluding Xbox, PlayStation, or TV/video game computer)
  3) Desk

Instructional Practices

Opportunity to learn number (TIMSS derived variable)
  Composite variable, an average percent of students whose teachers checked option 1
(mostly taught before this year) and option 2 (mostly taught this year) for the 10 items of the number domain. Details of these 10 items can be found in Appendix B. The following list includes the main topics addressed by the TIMSS mathematics test. Choose the response that best describes when students in the TIMSS class have been taught each topic. If a topic was taught half this year and half before this year, please choose "Mostly taught this year." (3-point scale: 1 = mostly taught before this year, 2 = mostly taught this year, 3 = not yet taught or just introduced)

Opportunity to learn algebra (TIMSS derived variable)
  Composite variable, an average percent of students whose teachers checked option 1 (mostly taught before this year) and option 2 (mostly taught this year) for the 6 items of the algebra domain. Details of these 6 items can be found in Appendix B. The following list includes the main topics addressed by the TIMSS mathematics test. Choose the response that best describes when students in the TIMSS class have been taught each topic. If a topic was taught half this year and half before this year, please choose "Mostly taught this year." (3-point scale: 1 = mostly taught before this year, 2 = mostly taught this year, 3 = not yet taught or just introduced)

Opportunity to learn measurement (TIMSS derived variable)
  Composite variable, an average percent of students whose teachers checked option 1 (mostly taught before this year) and option 2 (mostly taught this year) for the 8 items of the measurement domain. Details of these 8 items can be found in Appendix B. The following list includes the main topics addressed by the TIMSS mathematics test. Choose the response that best describes when students in the TIMSS class have been taught each topic.
If a topic was taught half this year and half before this year, please choose "Mostly taught this year." (3-point scale: 1 = mostly taught before this year, 2 = mostly taught this year, 3 = not yet taught or just introduced)

Opportunity to learn geometry (TIMSS derived variable)
  Composite variable, an average percent of students whose teachers checked option 1 (mostly taught before this year) and option 2 (mostly taught this year) for the 13 items of the geometry domain. Details of these 13 items can be found in Appendix B. The following list includes the main topics addressed by the TIMSS mathematics test. Choose the response that best describes when students in the TIMSS class have been taught each topic. If a topic was taught half this year and half before this year, please choose "Mostly taught this year." (3-point scale: 1 = mostly taught before this year, 2 = mostly taught this year, 3 = not yet taught or just introduced)

Opportunity to learn data (TIMSS derived variable)
  Composite variable, an average percent of students whose teachers checked option 1 (mostly taught before this year) and option 2 (mostly taught this year) for the 8 items of the data domain. Details of these 8 items can be found in Appendix B. The following list includes the main topics addressed by the TIMSS mathematics test. Choose the response that best describes when students in the TIMSS class have been taught each topic. If a topic was taught half this year and half before this year, please choose "Mostly taught this year." (3-point scale: 1 = mostly taught before this year, 2 = mostly taught this year, 3 = not yet taught or just introduced)

Amount of homework assignment (TIMSS derived variable)
  Composite variable with a 3-point scale: 1 = high, 2 = medium, 3 = low. This composite variable was created using three variables.
  1) Do you assign mathematics homework to the TIMSS class? (2-point scale: 1 = yes, 2 = no)
  2) How often do you usually assign mathematics homework to the TIMSS class? (3-point scale: 1 = every or almost every lesson, 2 = about half the lessons, 3 = some lessons)
  3) When you assign mathematics homework to the TIMSS class, about how many minutes do you usually assign? (Consider the time it would take an average student in your class.) (5-point scale: 1 = fewer than 15 minutes, 2 = 15-30 minutes, 3 = 31-60 minutes, 4 = 61-90 minutes, 5 = more than 90 minutes)

Content-related activities in math lessons (computed by researcher)
  Composite variable computed by averaging student responses to the following 4 items. How often do you do these things in your mathematics lessons? (4-point scale: 1 = every or almost every lesson, 2 = about half the lessons, 3 = some lessons, 4 = never)
  1) We practice adding, subtracting, multiplying, and dividing without using a calculator
  2) We work on fractions and decimals
  3) We interpret data in tables, charts, or graphs
  4) We write equations and functions to represent relationships

Instructional practice-related activities in math lessons (computed by researcher)
  Composite variable computed by averaging student responses to the following 9 items. How often do you do these things in your mathematics lessons? (4-point scale: 1 = every or almost every lesson, 2 = about half the lessons, 3 = some lessons, 4 = never)
  1) We work together in small groups
  2) We relate what we are learning in mathematics to our daily life
  3) We explain our answers
  4) We decide on our own procedures for solving complex problems
  5) We review our homework
  6) We listen to the teacher give a lecture-style presentation
  7) We work problems on our own
  8) We begin our homework in class
  9) We have a quiz or test

Average math instructional hours per year (TIMSS derived variable)
  Composite variable computed from the following variables.
  From the school survey:
  1) How many days per year is your school open for instruction for eighth-grade students?
  2) How many instructional days are there in the school week (typical calendar week from Monday through Saturday) for eighth-grade students? (7-point scale from none to 6 days)
     2A) Number of full days (over 4 hours)?
     2B) Number of half days (4 hours or less)?
  3) To the nearest half-hour, what is the total instructional time in a typical full day (excluding lunch break, study hall, and after-school activities) for eighth-grade students? (6-point scale: 1 = 4 hours or less, 2 = 4.5 hours, 3 = 5 hours, 4 = 5.5 hours, 5 = 6 hours, 6 = 6.5 hours or more)
  4) How many full instructional days?
  From the teacher survey:
  1) How many minutes per week do you teach math to the TIMSS class?

Teacher Background

Preparation to teach math content (computed by researcher)
  During your postsecondary education, what was your major or main area(s) of
study?
  1) Mathematics: 1 = yes, 2 = no
  2) Education - Mathematics: 1 = yes, 2 = no

Ready to teach number (computed by researcher)
  Composite variable computed by averaging math teachers' responses to the 2 items of the number domain. Details of these 2 items can be found in Appendix C. Considering your training and experience in both mathematics content and instruction, how ready do you feel you are to teach each topic at eighth grade? (3-point scale: 1 = very ready, 2 = ready, 3 = not ready)

Ready to teach algebra (computed by researcher)
  Composite variable computed by averaging math teachers' responses to the 4 items of the algebra domain. Details of these 4 items can be found in Appendix C. Considering your training and experience in both mathematics content and instruction, how ready do you feel you are to teach each topic at eighth grade? (3-point scale: 1 = very ready, 2 = ready, 3 = not ready)

Ready to teach measurement (computed by researcher)
  Composite variable computed by averaging math teachers' responses to the 4 items of the measurement domain. Details of these 4 items can be found in Appendix C. Considering your training and experience in both mathematics content and instruction, how ready do you feel you are to teach each topic at eighth grade? (3-point scale: 1 = very ready, 2 = ready, 3 = not ready)

Ready to teach geometry (computed by researcher)
  Composite variable computed by averaging math teachers' responses to the 4 items of the geometry domain. Details of these 4 items can be found in Appendix C. Considering your training and experience in both mathematics content and instruction, how ready do you feel you are to teach each topic at eighth grade? (3-point scale: 1 = very ready, 2 = ready, 3 = not ready)
Ready to teach data (computed by researcher)
  Composite variable computed by averaging math teachers' responses to the 4 items of the data domain. Details of these 4 items can be found in Appendix C. Considering your training and experience in both mathematics content and instruction, how ready do you feel you are to teach each topic at eighth grade? (3-point scale: 1 = very ready, 2 = ready, 3 = not ready)

Math-related professional development (computed by researcher)
  In the past two years, have you participated in professional development in any of the following?
  1) Math content: 1 = yes, 2 = no
  2) Math pedagogy/instruction: 1 = yes, 2 = no
  3) Math curriculum: 1 = yes, 2 = no
  4) Math assessment: 1 = yes, 2 = no
  5) Problem solving/critical thinking: 1 = yes, 2 = no

School Background

Class size for math instruction (TIMSS derived variable)
  How many students are in the TIMSS class? Four categories were derived from the teachers' responses: 1 = 1-24 students, 2 = 25-32 students, 3 = 33-40 students, 4 = 41 or more students

School resources for math instruction (TIMSS derived variable)
  Composite variable with a 3-point scale: 1 = high, 2 = medium, 3 = low. This
composite variable was computed from the school principals' responses to the following 10 items. Is your school's capacity to provide instruction affected by a shortage or inadequacy of any of the following? (4-point scale: 1 = none, 2 = a little, 3 = some, 4 = a lot)
  1) Instructional materials (e.g., textbooks)
  2) Budget for supplies (e.g., paper, pencils)
  3) School buildings and grounds
  4) Heating/cooling and lighting systems
  5) Instructional space (e.g., classrooms)
  6) Computers for mathematics instruction
  7) Computer software for mathematics instruction
  8) Calculators for mathematics instruction
  9) Library materials relevant to mathematics instruction
  10) Audio-visual resources for mathematics instruction

Teacher's perception of math instructional limitations due to student factors (TIMSS derived variable)
  Composite variable with a 3-point scale: 1 = high, 2 = medium, 3 = low. This composite variable was computed by averaging teachers' responses to the following six items. In your view, to what extent do the following limit how you teach the TIMSS class? (5-point scale: 1 = not applicable, 2 = not at all, 3 = a little, 4 = some, 5 = a lot)
  1) Students with different academic abilities
  2) Students who come from a wide range of backgrounds (e.g., economic, language)
  3) Students with special needs (e.g., hearing, vision, or speech impairment; physical disabilities; mental or emotional/psychological impairment)
  4) Uninterested students
  5) Low morale among students
  6) Disruptive students

The variables instructional practices, teacher background, and school background in this study can be manipulated by state, district, and school policies.
It is possible to create more effective mathematics lessons, to increase opportunities to learn, to better utilize school resources, and to adjust class size and mathematics instructional practices so as to stimulate the teaching and learning of mathematics in a school. Although the predictor variables student background and home resources appear to be more difficult to manipulate through policy, they provide educators and parents with important information about student-related factors that can influence mathematics achievement.
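Many of the composite variables in Table 6 are multi-item scales whose internal consistency is conventionally summarized with Cronbach's alpha. For reference, a generic computation of alpha from a respondents-by-items score matrix is sketched below (the toy data are invented for illustration, not TIMSS responses):

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for a scale.  `rows` is a list of respondents,
    each a list of k item scores.  alpha = k/(k-1) * (1 - sum of the
    item variances / variance of the total scores), using sample
    variances throughout."""
    k = len(rows[0])
    items = list(zip(*rows))                      # transpose to items
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)
```

Alpha approaches 1 when the items rise and fall together across respondents, and drops toward 0 when they are unrelated, which is why a scale built from only three loosely related indicators can show the modest values reported in the next section.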
Reliability of Composite Predictor Variables

Of the contextual variables listed in Table 6, 18 were composite variables created through principal factor analysis with promax rotation. Specifically, self-confidence was created from four variables related to students' reported level of confidence in learning math. Through an examination of the scree plot, eigenvalues, and interpretability of variables, a single factor was retained. All the items with factor pattern coefficients larger than .30 were included in the computation of the composite variable, which had a Cronbach's alpha of .73. Appendix D provides details of item factor pattern coefficients.

The composite variable, valuing of math, was constructed from seven items related to how students perceive the importance of learning math. A single factor was retained as the result of examining the scree plot, eigenvalues, factor pattern coefficients greater than or equal to .30, and interpretability of items. This composite variable had a Cronbach's alpha of .79. Details of item factor pattern coefficients can be found in Appendix D.

The next composite variable is home resources for learning, which was measured by three items related to the availability of study aids (i.e., calculator, computer, and desk) for students to use at home. An evaluation of the scree plot, eigenvalues, factor pattern coefficients greater than .30, and interpretability of items suggested that these items were unidimensional. However, Cronbach's alpha for this composite variable was relatively modest, .44. Details of item factor pattern coefficients can be found in Appendix D.

Five composite variables (i.e., opportunity to learn algebra, number, geometry, measurement, and data) were constructed from a series of 45 items related to teachers' responses as to when during the school year each of the math topics was taught to the
TIMSS class. Specifically, opportunity to learn number was measured by 10 items, opportunity to learn algebra by six items, opportunity to learn measurement by eight items, opportunity to learn geometry by 12 items, and opportunity to learn data by eight items. Using similar factor analysis criteria and setting the number of factors equal to 1, these five composite variables were essentially unidimensional. All the items measuring each of the composite variables had factor pattern coefficients larger than .30, and the obtained Cronbach's alphas for these composite variables were relatively high, ranging from .74 to .91. Details of item factor pattern coefficients can be found in Appendix D.

Two composite variables were created from nine items related to students' reported activities in math lessons. Instructional practice-related activities in math lessons were measured by five items, and content-related activities in math lessons were measured by four items. Similar criteria were used to determine item inclusion in the factor (i.e., examination of the scree plot, eigenvalues, factor pattern coefficients greater than or equal to .30, and interpretability of items). As suggested by the results of the factor analysis, both of these composite variables were essentially unidimensional. Instructional practice-related activities in math lessons had a Cronbach's alpha of .55, and content-related activities in math lessons had a Cronbach's alpha of .60. Details of item factor pattern coefficients can be found in Appendix D.

Five composite variables (i.e., ready to teach number, algebra, measurement, geometry, and data) were created from a series of 18 items related to teachers' reported level of readiness to teach each of the math topics. Specifically, ready to teach number was measured by two items and each of the remaining composite variables by four items. Results of factor analyses suggested that these composite variables were essentially
unidimensional. Factor pattern coefficients of all the items were greater than .30, and the Cronbach's alpha for each of the composite variables was relatively high, from .71 to .86. Specific details of item factor pattern coefficients can be found in Appendix D.

The next composite variable, professional development, was measured by five items related to various types of teacher training in the area of math instruction and assessment. Based on an examination of the scree plot, eigenvalues, factor pattern coefficients greater than .30, and interpretability of items, a single factor was retained. All five items were included in the calculation of the composite variable, professional development, which had a Cronbach's alpha of .78. A summary of item factor pattern coefficients for this composite variable can be found in Appendix D.

The composite variable, school resources for math instruction, was constructed from 10 items related to the availability of various school resources (e.g., textbooks, supplies, computers, reference materials, and physical spaces and conditions) for math instruction. Through an examination of the scree plot, eigenvalues, and interpretability of variables, a single factor was retained. All the items were included in the computation of the composite variable, which had a Cronbach's alpha of .92. Details of item factor pattern coefficients can be found in Appendix D.

Finally, a factor analysis was conducted for six items related to the extent to which student factors (e.g., differences in academic abilities, background, and special needs) could affect math instruction. Results of the analysis suggested that a single factor should be retained. All the items had factor pattern coefficients larger than .30 and thus were included in a composite variable labeled teacher's perception of math instructional limitations due to student factors. Cronbach's alpha for this composite variable was .81. A
summary of item factor pattern coefficients for this composite variable can be found in Appendix D.

Content Experts' Validation of the Selected Variables

In order to ensure that the selected variables for this study were important and that the way these variables were defined was in line with the view of teachers, educators, and content experts in the field of math education, the researcher set up personal interviews with two professors of mathematics education at the College of Education, University of South Florida. A brief summary of these interviews is provided below.

Interview with Content Expert One

The interview with Content Expert One, an associate professor in math education at the College of Education, University of South Florida, revolved around the topic of how students' math achievement was defined in the United States in particular, and whether such a definition was universally accepted. Content Expert One pointed out that math achievement could be represented by both overall math and separate content areas of math. Universally, however, math achievement had frequently been referred to as overall math. This was largely because in most research studies related to student math achievement, students were often asked about their confidence in learning overall math, and not in separate content areas of math. In terms of variables that were perceived to be positively related to math achievement, Content Expert One commented that she did not know of any variables that had consistent relationships with math achievement. However, educators and educational policy makers had recently called for more research concerning the relationships of opportunity to learn, students' self-confidence, time on math
homework, class size, instructional time, activities in math lessons, teachers majoring in math, and teachers with math pedagogical skills. Content Expert One also added that there were several variables that showed little relationship with math achievement. These included having a desk or a dictionary at home, valuing math, and tutoring (Content Expert One, personal interview, May 2, 2007).

Interview with Content Expert Two

The same questions that were asked in the interview with Content Expert One were used again in the interview with Content Expert Two, another associate professor in math education at the College of Education, University of South Florida. Content Expert Two stated that, generally, math educators across countries agreed that there were five important content areas of math: number, measurement, algebra, geometry, and data. Although globally math achievement was often referred to as overall math, students did perform differently across content domains. Thus, math achievement should be reported both by overall math and by separate content areas. In response to the question regarding relationships of contextual variables and math achievement, Content Expert Two asserted that she had read many research studies about math achievement but could not recall seeing any good predictor of math achievement. She could not recall any relationship between math scores and a variable that showed a consistent pattern across time or contexts. Because it was impossible to tell exactly the size, or even the range, of these relationships, there was no clear guidance in terms of setting up hypotheses. Currently, math educators focus on investigating the impact of instructional practices (or opportunity to learn) and how well math lessons are delivered in terms of content knowledge, math teaching pedagogy, understanding of students' needs, teaching
preparation in terms of the number of math courses taken in undergraduate study, and time on professional development, on student math achievement (Content Expert Two, personal interview, May 4, 2007).

Follow-up Interviews with Content Experts

Prior to analyzing data for this dissertation, the researcher followed up with both Content Experts One and Two to obtain their opinions regarding the final list of variables for this study. Before the interviews, both content experts were provided with the list of variables and two questions: (1) Are these variables appropriate to include in the context of this study? and (2) Are there any variables not in the list that should be included? If yes, what are they and why? The interviews took place in summer 2008. Consistently, both content experts indicated that the study had included the important variables required to address the purpose of the study. However, several changes were suggested in order to improve the study from the mathematics content perspective.

1) Activities in math lessons would be better measured by two composite variables: content-related activities in math lessons (four items) and instructional practice-related activities in math lessons (five items).

2) Ready to teach overall math should be re-created as five composite variables that reflect the five content domains. Specifically, ready to teach number measured by two items, ready to teach algebra by four items, ready to teach measurement by four items, ready to teach geometry by four items, and ready to teach data by four items.
3) Change the name of the variable professional development to math-related professional development.

4) Change the name of the variable preparation to teach to preparation to teach math content.

5) Opportunity to learn overall math should be re-created as five composite variables that reflect the five math content domains. Specifically, opportunity to learn number measured by 10 items, opportunity to learn algebra by six items, opportunity to learn measurement by eight items, opportunity to learn geometry by 13 items, and opportunity to learn data by eight items.

As a result of these follow-up interviews, changes to the final list of variables were made as suggested by the content experts. Details of the variables listed in Table 6 reflect these changes.

Data Analysis

Secondary Data Analysis

The use of secondary data for research has become more common among social and behavioral science researchers. However, the choice to use secondary data must be made with consideration for its advantages and disadvantages (Rosenberg et al., 2006).

Advantages

The primary advantage of using secondary data for research is the conservation of time and expense, because researchers can eliminate several steps in the research process, such as development of the measurement instruments, obtaining a research sample, the collection of the data, and the preparation of data for analysis by statistical packages
(Rosenberg et al., 2006). This is especially true when the target population for the research is national or international and the research questions require large sample sizes in order to obtain the power needed to make generalizations. More importantly, many of these large-scale secondary databases are of high quality and are publicly accessible because they are maintained by well-established governmental organizations such as the National Center for Education Statistics (NCES), the National Science Foundation (NSF), and the Association for Institutional Research (AIR).

Another reason for researchers to make use of the excellent sources of existing large-scale secondary databases relates to the timeliness and richness of information provided in these databases (Martin, 2005). Researchers can easily find an educational longitudinal database or a trend database that includes data on more than a thousand variables collected over years. Thus, using the same database, researchers can conduct different studies to answer different research questions (Kiecolt & Nathan, 1985). Also, through archived secondary data sources, researchers can conduct reanalysis studies of past secondary databases using more advanced statistical methods in order to validate research findings produced by previous studies (Baker, 1992).

In terms of implications, from a theoretical perspective, the great level of variance in large-scale databases makes it possible to test competing theoretical frameworks and revise hypotheses (Kiecolt & Nathan, 1985). From a measurement perspective, secondary data analysis can support the refinement and improvement of instruments through reliability analysis or confirmatory factor analysis (Hilton, 1992). Finally, from a methodological perspective, large-scale secondary data can be useful for
delineating and developing new and improved statistical methods to address pending research areas of concern (Kiecolt & Nathan, 1985).

Disadvantages

Despite the many advantages of using secondary data in research, there are a number of drawbacks associated with secondary data analysis that are important for researchers to consider. One of the major limitations of secondary data analysis is that the data are often collected for purposes other than the purpose of the secondary analysis (Gonzales, 2001). Additionally, although many existing secondary databases are publicly available for researchers to use, the time and costs associated with learning about the secondary data source, including survey items, data structure, and technical documentation, can be excessive (Hofferth, 2005). As pointed out by Gonzales (2001), Hahs-Vaughn (2005), and Moriarty et al. (1999), the two reasons that tend to keep researchers from using secondary data include the complex design of large-scale secondary data (e.g., multiple stages, clustering, and unequal sampling probabilities with nonuniform sampling weights) and the lack of sufficient skills required to effectively manage, manipulate, and analyze large-scale data. Another limitation relates to data quality, including missing data, how latent constructs are defined, and the quality of supporting documentation for the data source (Rosenberg, Greenfield, & Dimick, 2006). Last but not least is the problem pertaining to the availability and accessibility of advanced statistical methodologies and specialized statistical software that must be used for analysis of complex, large-scale secondary data (Gonzales, 2001). As Gonzales (2001) put it:
The application of sampling weights in simple statistical analyses is well understood, and methods for correctly estimating standard errors from clustered samples are increasingly available and acknowledged. In the case of more complex or more novel analytical methods, however, such as hierarchical modeling, the use of weights is difficult and not well understood, and software is not well developed in this respect. (Gonzales, 2001, p. 93)

Hierarchical Linear Modeling

Hierarchical data structures are very common in the social and behavioral sciences. A hierarchy consists of lower-level observations nested within higher level(s) (Kreft & Leeuw, 2004). Examples include students nested within classes and classes nested within schools, residents nested within neighborhoods, or repeated measurements nested within persons. Because of such naturally occurring clusters, researchers often collect data on variables at both the lower level and the higher level(s) of the hierarchy. For instance, in the TIMSS 2003 data, there are variables describing students (e.g., gender, self-confidence, attitudes towards learning, etc.), as well as variables describing schools (e.g., type of school, school size, school resources, etc.). Multilevel models were developed for analyzing hierarchically structured data (Kreft & Leeuw, 2004). The primary purpose of multilevel models is to capture the specific relationships between the lower-level and higher-level variables and the outcome variable. The following sections discuss the major advantages of using multilevel models, rather than traditional regression models, to analyze hierarchically structured data, as well as the theoretical and statistical assumptions of multilevel models.
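The degree of clustering that motivates multilevel modeling is commonly summarized by the intraclass correlation coefficient (ICC), the proportion of outcome variance that lies between groups. The following is a minimal sketch, not part of the dissertation's SAS/HLM workflow, that estimates the ICC for students nested in schools with a one-way ANOVA (method-of-moments) estimator; the scores are invented for illustration only.

```python
# Illustrative only: estimate the intraclass correlation (ICC) for
# students nested in schools via a one-way ANOVA decomposition.
# Scores are invented; the design is balanced for simplicity.
schools = {
    "A": [10, 12, 11, 13],
    "B": [20, 22, 21, 23],
    "C": [30, 32, 31, 33],
}

scores = [y for ys in schools.values() for y in ys]
grand_mean = sum(scores) / len(scores)
n_per_school = 4

# Between-school and within-school sums of squares
ss_between = sum(
    n_per_school * ((sum(ys) / len(ys)) - grand_mean) ** 2
    for ys in schools.values()
)
ss_within = sum(
    (y - sum(ys) / len(ys)) ** 2 for ys in schools.values() for y in ys
)

ms_between = ss_between / (len(schools) - 1)
ms_within = ss_within / (len(scores) - len(schools))

# Method-of-moments estimates of the variance components
var_between = (ms_between - ms_within) / n_per_school  # analogous to tau_00
var_within = ms_within                                 # analogous to sigma^2

icc = var_between / (var_between + var_within)
print(round(icc, 3))  # -> 0.984: nearly all variance is between schools
```

With an ICC this large, ignoring the school level would badly understate standard errors, which is exactly the problem the next section describes.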
Advantages of Multilevel Models

One problem associated with traditional statistical approaches is related to analysis of data at the aggregate level. Some researchers tend to collect data from individuals and then aggregate the data to gain insights into the groups to which those individuals belong. This approach is technically flawed because inferences about groups are incorrectly drawn from individual-level information (Luke, 2004). In other cases, data collected at the group level are disaggregated to the individual level. This approach is problematic because, by ignoring group information, the model violates the independence-of-observations assumption, leading to misestimated standard errors (standard errors are smaller than they should be).

In multilevel models, predictor variables are conceptually defined at different levels, and the hypothesized relations between these predictor variables operate across different levels (Luke, 2004). Thus, unlike conventional regression approaches, the data in multilevel models can be analyzed in the context of each level and in relation to the other levels (i.e., within and between groups). It is essential to realize, however, that in multilevel models, characteristics or processes occurring at a higher level of analysis tend to influence characteristics or processes at a lower level (Luke, 2004).

Another advantage of using multilevel models over other traditional approaches addresses the statistical or structural properties of the data. A major assumption of single-level, ordinary least squares (OLS) models is that the observations (and hence the error terms) are independent from one another. For hierarchically structured data, however, such an assumption is not valid because individuals who belong to the same group or context tend to have similar characteristics, and thus error terms tend to be
correlated. If OLS is used for clustered data with correlated errors, the resulting standard errors are smaller than they should be, resulting in a greater chance of committing Type I errors. In contrast, by accounting for both within- and between-group variability at two or more levels simultaneously, multilevel models can estimate appropriate, unbiased standard errors (Luke, 2004). In addition, multilevel models allow for estimating cross-level effects that cannot be conceptually defined in conventional single-level regression models.

Finally, unlike traditional statistical approaches where sample size must meet specific criteria, multilevel models are powerful in that they can handle relatively small sample sizes. Although a larger sample size will likely increase the power of the study, multilevel models will be robust if the higher-level sample size is at least 20 (Hox, 1995). According to simulation studies by Kreft (1996), there is adequate statistical power with 30 groups of 30 observations each, 60 groups of 25 observations each, or 150 groups of 5 observations each. These results indicate that the number of groups has more effect on statistical power in multilevel models than the number of observations, although both are important.

Assumptions of Multilevel Models

In order to ensure the validity of inferences based on the results of hierarchical linear models, the following assumptions pertaining to the adequacy of model specification and the consistency of parameter estimates in hierarchical linear models must be carefully tested (Raudenbush & Bryk, 2002):

1) Conditional on the level-1 variables, the within-group errors (r_ij) are normally distributed and independent, with a mean of 0 in each group and equal variance across groups.
2) Any level-1 predictors of the outcome variable that are excluded from the model, and thereby relegated to the error term (r_ij), are independent of the level-1 variables that are included in the model (covariance equal to 0).

3) In the random-intercept-only model, each group has a residual effect, u_0j. The distribution of these level-2 residual effects is normal, with mean 0 and variance τ_00.

4) The effects of any level-2 predictors excluded from the model for the intercept are independent of the other level-2 variables (covariance equal to 0).

5) The level-1 error, r_ij, is independent of the level-2 residual effects, u_0j (covariance equal to 0).

6) Any level-1 predictors that are excluded from the level-1 model, and as a result relegated to the error term, r_ij, are independent of the level-2 predictors in the model (covariance equal to 0). In addition, any level-2 predictors that are excluded from the model, and as a result relegated to the level-2 random effects, u_0j, are uncorrelated with the level-1 predictors (covariance equal to 0).

Of these assumptions, assumptions 2, 4, and 6 focus on the relationships among the variables included in the structural part of the model (the level-1 and level-2 predictor variables) and those factors relegated to the error terms, r_ij and u_0j. Misspecification of the model can cause bias in estimating level-1 and level-2 fixed effects. Assumptions 1, 3, and 5 are related to the random part of the model (i.e., r_ij and u_0j). Their tenability affects the consistency of the estimates of the standard errors of level-2 fixed effects, the
accuracy of the level-1 random effects, the variances for level 1 and level 2, and the accuracy of hypothesis tests and confidence intervals (Raudenbush & Bryk, 2002).

Analyses of TIMSS 2003 Database

Sampling Weights

Because TIMSS 2003 utilizes a complex sampling design (see the earlier section on sampling procedures), it is necessary to apply sampling weights when conducting analyses of the data in order to obtain unbiased population estimates (Martin, 2005). The sampling weights reflect the probability of selection of each school and student, taking into account any stratification or disproportional sampling of subgroups, and include adjustments for nonresponse (Martin, 2005).

Because the students within each country were selected using a probability sampling procedure, the probability of each student being selected as part of the sample is known. The sampling weight is the inverse of this selection probability. In a properly selected and weighted sample, the sum of the weights for the sample approximates the size of the population. In the TIMSS 2003 study, each student's sampling weight, TOTWGT, is a composite of six factors: three weighting factors corresponding to the stages of the sampling design (i.e., school, class, and student), and three adjustment factors for nonparticipation at each of these stages. The use of TOTWGT ensures that the various subgroups that constitute the sample are properly and proportionally represented in the computation of population estimates (Martin, 2005).

When student and teacher data are to be analyzed together, the use of teacher weights is recommended. The math teacher weight, MATWGT, should be used to obtain estimates regarding students and their teachers. This weight is computed by
dividing the sampling weight for the student by the number of math teachers that the student has (Martin, 2005). When conducting analyses at the school level, the use of the school weight is recommended. The school weight, SCHWGT, is the inverse of the probability of selection of the school, multiplied by its corresponding nonparticipation adjustment factor (Martin, 2005).

Managing Multiple Databases

Once the four countries were selected, further managing and screening of the data were conducted in order to prepare the data for subsequent statistical analyses. Figure 1 displays a flow chart for this data management process. As indicated in Step 3 of the flowchart, the screening and subsetting process of data management resulted in some reduction of sample sizes. For example, of the four countries included in this study, the United States is the only country where a second classroom was sampled in most schools. Therefore, in order to keep the data structures similar across countries, the researcher decided to keep only one classroom per school in this country. The classroom selected was the one with the largest number of students. In addition, because a small number of schools in each country had fewer than 10 students per school, a decision was made to eliminate these schools, so that the final schools in the study had a minimum of 10 students.
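The two screening rules in Step 3 amount to a simple two-pass filter: keep the largest sampled classroom in each school, then drop schools that fall below the 10-student minimum. The sketch below illustrates the logic only; the records and field names are invented, not the actual TIMSS file layout.

```python
# Illustrative sketch of the Step 3 screening rules: keep the largest
# sampled classroom per school, then drop schools with fewer than 10
# students. Records and field names are invented for the example.
records = [
    {"school": "S1", "class": "C1", "n_students": 24},
    {"school": "S1", "class": "C2", "n_students": 27},  # larger class: kept
    {"school": "S2", "class": "C1", "n_students": 8},   # under 10: dropped
    {"school": "S3", "class": "C1", "n_students": 15},
]

# Pass A: retain only the classroom with the largest enrollment per school
largest = {}
for rec in records:
    best = largest.get(rec["school"])
    if best is None or rec["n_students"] > best["n_students"]:
        largest[rec["school"]] = rec

# Pass B: eliminate schools with fewer than 10 students
kept = [rec for rec in largest.values() if rec["n_students"] >= 10]

print(sorted((r["school"], r["class"]) for r in kept))
# -> [('S1', 'C2'), ('S3', 'C1')]
```

In the actual study this filtering was performed on the merged TIMSS files within SAS; the sketch only makes the selection rules explicit.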
Figure 1. Flowchart for Managing Multiple Databases from TIMSS 2003

Treatment of Missing Data

Due to the complex and large-scale survey design of TIMSS 2003, missing data were unavoidable and needed to be addressed before any statistical analyses could be performed. In this study, an examination of missing data was conducted separately for each country at both the student level (level 1) and the classroom/school level (level 2). Next, listwise deletion was employed as the missing data treatment method to eliminate all the missing data at both level 1 and level 2. This step was conducted because in two-level HLM analyses, parameter estimates are computed based on complete cases.

The listwise deletion method was selected for this study because it is the simplest and most common method of handling missing data. Also, evidence from various research
studies suggested that, in comparison with other methods of handling missing data such as pairwise deletion, listwise deletion tends to produce the least biased estimates (Allison, 2001; Roth, 1994). In a recent study of the impact of missing data in large-scale assessments, Phan and Kromrey (2006) found that statistical results produced by the listwise deletion method were comparable with those produced by the multiple imputation method, an increasingly promising method of missing data treatment (SAS, 2006; Mullis, 2001; Kromrey & Hines, 1994).

Univariate Analysis

Descriptive statistics such as weighted frequencies and weighted means were computed for the criterion and predictor variables at the student level and school level, by country, using SAS v9.1.3 (SAS Institute Inc., 2005). In addition, figures and tables were used to display distributions of both criterion and predictor variables for each of the countries included in the study.

Bivariate Analysis

The bivariate relationships between level-1 predictor variables and level-2 predictor variables were also examined by individual country using SAS v9.1.3 (SAS Institute Inc., 2005).

Hierarchical Linear Modeling Analysis

Because the TIMSS 2003 data were reported by students nested in classes, where one class was sampled for each selected school, the analysis of the data was accomplished by the use of hierarchical linear modeling (HLM), a multilevel multiple regression technique useful for analyzing nested data (Raudenbush & Bryk, 2002). In order to proceed with HLM, the number of levels in the data needed to be specified and
models needed to be constructed. The TIMSS 2003 data were best described in two levels: student level (level 1), and school and teacher level (level 2). Level 1 was represented by student background and home resources variables, which were unique across students. Level 2 was represented by instructional practices, teacher background, and school background variables, because each school had only one mathematics class sampled.

Although a third level might be incorporated in multilevel models, the analysis consisted of a series of two-level models for several reasons. First, the countries participated in TIMSS 2003 voluntarily. No random selection procedure was applied at the country level; thus, the participating countries were not representative of the countries of the world (TIMSS, 2003). Second, in TIMSS 2003, country background data were not collected (TIMSS, 2003). In addition, data on some variables of interest were not available for all of the countries included in this study because some countries opted not to administer certain survey items for country-specific reasons (TIMSS, 2003). Further, of the 50 participating countries, only four were selected to be included in this study. If a third level were to be incorporated, the small number of units of analysis at the third level (N = 4) would likely cause estimation problems (Raudenbush & Bryk, 2002). Finally, the chief purpose of the proposed study was to build models that would yield country-specific findings in terms of correlates of eighth-grade students' math achievement.

All subsequent HLM analyses were conducted using HLM 6, the specialized software developed for analysis of hierarchically structured data (Raudenbush, Bryk, Cheong, & Congdon, 2004). The chief reason that HLM 6 was selected for this study, as opposed to other specialized software such as SAS Proc Mixed, was that HLM 6 has the
ability to incorporate appropriate complex-design sampling weights at different levels of analysis (Raudenbush, Bryk, Cheong, & Congdon, 2004).

Recoding Predictor Variables for HLM Analyses

In order to improve interpretability of the results, both level-1 and level-2 predictors were recoded such that 0 represented the smallest category of a variable. For example, the predictor variable student self-confidence in learning math originally had three categories: 1 = high, 2 = medium, and 3 = low. After recoding, the three categories of this variable were: 0 = low, 1 = medium, and 2 = high. There was one exception: the predictor average number of math instructional hours per year was grand-mean centered.

Models of the Study

For each country, 23 models were constructed to represent level 1 and level 2 of the TIMSS 2003 data. The first model was the baseline or unconditional model, which had no level-1 or level-2 variables. The regression equation is as follows.

Model 1 (Baseline): Y_ij = β_0j + r_ij, where β_0j = γ_00 + u_0j

In this model, Y_ij is the mathematics score of student i in school j; β_0j is the regression intercept of school j; γ_00 is the overall average mathematics score for all schools; u_0j is the random effect of school j; and r_ij is the random effect of student i in school j.

Each of the student background variables (i.e., gender, self-confidence in learning math, student valuing of math, time on mathematics homework, and tutoring in math) was then entered separately into the unconditional model to make five level-1 models,
Models 2 to 6. Next, Model 7 was built to include all the student background variables. This model aimed to examine the extent to which student background variables were associated with math achievement. Similarly, Model 8 was constructed to examine the extent to which home resources for learning were related to student math performance. As an overall level-1 model, Model 9 was created by adding all the student-related variables to the baseline model. The purpose of this model (Model 9) was to examine the relationship of each of the student-related variables, in the presence of the other variables, with eighth-grade students' mathematics achievement. It is important to note that only those variables that were statistically significant in Model 9 were retained for inclusion in the level-2 models. The regression equations for the nine level-1 models follow:

Model 2: Y_ij = β_0j + β_1j(Gender)_ij + r_ij

Model 3: Y_ij = β_0j + β_1j(Confidence)_ij + r_ij

Model 4: Y_ij = β_0j + β_1j(Valuing)_ij + r_ij

Model 5: Y_ij = β_0j + β_1j(TimeHomework)_ij + r_ij

Model 6: Y_ij = β_0j + β_1j(Tutoring)_ij + r_ij

Model 7: Y_ij = β_0j + β_1j(Gender)_ij + β_2j(Confidence)_ij + β_3j(Valuing)_ij + β_4j(TimeHomework)_ij + β_5j(Tutoring)_ij + r_ij, where β_pj = γ_p0 + u_pj and p = 0, 1, 2, …, 5.

Model 8: Y_ij = β_0j + β_1j(HomeResources)_ij + r_ij
Model 9: Y_ij = β_0j + β_1j(Gender)_ij + β_2j(Confidence)_ij + β_3j(Valuing)_ij + β_4j(TimeHomework)_ij + β_5j(Tutoring)_ij + β_6j(HomeResources)_ij + r_ij, where β_pj = γ_p0 + u_pj and p = 0, 1, 2, …, 6.

In Models 2 to 9, Y_ij, β_0j, γ_00, u_0j, and r_ij are as defined in the baseline model above; β_1j to β_6j refer to the regression slopes of school j; γ_p0 refer to the level-2 fixed effects; and u_pj refer to the level-2 random effects.

Similarly, at level 2, each of the instructional practices, teacher background, and school background variables (i.e., opportunity to learn, homework assignment, activities in math lessons, instructional time, preparation to teach, ready to teach, professional development, class size, teacher perception, and school resources) was then entered separately into Model 9. In addition, in order to estimate the amount of variance in math achievement accounted for by a set of variables (i.e., instructional practices, teacher background, school background, and all level-2 variables), four combined models were constructed. Specifically, Models 10 to 14 represented the instructional practices models, Models 15 to 18 the teacher background models, Models 19 to 22 the school background models, and finally, Model 23 the full model, which included all level-2 variables and the cross-level interaction terms that were statistically significant in earlier models. All level-2 models included random errors. The purpose of the level-2 models was to examine the relationship of instructional practices, teacher- and school-related factors, as well as possible cross-level interactions of these variables, with eighth-grade students' mathematics achievement. The regression equations for these models follow:

Model 10: β_pj = γ_p0 + γ_p1(OpportunityNu)_j + γ_p2(OpportunityAl)_j + γ_p3(OpportunityMe)_j + γ_p4(OpportunityGe)_j + γ_p5(OpportunityDa)_j + u_pj

Model 11: β_pj = γ_p0 + γ_p1(HWAssignment)_j + u_pj

Model 12: β_pj = γ_p0 + γ_p1(ActivitiesCon)_j + γ_p2(ActivitiesPra)_j + u_pj

Model 13: β_pj = γ_p0 + γ_p1(InstructionalTime)_j + u_pj

Model 14: β_pj = γ_p0 + γ_p1(OpportunityNu)_j + γ_p2(OpportunityAl)_j + γ_p3(OpportunityMe)_j + γ_p4(OpportunityGe)_j + γ_p5(OpportunityDa)_j + γ_p6(HWAssignment)_j + γ_p7(ActivitiesCon)_j + γ_p8(ActivitiesPra)_j + γ_p9(InstructionalTime)_j + u_pj

Model 15: β_pj = γ_p0 + γ_p1(Preparation)_j + u_pj

Model 16: β_pj = γ_p0 + γ_p1(ReadyNu)_j + γ_p2(ReadyAl)_j + γ_p3(ReadyMe)_j + γ_p4(ReadyGe)_j + γ_p5(ReadyDa)_j + u_pj

Model 17: β_pj = γ_p0 + γ_p1(Development)_j + u_pj

Model 18: β_pj = γ_p0 + γ_p1(Preparation)_j + γ_p2(ReadyNu)_j + γ_p3(ReadyAl)_j + γ_p4(ReadyMe)_j + γ_p5(ReadyGe)_j + γ_p6(ReadyDa)_j + γ_p7(Development)_j + u_pj

Model 19: β_pj = γ_p0 + γ_p1(ClassSize)_j + u_pj

Model 20: β_pj = γ_p0 + γ_p1(InstructionalLimit)_j + u_pj
Model 21: β_pj = γ_p0 + γ_p1(SchoolResources)_j + u_pj

Model 22: β_pj = γ_p0 + γ_p1(ClassSize)_j + γ_p2(InstructionalLimit)_j + γ_p3(SchoolResources)_j + u_pj

Model 23: β_pj = γ_p0 + γ_p1(OpportunityNu)_j + γ_p2(OpportunityAl)_j + γ_p3(OpportunityMe)_j + γ_p4(OpportunityGe)_j + γ_p5(OpportunityDa)_j + γ_p6(HWAssignment)_j + γ_p7(ActivitiesCon)_j + γ_p8(ActivitiesPra)_j + γ_p9(InstructionalTime)_j + γ_p10(Preparation)_j + γ_p11(ReadyNu)_j + γ_p12(ReadyAl)_j + γ_p13(ReadyMe)_j + γ_p14(ReadyGe)_j + γ_p15(ReadyDa)_j + γ_p16(Development)_j + γ_p17(ClassSize)_j + γ_p18(InstructionalLimit)_j + γ_p19(SchoolResources)_j + u_pj

In Models 10 to 23, Y_ij, β_0j, γ_00, u_0j, and r_ij are as defined in the baseline model above; β_1j to β_6j and u_pj are as defined in the level-1 models; and γ_p0 to γ_p19 refer to the level-2 fixed effects.

Power Analysis

In a two-level HLM analysis, the power of the study depends on the number of level-2 units of analysis. Because the number of schools (level-2 units) in this study was relatively large, ranging from 52 to 271, there was enough power to detect differences across schools (Kreft, 1996). Thus, no formal statistical power analysis was conducted.

Summary

In summary, Research Question 1 was addressed by using statistical results from Model 7 to make inferences about the extent to which student background variables (i.e.,
gender, self-confidence in learning math, valuing of math, time on math homework, and tutoring in math) were associated with TIMSS 2003 eighth-grade math achievement in each country. As for Research Question 2, findings from Model 8 were used to make inferences about the extent to which home resources for learning variables (i.e., availability of a calculator, computer, and desk for student use) were associated with TIMSS 2003 eighth-grade math achievement in each country.

In terms of Research Question 3, inferences about the extent to which instructional variables (i.e., opportunity to learn, activities in math lessons, amount of homework assignment, and instructional time) were associated with TIMSS 2003 eighth-grade math achievement in each country were made by using statistical results from Model 14. Similarly, Model 18 addressed Research Question 4, regarding the relationship of teacher-related variables (i.e., preparation to teach, ready to teach, and professional development) with TIMSS 2003 eighth-grade math achievement. Likewise, to address Research Question 5, concerning the association between school-related variables and TIMSS 2003 eighth-grade math achievement, statistical results from Model 22 were used. With statistical results obtained from the full model, Model 23, inferences were made about the extent to which all the statistically significant level-1 and level-2 variables as a set were related to TIMSS 2003 eighth-grade math achievement. Finally, by visually and descriptively examining patterns of relationships between eighth-grade math achievement and contextual as well as background factors, this study identified important trends of relationships that tended to exist among developed and developing countries, as well as differences between these groups.
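Before turning to the results, the structure of these two-level models can be illustrated numerically. Substituting a level-2 equation of the form β_pj = γ_p0 + γ_p1·W_j + u_pj into the level-1 equation yields the combined (mixed) form of a model such as Model 11. The sketch below uses invented coefficient values, not estimates from this study, purely to show how a single predicted score is assembled.

```python
# Illustrative sketch of the combined form of a two-level model such as
# Model 11: beta_pj = gamma_p0 + gamma_p1 * HWAssignment_j + u_pj,
# substituted into Y_ij = beta_0j + beta_1j * X_ij + r_ij.
# All numeric values below are invented for the example.

def beta(gamma_p0, gamma_p1, hw_assignment_j, u_pj):
    """Level-2 equation for one level-1 coefficient of school j."""
    return gamma_p0 + gamma_p1 * hw_assignment_j + u_pj

def predict(x_ij, hw_assignment_j, u_0j, u_1j, r_ij):
    """Combined model: level-2 equations plugged into the level-1 model."""
    b0 = beta(500.0, 4.0, hw_assignment_j, u_0j)   # intercept of school j
    b1 = beta(10.0, -1.5, hw_assignment_j, u_1j)   # slope of school j
    return b0 + b1 * x_ij + r_ij

# One student: level-1 predictor X_ij = 2, school HWAssignment_j = 3,
# school random effects u_0j = 6 and u_1j = 0.5, student residual r_ij = -4.
y = predict(x_ij=2.0, hw_assignment_j=3.0, u_0j=6.0, u_1j=0.5, r_ij=-4.0)
print(y)  # -> 526.0
```

Fitting such models (i.e., estimating the γ terms and the variances of u_pj and r_ij from data) is what HLM 6 performed in this study; the sketch only makes the algebra of the combined equation concrete.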
CHAPTER FOUR

RESULTS

Results for the United States

Evaluation of Missing Data

As a result of the listwise deletion method, the sample size for the USA was reduced from 8,808 students and 241 schools to 4,414 students and 153 schools. This means only 55.12% of the original sample had complete data on all variables of interest in this study. To evaluate the extent to which the data for the USA were missing completely at random, missingness indicators for the 19 level-2 variables were correlated. Results of this analysis suggested non-randomness of the missing data, with correlation coefficients ranging from .38 to .97 (n = 153, p < .001), indicating modest to strong positive relationships among the missingness indicators of the variables. In addition, when missingness was correlated with values of the variable itself as well as values of other variables, only marginal correlations were observed (r = -.20 to .19, n = 153, p = .005). In summary, the missing data mechanism for the USA was not missing completely at random.

Univariate Analysis

A descriptive examination of the level-1 variables (i.e., overall math achievement, gender, self-confidence in learning math, valuing of math, time on math homework, extra math lessons, and home resources for learning math) was conducted using SAS 9.1.3. Of the complete sample of 4,414 eighth-grade students, 2,148 (48.66%) were female and 2,266 (51.34%) were male. On average, the weighted overall math achievement for USA
students was 518.80 (SD = 72.43), with the lowest score of 274.96 and the highest score of 727.87 (see Table 7).

With regard to the level-1 predictor variables, it appeared that, on average, eighth-grade students in the USA had most resources at home for learning math (M = 2.80, SD = .49), were above a medium level of self-confidence in learning math (M = 1.34, SD = .78) and valuing of math (M = 1.52, SD = .64), spent little time on math homework (M = .74, SD = .57), and only had extra math lessons occasionally (M = .46, SD = .77) (see Table 7).

Table 7. Weighted Descriptive Statistics for Level-1 Variables for USA (N = 4,414)

Variable                           M       SD     Min     Max
Overall math achievement           518.80  72.43  274.96  727.87
Self-confidence in learning math   1.34    0.78   0       2
Valuing of math                    1.52    0.64   0       2
Time on math homework              0.74    0.57   0       2
Extra math lessons                 0.46    0.77   0       3
Home resources for learning math   2.80    0.49   0       3

Note: When weights were used to compute means in SAS, skewness and kurtosis were not produced.

In terms of distributions of the level-1 variables, the unweighted descriptive results in Table 8 suggested that all but two variables, extra math lessons and home resources for learning math, approximated normality, with skewness and kurtosis values within the range of -1.00 to 1.00.

Table 8. Unweighted Descriptive Statistics for Level-1 Variables for USA (N = 4,414)

Variable                           M       SD     Min     Max     Skewness  Kurtosis
Overall math achievement           516.95  73.12  274.96  727.87  0.03      0.31
Self-confidence in learning math   1.35    0.78   0       2       0.68      1.02
Valuing of math                    1.52    0.64   0       2       0.98      0.13
Time on math homework              0.73    0.56   0       2       0.02      0.46
Extra math lessons                 0.46    0.77   0       3       1.71      2.25
Home resources for learning math   2.78    0.50   0       3       2.44      6.24
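The normality screen used in Tables 8 and 9 follows directly from the moment definitions of skewness and excess kurtosis. A small sketch, assuming the |value| <= 1.00 rule the study applies (the toy data below are illustrative, not TIMSS values):

```python
import numpy as np

def normality_screen(x, bound=1.0):
    """Sample skewness and excess kurtosis, with the |value| <= bound rule
    used in this study to judge approximate normality."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()           # standardize with population SD
    skew = float((z ** 3).mean())          # third standardized moment
    kurt = float((z ** 4).mean() - 3.0)    # excess kurtosis (normal curve = 0)
    return skew, kurt, bool(abs(skew) <= bound and abs(kurt) <= bound)

# A symmetric toy sample: skewness 0, but flat-topped (excess kurtosis -1.3),
# so it fails the screen on the kurtosis criterion
s, k, ok = normality_screen([1, 2, 3, 4, 5])
print(round(k, 2), ok)  # -1.3 False
```

SAS's PROC UNIVARIATE applies small-sample corrections to these moments, so its values differ slightly from the raw moments above for small n.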
Similarly, a descriptive analysis was conducted on the 19 predictor variables at the school level. As can be seen from Table 9, on average, the percentages of students whose teachers reported opportunity to learn the math content domains (i.e., algebra, number, geometry, measurement, and data) were relatively high, ranging from 70.10 (SD = 26.89) for geometry to 99.65 (SD = 2.04) for number. Although not every math teacher was prepared to teach math content (M = .75, SD = .42), on average, teachers participated in various types of math-related professional development (M = 3.72, SD = 1.62) and reported a high level of readiness to teach (from M = 1.90, SD = .32 for measurement to M = 1.97, SD = .16 for number).

The data also suggested that in about half of the lessons, students were given activities related to math instructional practice (M = 1.76, SD = .25) and math content (M = 2.09, SD = .26). On average, a medium amount of homework was assigned to the students (M = 1.20, SD = .56). Finally, class size in USA schools tended to be small, less than 33 students (M = .57, SD = .67), and teachers' perception of instructional limitations due to student factors was low (M = .64, SD = .71). On average, the availability of school resources for math instruction was relatively high (M = 1.52, SD = .52). Noticeably, across the 153 schools, the average math instructional hours per year varied greatly, ranging from 24.27 to 180, with a mean of 136.32 (SD = 28.52).

Of the 19 level-2 predictor variables, 10 approximated normal distributions, with skewness and kurtosis values within the normality approximation range of -1.00 to 1.00. The nine variables that appeared to depart from normality included opportunity to learn number, measurement, and data; ready to teach number, algebra, measurement, geometry,
and data; and average math instructional hours per year. Specific skewness and kurtosis values for these variables can be found in Table 9.

Table 9. Unweighted Descriptive Statistics for Level-2 Variables for USA (N = 153)

Variable                                                M       SD     Min    Max     Skewness  Kurtosis
Opportunity to learn number                             99.65   2.04   80.00  100.00  7.44      62.55
Opportunity to learn algebra                            78.86   23.20  16.67  100.00  0.67      0.78
Opportunity to learn measurement                        83.71   19.89  0.00   100.00  1.69      3.58
Opportunity to learn geometry                           70.10   26.89  0.00   100.00  0.71      0.30
Opportunity to learn data                               83.96   21.65  0.00   100.00  1.70      3.05
Amount of homework assignment                           1.20    0.56   0.00   2.00    0.17      0.16
Instructional practice-related activities in lessons    1.76    0.25   1.04   2.33    0.07      0.08
Content-related activities in math lessons              2.09    0.26   1.33   2.74    0.04      0.02
Preparation to teach                                    0.75    0.42   0.00   1.00    1.17      0.59
Ready to teach number                                   1.97    0.16   1.00   2.00    6.00      34.43
Ready to teach algebra                                  1.91    0.28   1.00   2.00    2.94      6.87
Ready to teach measurement                              1.90    0.32   0.00   2.00    3.21      10.67
Ready to teach geometry                                 1.91    0.32   0.00   2.00    4.16      18.82
Ready to teach data                                     1.92    0.27   1.00   2.00    3.07      7.61
Math-related professional development                   3.72    1.62   0.00   5.00    1.08      0.10
Class size for math instruction                         0.57    0.67   0.00   3.00    1.16      1.25
School resources for math instruction                   1.52    0.52   0.00   2.00    0.38      1.16
Teacher perceptions of math instructional limitation
  due to student factors                                0.64    0.71   0.00   2.00    0.73      0.69
Average math instructional hours per year               136.32  28.52  24.27  180.00  1.88      5.39

Bivariate Analysis

An examination of bivariate relationships between the variables was performed at each level. The results of weighted correlations among the six level-1 variables (i.e., gender, self-confidence in learning math, valuing of math, time on math homework, extra math lessons, and home resources for learning math) are presented in Appendix E. It appeared from these results that the level-1 predictor variables were largely uncorrelated with each other. The
magnitudes of the correlation coefficients ranged from .01 between valuing of math and time on math homework to .39 between self-confidence in learning math and valuing of math. It was interesting to note that gender tended to have a negative, albeit weak, relationship with all level-1 variables except for home resources for learning math (r = .05).

At level 2, unweighted bivariate relationships were estimated for the 19 predictor variables. The correlation matrix for these variables can be found in Appendix F. Unlike level 1, the correlation coefficients of the level-2 variables had a wider range, from -.31 between amount of homework assignment and teachers' perception of math instructional limitation due to student factors to .70 between ready to teach algebra and ready to teach measurement. As expected, correlation coefficients among variables measuring the same construct tended to be stronger than those among variables measuring different constructs. For example, the correlations ranged from .19 to .45 for the opportunity to learn variables, and from .47 to .70 for the ready to teach variables. Interestingly, it was observed that the number of math instructional hours per year had almost nonexistent to very weak relationships with all of the level-2 variables (rs ranged from -.09 for ready to teach data to .15 for opportunity to learn measurement). Another interesting observation was that, of the 19 variables, 11 were found to have negative relationships (rs ranged from -.02 to -.31) with teachers' perception of math instructional limitations due to student factors. These variables included all opportunity to learn variables, amount of homework assignment, activities in math lessons, ready to teach geometry, and school resources.
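A level-2 correlation screen of this kind is a single matrix computation. The sketch below uses three simulated stand-in variables rather than the study's 19 school-level predictors; the variable names and the induced correlation of roughly .7 (echoing the ready-to-teach pair reported above) are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical miniature of the level-2 screen over n = 153 USA schools
rng = np.random.default_rng(42)
n = 153
ready_algebra = rng.normal(size=n)
# Same-construct variable, built to correlate about .7 with ready_algebra
ready_measurement = 0.7 * ready_algebra + 0.7 * rng.normal(size=n)
homework_assignment = rng.normal(size=n)  # unrelated, cross-construct

level2 = pd.DataFrame({
    "ready_algebra": ready_algebra,
    "ready_measurement": ready_measurement,
    "homework_assignment": homework_assignment,
})
corr = level2.corr()
# Same-construct pairs should correlate more strongly than cross-construct ones
print(corr.loc["ready_algebra", "ready_measurement"] >
      abs(corr.loc["ready_algebra", "homework_assignment"]))
```

Scanning the off-diagonal of such a matrix gives the reported range of coefficients directly (for the USA, -.31 to .70).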
Evaluation of HLM Assumptions

In order to ensure the tenability of results produced by the multilevel models in this study, an evaluation of HLM assumptions through visual analysis of both level-1 and level-2 random effects of Model 14 was performed. Model 14 was selected because the results of the HLM analysis suggested that it was the most efficient model for predicting math achievement in the USA (see HLM Analysis for USA).

The data from Figure 2 suggested that the level-1 residuals approximated a normal distribution. In terms of variance, the scatter plot of level-1 residuals against predicted math achievement, illustrated in Figure 3, provided evidence of homogeneity of level-1 variance.

Figure 2. Histogram for Level-1 Residuals for USA
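The diagnostic pair in Figures 2-3 (a residual histogram and a residuals-versus-predicted scatter) is straightforward to reproduce once residual files are exported. The sketch below uses simulated residuals with dispersion loosely matching the USA results; the file name and all numbers are illustrative, not the study's actual residuals.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Simulated stand-ins for Model 14's level-1 residual file (n = 4,414)
rng = np.random.default_rng(1)
fitted = rng.normal(519, 60, 4414)   # predicted math achievement
resid = rng.normal(0, 44, 4414)      # level-1 residuals

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.hist(resid, bins=40)                       # normality check (Figure 2)
ax1.set_title("Level-1 residuals")
ax1.set_xlabel("residual")
ax2.scatter(fitted, resid, s=4, alpha=0.3)     # homogeneity check (Figure 3)
ax2.axhline(0, color="red", linewidth=1)
ax2.set_title("Residuals vs. predicted")
ax2.set_xlabel("predicted math achievement")
ax2.set_ylabel("residual")
fig.tight_layout()
fig.savefig("residual_diagnostics.png")
```

An even band around zero in the right-hand panel is what supports the homogeneity-of-variance conclusion drawn from Figure 3.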
Figure 3. Level-1 Residuals by Predicted Math Achievement for USA

For the level-2 random effects, the empirical Bayes residuals for the intercepts and slopes, as well as the empirical Bayes predicted math scores, were used to construct the graphs in Figures 4-9. As can be seen from Figures 4 and 5, the level-2 intercept residuals appeared to have a normal distribution and homogeneous variance.

Figure 4. Histogram for Level-2 Intercept Residuals for USA
Figure 5. Level-2 Intercept Residuals by Predicted Intercept for USA

Similarly, Figure 6 suggests that the level-2 residuals for the slope of valuing of math approximated a normal distribution, and Figure 7 provides evidence of homogeneity of variance.

Figure 6. Histogram for Level-2 Slope (Valuing Math) Residuals for USA
Figure 7. Level-2 Slope (Valuing Math) Residuals by Predicted Math Achievement for USA

Finally, although it appears from Figure 8 that the level-2 residuals for the slope of time on math homework had a slightly skewed distribution, their variances across schools were relatively homogeneous (see Figure 9).

In summary, visual analyses of both level-1 and level-2 random effects suggested that the assumptions of normality and homogeneity of the level-1 and level-2 random effects were satisfied.

Figure 8. Histogram for Level-2 Slope (Time on Homework) Residuals for USA
Figure 9. Level-2 Slope (Time on Homework) Residuals by Predicted Math Achievement for USA

HLM Analysis

Unconditional Model (Model 1)

The HLM analysis started with the unconditional model, in which no level-1 or level-2 predictors were included. The results of the unconditional model are presented in Table 10. For the USA, the fixed effect for the intercept was 517.36 (SE = 4.78, p < .001). The average level of math achievement was significantly different across schools in the U.S. (τ00 = 2,703.82, SE = 52.00, p < .001). Within schools, the amount of unexplained variance was somewhat smaller than that between schools (σ² = 2,591.61, SE = 50.91). The computed intraclass correlation (ICC) of .51 was substantial for the U.S., indicating a relatively strong level of natural clustering of students between schools. In other words, approximately 51% of the total variance in math scores occurred between schools.
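The ICC follows directly from the two variance components of the unconditional model. A quick check against the values reported in Table 10:

```python
def intraclass_correlation(tau00: float, sigma2: float) -> float:
    """Share of total variance lying between schools:
    ICC = tau00 / (tau00 + sigma2)."""
    return tau00 / (tau00 + sigma2)

# USA unconditional model: between-school (tau00) and within-school
# (sigma2) variance estimates from Table 10
icc_usa = intraclass_correlation(2703.82, 2591.61)
print(round(icc_usa, 2))  # 0.51
```

The .51 reproduced here is unusually large by the standards of educational surveys, which is why the study treats school-level modeling as essential for the USA data.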
Table 10. Parameter Estimates for the Unconditional Model for USA

Model  Effect  Parameter  Estimate  SE     p
1      Fixed   INT        517.36    4.78   <.001
       Random  τ00        2703.82   52.00  <.001
               σ²         2591.61   50.91
       ICC     0.51

Note: ICC = intraclass correlation coefficient; INT = intercept.

Research Question 1

To what extent are student background variables (i.e., gender, self-confidence in learning math, valuing of math, time on math homework, and tutoring in math) associated with TIMSS 2003 eighth-grade math scores in each country?

In order to answer this research question, first, each of the student background variables was entered separately into Model 1 to predict math achievement. Then, as a group, the variables that contributed significantly in Models 2-6 were included in Model 7 to predict math achievement. Finally, in order to evaluate model fit in terms of the proportion of variance accounted for, a pseudo R² was computed for the current model against previously constructed models. Results of these models (Models 2-6) are presented in Table 11.

The data from Table 11 suggested that all of the fixed and random effects estimated by Models 2-6 were statistically significant, except for the fixed effect of time on homework in Model 5 (γ = 2.38, SE = 2.45, p = .333). Interestingly, whereas self-confidence in learning math (Model 3) and valuing of math (Model 4) appeared to have positive relationships with math achievement (γ = 28.24 and 11.66, SE = 1.29 and 1.74, both p < .001), gender (Model 2) and extra math lessons (Model 6)
appeared to have negative relationships with math achievement (γ = -6.47 and -19.36, SE = 2.04 and 1.22, p = .002 and < .001, respectively).

An examination of pseudo R² across the five models (Models 2-6) suggested that adding individual predictors separately to the unconditional model (Model 1) resulted in a reduction of between 1% (Model 2) and 18% (Model 3) in the within-school variance. For the between-school variance, however, the amount of reduction was smaller, up to 7% (Model 3). In fact, in some models, the amount of between-school variance slightly increased (for example, by 1% for Model 2 and 2% for Model 6).

Table 11. Parameter Estimates for Models 2-6 (Level-1 Student Background) for USA

Model  Effect     Parameter        Estimate  SE     p
2      Fixed      INT              520.78    5.17   <.001
                  Gender           -6.47     2.04   .002
       Random     τ00              2957.66   54.38  <.001
                  Gender           64.11     8.01   .015
                  σ²               2564.59   50.64
       Pseudo R²  τ00: 0.01        σ²: 0.01
3      Fixed      INT              479.63    4.44   <.001
                  Self-confidence  28.24     1.29   <.001
       Random     τ00              2093.56   45.76  <.001
                  Self-confidence  37.12     6.09   .023
                  σ²               2133.91   46.19
       Pseudo R²  τ00: 0.07        σ²: 0.18
4      Fixed      INT              499.82    4.84   <.001
                  Valuing math     11.66     1.74   <.001
       Random     τ00              2071.39   45.51  <.001
                  Valuing math     71.04     8.43   .028
                  σ²               2513.45   50.13
       Pseudo R²  τ00: 0.03        σ²: 0.03
Table 11 (continued). Parameter Estimates for Models 2-6 (Level-1 Student Background) for USA

Model  Effect     Parameter      Estimate  SE     p
5      Fixed      INT            515.18    5.21   <.001
                  Homework time  2.38      2.45   .333
       Random     τ00            3030.85   55.05  <.001
                  Homework time  330.89    18.19  <.001
                  σ²             2503.19   50.03
       Pseudo R²  τ00: 0.00      σ²: 0.03
6      Fixed      INT            526.47    4.68   <.001
                  Extra lessons  -19.36    1.22   <.001
       Random     τ00            2520.21   50.20  <.001
                  Extra lessons  14.48     3.80   .035
                  σ²             2389.20   48.88
       Pseudo R²  τ00: 0.02      σ²: 0.08

Note: Pseudo R² refers to the difference in the proportion of variance accounted for between the current models (Models 2-6) and the unconditional model (Model 1).

As the next step of model building, all of the student background variables (i.e., gender, self-confidence in learning math, valuing of math, time on math homework, and tutoring in math) were included in the combined model, Model 7, to predict math achievement. Interestingly, in the presence of the other variables in the model, only two out of five predictors had statistically significant fixed effects. With a fixed effect of -15.22 (SE = 1.05, p < .001) for extra math lessons, it could be inferred that for each unit increase in extra math lessons (i.e., from 0 for never to 3 for daily), students were expected to score 15.22 points lower in math while controlling for the other predictors in the model. Similarly, with a fixed effect of 26.39 (SE = 1.24, p < .001) for self-confidence in learning math, it could be interpreted that for each unit increase in the level of self-confidence in learning math (i.e., from 0 for low to 2 for high), students were expected to score 26.39 points higher in math while controlling for the other predictors in the model.
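The pseudo R² used throughout these comparisons is a proportional reduction in a variance component relative to a simpler model. A minimal sketch, checked against the within-school figures reported above (baseline σ² from Table 10, Model 3's σ² from Table 11):

```python
def pseudo_r2(var_simpler: float, var_current: float) -> float:
    """Proportional reduction in a variance component when moving from the
    simpler model to the current one; negative values mean the component grew."""
    return (var_simpler - var_current) / var_simpler

# Within-school variance: Model 1 (2591.61) vs. Model 3 (2133.91)
# reproduces the 18% reduction cited for self-confidence in learning math
print(round(pseudo_r2(2591.61, 2133.91), 2))  # 0.18
```

Note that the between-school (τ00) entries in the tables do not always match this raw formula, since adding level-1 predictors can re-partition variance in either direction; the dissertation's τ00 values should be read from the tables rather than recomputed this way.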
In terms of random effects, all were found statistically significant, except for those of gender (τ = 24.64, SE = 4.96, p > .500) and extra math lessons (τ = 13.43, SE = 3.67, p = .078). With the variance for the intercept of 2,142.83 (SE = 46.29, p < .001), it could be inferred that statistically significant differences existed across school means of math achievement after adjusting for the five student background variables in the model. Similarly, it could be inferred that schools varied significantly in the relationships between math achievement and student self-confidence in learning math (τ = 23.54, SE = 4.85, p = .021), student valuing of math (τ = 95.32, SE = 9.76, p < .001), and time students spent on homework (τ = 273.02, SE = 16.52, p < .001).

Table 12. Parameter Estimates for Model 7 (Level-1 Student Background) for USA

Model  Effect  Parameter        Estimate  SE     p
7      Fixed   INT              492.91    5.24   <.001
               Gender           1.48      1.50   .328
               Extra lessons    -15.22    1.05   <.001
               Self-confidence  26.39     1.24   <.001
               Valuing math     0.16      1.76   .927
               Homework time    3.24      2.18   .140
       Random  τ00              2142.83   46.29  <.001
               Gender           24.64     4.96   >.500
               Extra lessons    13.43     3.67   .078
               Self-confidence  23.54     4.85   .021
               Valuing math     95.32     9.76   <.001
               Homework time    273.02    16.52  <.001
               σ²               1892.30   43.50

An evaluation of model fit was also conducted between Model 7 and the previously constructed models, Models 2-6. As expected, the inclusion of the student background variables in Model 7 yielded a considerable reduction in the amount of variance accounted for in math achievement within schools, from 11% to 26% (see Table 13). Between schools, a reduction in variance was also observed, from 1% to 6%, except for
the comparison with Model 3, where the amount of between-school variance appeared to increase by 3%. In sum, Model 7 was more efficient than the earlier models in predicting math achievement in the U.S.

Table 13. Comparison of Pseudo R² between Model 7 and Previously Constructed Models for USA

Previous model  τ00    σ²
2               0.05   0.26
3               -0.03  0.11
4               0.01   0.25
5               0.04   0.24
6               0.06   0.21

Research Question 2

To what extent are home resources variables (i.e., availability of calculator, computer, and desk for student use) associated with TIMSS 2003 eighth-grade math scores in each country?

When the level-1 predictor home resources was added to the unconditional model to predict math achievement, a reduction of 1% was observed in the within-school variance and a reduction of 5% in the between-school variance (see Table 14). In this model, home resources had a statistically significant relationship with math achievement (γ = 9.74, SE = 1.86, p < .001). This means that for every unit increase in home resources, math achievement was expected to increase by 9.74 points, without controlling for other variables. In addition, with the random effect for home resources being statistically significant (τ = 35.59, SE = 5.97, p = .015), it could be inferred that the relationship between home resources and math achievement differed significantly across schools in the U.S.
Table 14. Parameter Estimates for the Level-1 Home Resources Model (Model 8) for USA

Model  Effect     Parameter       Estimate  SE     p
8      Fixed      INT             489.62    6.06   <.001
                  Home resources  9.74      1.86   <.001
       Random     τ00             1356.76   36.83  <.001
                  Home resources  35.59     5.97   .015
                  σ²              2576.33   50.76
       Pseudo R²  τ00: 0.05       σ²: 0.01

Note: Pseudo R² refers to the difference in the proportion of variance between Model 8 and Model 1.

Given the findings obtained from Models 7 and 8, five of the six student-related variables were entered into the unconditional model to make Model 9. Gender was excluded from Model 9 because neither its fixed nor its random effect was statistically significant in Model 7. Also, in Model 9, the slope variance of extra math lessons was set to 0 because it was not statistically significant in Model 7.

As can be seen from Table 15, with the presence of other predictors in Model 9, only extra math lessons, self-confidence in learning math, and home resources had statistically significant fixed effects on math achievement. Specifically, whereas self-confidence in learning math (γ = 26.31, SE = 1.25, p < .001) and home resources (γ = 3.99, SE = 1.66, p = .017) were positively related to math achievement, an inverse relationship was observed between math achievement and extra math lessons (γ = -15.63, SE = 1.07, p < .001). This could be interpreted to mean that the more learning resources students had at home and the more self-confidence they expressed in learning math, the higher their math scores were. However, it appears that the more frequently students took extra math lessons, the poorer the math scores they achieved.
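These fixed-effect interpretations can be illustrated with a small prediction function built from the Model 9 estimates in Table 15. This is a back-of-the-envelope sketch only: it ignores the school random effects and the non-significant coefficients' uncertainty, and the function and dictionary names are hypothetical.

```python
# Model 9 fixed effects (Table 15); extra lessons carries the inverse sign
# described in the text
COEF = {
    "intercept": 481.69,
    "extra_lessons": -15.63,   # 0 = never ... 3 = daily
    "self_confidence": 26.31,  # 0 = low ... 2 = high
    "valuing_math": 0.58,
    "homework_time": 3.17,
    "home_resources": 3.99,    # 0 ... 3 resources available
}

def predicted_math(**predictors: float) -> float:
    """Predicted math score from the fixed effects, random effects ignored."""
    return COEF["intercept"] + sum(COEF[k] * v for k, v in predictors.items())

# Gap between a highly confident student with all home resources and a
# low-confidence student with none, other predictors held equal:
gap = predicted_math(self_confidence=2, home_resources=3) - \
      predicted_math(self_confidence=0, home_resources=0)
print(round(gap, 2))  # 64.59
```

The roughly 65-point gap (26.31 x 2 + 3.99 x 3) is close to one within-school standard deviation, which conveys the practical size of these two effects.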
In terms of random effects, all were found statistically significant, except for the slope variance of home resources (τ = 28.29, SE = 5.32, p = .223). This suggests that the relationships between the remaining level-1 predictors (i.e., self-confidence in learning math, valuing of math, and time on homework) and math achievement varied significantly across schools.

As compared to Model 7, Model 9 appeared more efficient in that it accounted for more variance between schools (2%), even though no improvement in the variance within schools was noted. Compared to Model 8, Model 9 accounted for a considerably higher amount (26%) of the variance within schools and a modest amount (2%) of the variance between schools. As a result of these comparisons, Model 9 was selected as the foundational level-1 model for further examination of the relationships between level-2 predictors and math achievement.

Table 15. Parameter Estimates for the Combined Level-1 Predictors Model (Model 9) for USA

Model  Effect     Parameter        Estimate  SE     p
9      Fixed      INT              481.69    6.72   <.001
                  Extra lessons    -15.63    1.07   <.001
                  Self-confidence  26.31     1.25   <.001
                  Valuing math     0.58      1.72   .736
                  Homework time    3.17      2.14   .142
                  Home resources   3.99      1.66   .017
       Random     τ00              1874.76   43.30  <.001
                  Self-confidence  29.61     5.44   .045
                  Valuing math     92.34     9.61   .014
                  Homework time    252.02    15.87  <.001
                  Home resources   28.29     5.32   .223
                  σ²               1898.74   43.57
       Pseudo R²  vs. Model 7      τ00: 0.02  σ²: 0.00
                  vs. Model 8      τ00: 0.02  σ²: 0.26

Note: Pseudo R² refers to the difference in the proportion of variance between Models 7 and 8 and Model 9.
Research Question 3

To what extent are instructional variables (i.e., opportunity to learn, activities in math lessons, amount of homework assignment, and instructional time) associated with TIMSS 2003 eighth-grade math scores in each country?

In addressing this research question, a model-building strategy similar to that used for Research Question 1 was applied. That is, each of the level-2 instructional practice variables was first added to the foundational level-1 model (Model 9) to make Models 10-13. Then, as a group, the variables with significant fixed effects in Models 10-13 were included in the combined instructional practice model, Model 14. It is important to note that in these models, the slope variance of home resources was set to 0 because it was not statistically significant in Model 9. Also, all possible cross-level interactions between level-1 and level-2 predictors were allowed in these models. The results of Models 10-14 are presented in Tables 16-18.

As can be seen in Table 16, Model 10, with opportunity to learn math topics as level-2 predictors of math achievement, yielded three statistically significant cross-level interactions: (1) opportunity to learn data by self-confidence in learning math (γ = .16, SE = .07, p = .018), (2) opportunity to learn geometry by time spent on homework (γ = .22, SE = .08, p = .006), and (3) opportunity to learn measurement by time spent on homework (γ = .23, SE = .11, p = .042).

Table 16 also showed that when amount of homework assignment, content-related and instructional practice-related activities in math lessons, and average number of math instructional hours per year were added in Models 11-13, no statistically significant cross-level interaction effects were detected. Of the level-2 main effects, only
homework assignment was found statistically significant, in Model 11 (γ = 37.04, SE = 7.41, p < .001). This means that with every unit increase in the amount of homework assignment, students' math scores were expected to increase by 37.04 points after adjusting for the level-1 variables but not for other level-2 variables in the model.

Table 16. Parameter Estimates for Level-2 Instructional Practices Models for USA

Model  Effect  Parameter                                Estimate  SE      p
10     Fixed   INT                                      390.22    143.87  .008
               Opportunity_algebra                      0.34      0.19    .076
               Opportunity_data                         0.33      0.26    .219
               Opportunity_geometry                     0.19      0.20    .350
               Opportunity_measurement                  0.24      0.24    .322
               Opportunity_number                       0.60      1.46    .678
               Extra lessons                            -15.53    1.05    <.001
               Self-confidence                          26.69     25.23   .292
               Opportunity_algebra*Self-confidence      0.11      0.06    .061
               Opportunity_data*Self-confidence         0.16      0.07    .018
               Opportunity_geometry*Self-confidence     0.04      0.07    .529
               Opportunity_measurement*Self-confidence  0.08      0.08    .345
               Opportunity_number*Self-confidence       0.07      0.28    .794
               Valuing math                             14.18     36.67   .699
               Opportunity_algebra*Valuing math         0.00      0.09    .976
               Opportunity_data*Valuing math            0.05      0.10    .573
               Opportunity_geometry*Valuing math        0.06      0.06    .336
               Opportunity_measurement*Valuing math     0.15      0.09    .071
               Opportunity_number*Valuing math          0.01      0.38    .986
               Homework time                            20.61     41.85   .623
               Opportunity_algebra*Homework time        0.04      0.09    .684
               Opportunity_data*Homework time           0.18      0.13    .184
               Opportunity_geometry*Homework time       0.22      0.08    .006
               Opportunity_measurement*Homework time    0.23      0.11    .042
               Opportunity_number*Homework time         0.40      0.45    .383
               Home resources                           3.44      1.58    .029
       Random  τ00                                      1819.39   42.65   <.001
               Self-confidence                          19.27     4.39    .073
               Valuing math                             92.47     9.62    <.001
               Homework time                            213.47    14.61   <.001
               σ²                                       1909.71   43.70
Table 16 (continued). Parameter Estimates for Level-2 Instructional Practices Models for USA

Model  Effect  Parameter                               Estimate  SE     p
11     Fixed   INT                                     437.57    9.61   <.001
               Homework assignment                     37.04     7.41   <.001
               Extra lessons                           -15.45    1.06   <.001
               Self-confidence                         28.47     2.65   <.001
               Homework assignment*Self-confidence     1.77      2.33   .448
               Valuing math                            1.48      3.28   .652
               Homework assignment*Valuing math        1.65      2.69   .540
               Homework time                           0.42      5.31   .938
               Homework assignment*Homework time       2.75      4.08   .501
               Home resources                          3.68      1.57   .019
       Random  τ00                                     1503.34   38.77  <.001
               Self-confidence                         29.56     5.44   .042
               Valuing math                            97.11     9.85   <.001
               Homework time                           259.64    16.11  <.001
               σ²                                      1904.70   43.64
12     Fixed   INT                                     446.20    44.68  <.001
               Content_activities                      22.31     27.04  .411
               Instruction_activities                  5.57      23.02  .809
               Extra lessons                           -15.68    1.06   <.001
               Self-confidence                         15.95     10.77  .140
               Content_activities*Self-confidence      1.57      5.11   .759
               Instruction_activities*Self-confidence  7.71      5.57   .168
               Valuing math                            10.90     16.26  .504
               Content_activities*Valuing math         1.37      7.52   .856
               Instruction_activities*Valuing math     8.07      6.65   .227
               Homework time                           1.41      17.31  .936
               Content_activities*Homework time        1.05      8.84   .906
               Instruction_activities*Homework time    0.28      8.62   .975
               Home resources                          3.67      1.58   .020
       Random  τ00                                     1935.48   43.99  <.001
               Self-confidence                         29.50     5.43   .043
               Valuing math                            91.41     9.56   <.001
               Homework time                           257.30    16.04  <.001
               σ²                                      1906.97   43.67
13     Fixed   INT                                     483.21    6.62   <.001
               Instructional hours                     0.00      0.15   .995
               Extra lessons                           -15.69    1.06   <.001
Table 16 (continued). Parameter Estimates for Level-2 Instructional Practices Models for USA

Model  Effect  Parameter                            Estimate  SE     p
13     Fixed   Self-confidence                      26.20     1.27   <.001
               Instructional hours*Self-confidence  0.04      0.04   .342
               Valuing math                         0.08      1.65   .964
               Instructional hours*Valuing math     0.10      0.06   .068
               Homework time                        3.08      2.17   .158
               Instructional hours*Homework time    0.04      0.04   .398
               Home resources                       3.52      1.59   .027
       Random  τ00                                  1971.50   44.40  <.001
               Self-confidence                      29.07     5.39   .040
               Valuing math                         85.87     9.27   .001
               Homework time                        254.53    15.95  <.001
               σ²                                   1906.11   43.66

In terms of model fit, in comparison with the foundational level-1 model (Model 9), Model 10 appeared to be the most efficient model because the amount of explained variance between schools in this model increased by 19% (see Table 17). As for the within-school variance, no meaningful difference was observed between Models 10-13 and Model 9 (pseudo R² = 0 in Models 11-13 and .01 in Model 10).

Table 17. Comparison of Pseudo R² between the Level-2 Instructional Practice Models and the Foundational Level-1 Model for USA

Comparison  τ00   σ²
10 vs. 9    0.19  0.01
11 vs. 9    0.09  0.00
12 vs. 9    0.07  0.00
13 vs. 9    0.00  0.00

Similar to Model 10, when using all the level-2 instructional practice variables to predict math achievement, Model 14 produced three statistically significant cross-level interaction effects (see Table 18). First, opportunity to learn data interacted with self-confidence in learning math (γ = .17, SE = .06, p = .012). Second, opportunity to learn measurement interacted with student valuing of math (γ = .16, SE = .08, p = .048). And,
third, opportunity to learn geometry interacted with time spent on homework (γ = .23, SE = .08, p = .005). Interestingly, in this model, all the random effects were statistically significant except for student self-confidence in learning math (τ = 20.62, SE = 4.54, p = .074). This suggested that, in the U.S., the positive relationship between student self-confidence in learning math and math achievement was similar across schools.

Table 18. Parameter Estimates for the Combined Level-2 Instructional Practices Model (Model 14) for USA

Model  Effect  Parameter                                Estimate  SE      p
14     Fixed   INT                                      363.49    146.56  .015
               Homework assignment                      37.80     7.59    <.001
               Opportunity_algebra                      0.37      0.19    .051
               Opportunity_data                         0.45      0.22    .048
               Opportunity_geometry                     0.31      0.17    .065
               Opportunity_measurement                  0.14      0.20    .469
               Opportunity_number                       0.41      1.46    .782
               Content_activities                       5.62      22.52   .803
               Instruction_activities                   2.32      17.86   .897
               Instructional hours                      0.03      0.14    .830
               Extra lessons                            -15.51    1.05    <.001
               Self-confidence                          13.41     23.58   .570
               Homework assignment*Self-confidence      0.60      2.31    .796
               Opportunity_algebra*Self-confidence      0.12      0.06    .057
               Opportunity_data*Self-confidence         0.17      0.06    .012
               Opportunity_geometry*Self-confidence     0.05      0.07    .515
               Opportunity_measurement*Self-confidence  0.07      0.08    .384
               Opportunity_number*Self-confidence       0.11      0.26    .676
               Content_activities*Self-confidence       2.18      5.23    .677
               Instruction_activities*Self-confidence   8.12      5.25    .124
               Instructional hours*Self-confidence      0.04      0.03    .134
               Valuing math                             7.79      39.72   .845
               Homework assignment*Valuing math         2.65      2.83    .350
               Opportunity_algebra*Valuing math         0.01      0.09    .955
               Opportunity_data*Valuing math            0.08      0.10    .401
               Opportunity_geometry*Valuing math        0.07      0.06    .253
               Opportunity_measurement*Valuing math     0.16      0.08    .048
               Opportunity_number*Valuing math          0.02      0.40    .958
               Content_activities*Valuing math          0.13      8.21    .987
               Instruction_activities*Valuing math      9.84      6.35    .123
               Instructional hours*Valuing math         0.08      0.05    .149
               Homework time                            33.79     43.98   .444
Table 18 (continued). Parameter Estimates for the Combined Level-2 Instructional Practices Model (Model 14) for USA

Model  Effect  Parameter                              Estimate  SE     p
14     Fixed   Homework assignment*Homework time      4.73      3.96   .235
               Opportunity_algebra*Homework time      0.03      0.10   .763
               Opportunity_data*Homework time         0.19      0.14   .166
               Opportunity_geometry*Homework time     0.23      0.08   .005
               Opportunity_measurement*Homework time  0.25      0.13   .050
               Opportunity_number*Homework time       0.36      0.47   .447
               Content_activities*Homework time       3.64      8.49   .669
               Instruction_activities*Homework time   3.14      8.20   .702
               Instructional hours*Homework time      0.01      0.04   .779
               Home resources                         3.52      1.55   .024
       Random  τ00                                    1408.20   37.53  <.001
               Self-confidence                        20.62     4.54   .074
               Valuing math                           90.99     9.54   <.001
               Homework time                          224.41    14.98  <.001
               σ²                                     1907.62   43.68

As evident in Table 19, compared to the previously constructed models (Models 9-13), Model 14 accounted for considerably more of the variance between schools. For example, an increase of 25% in the explained between-school variance was observed when using Model 14 instead of Model 9 or Model 13. At minimum, changing from Model 10 to Model 14 resulted in 7% more of the between-school variance being accounted for. However, for the variance within schools, no change was noted across these models. Thus, in consideration of the amount of explained variance between schools, Model 14 surpassed the previously constructed models in predicting math achievement.

Table 19. Comparison of Pseudo R² between Model 14 and Previously Constructed Models (Models 9-13) for USA

Comparison  τ00   σ²
14 vs. 9    0.25  0.00
14 vs. 10   0.07  0.00
14 vs. 11   0.17  0.00
14 vs. 12   0.19  0.00
14 vs. 13   0.25  0.00

The modeled means of predicted math achievement for the significant interactions resulting from Model 14 are displayed in Figures 10-12. It is worth noting that, because the
illustrations focused on the nature of the cross-level interactions, the vertical axis on the interaction plots was not scaled to the actual values of the predicted math scores.

The data in Figure 10 suggested that the relationship between students' reported valuing of math and math achievement was different across levels of opportunity to learn measurement as a math topic. Specifically, when opportunity to learn measurement was low, there was little difference in math achievement among students with low, medium, and high levels of valuing math. However, as opportunity to learn measurement increased, the size of the difference in math achievement among these students grew. As expected, students with a higher level of valuing of math tended to achieve higher math scores than those with medium and low levels of valuing math.

Figure 10. Interaction between Valuing of Math and Opportunity to Learn Measurement for USA (predicted math achievement plotted against opportunity to learn measurement for low, medium, and high levels of valuing math)

The data in Figure 11 depict the interaction between time student spent on homework and opportunity to learn geometry. Interestingly, it was found that, for those
students who reported spending a high amount of time on homework, their predicted math scores tended to be lower when they had more opportunity to learn geometry and higher when they had no or little opportunity to learn geometry. However, as students spent lesser amounts of time on homework, an inverse pattern of relationship between opportunity to learn geometry and math achievement was observed. That is, for these groups of students, no or little opportunity to learn geometry was associated with lower math scores and high opportunity to learn geometry was associated with higher math scores. It appears that increased opportunity to learn geometry worked best for the group of students who reported spending a low amount of time on homework.

Figure 11. Interaction between Time Student Spent on Homework and Opportunity to Learn Geometry for USA (predicted math achievement plotted against opportunity to learn geometry for low, medium, and high time on homework)

The nature of the interaction between student self-confidence in learning math and opportunity to learn data is displayed in Figure 12. Surprisingly, it was noted that
students tended to perform similarly low in math when there was a high opportunity to learn data. As opportunity to learn data decreased, students' math scores increased significantly, with students who reported having a high level of self-confidence in learning math tending to perform better than their peers who reported a lower level of self-confidence in learning math.

Figure 12. Interaction between Self-confidence in Learning Math and Opportunity to Learn Data for USA (predicted math achievement plotted against opportunity to learn data for low, medium, and high self-confidence)

Research Question 4

To what extent are teacher-related variables (i.e., preparation to teach, ready to teach, and professional development) associated with TIMSS 2003 eighth-grade math scores in each country?

Similarly, incremental model building strategies were applied to examine the effects of teacher-related variables (i.e., preparation to teach, ready to teach, and
professional development) on math achievement. Results of these models (Models 15–18) are presented in Tables 20–23.

Interestingly, as shown in Table 20, preparation to teach (Model 15), ready to teach math topics (Model 16), and professional development (Model 17) as single level-2 predictors in the model did not appear to have statistically significant relationships with math achievement. In Model 16, however, there were three statistically significant cross-level interaction effects: (1) ready to teach algebra by student self-confidence in learning math (γ = 8.06, SE = 3.60, p = .027); (2) ready to teach number by time student spent on homework (γ = 33.81, SE = 14.47, p = .021); and (3) ready to teach data by time student spent on homework (γ = 15.46, SE = 6.61, p = .021).

In these models, all of the random effects were statistically significant, suggesting that a significant amount of variance in math achievement remained unexplained both within and between schools.

Table 20. Parameter Estimates for Teacher Background Models for USA

Model  Effect  Parameter  Estimate  SE  p
15  Fixed  INT  484.02  11.48  <.001
    Preparation  1.81  11.74  .878
    Extra lessons  15.59  1.05  <.001
    Self-confidence  29.27  2.07  <.001
    Preparation*Self-confidence  4.08  2.63  .123
    Valuing math  4.77  4.12  .250
    Preparation*Valuing math  6.02  4.59  .192
    Homework time  6.60  3.59  .068
    Preparation*Homework time  4.71  4.44  .290
    Home resources  3.58  1.57  .023
Random  τ00  1957.94  44.25  <.001
    Self-confidence  29.00  5.39  .046
    Valuing math  85.47  9.24  .001
    Homework time  248.97  15.78  <.001
    σ²  1906.63  43.66
16  Fixed  INT  462.94  29.00  <.001
    Ready_number  32.06  34.82  .359
Table 20 (continued).

    Ready_algebra  2.01  17.41  .909
    Ready_measurement  1.03  17.30  .953
    Ready_geometry  30.78  24.48  .211
    Ready_data  13.59  18.64  .467
    Extra lessons  15.64  1.05  <.001
    Self-confidence  41.48  11.43  .001
    Ready_number*Self-confidence  10.94  7.08  .124
    Ready_algebra*Self-confidence  8.06  3.60  .027
    Ready_measurement*Self-confidence  2.05  6.36  .748
    Ready_geometry*Self-confidence  1.62  2.70  .549
    Ready_data*Self-confidence  4.34  4.05  .286
    Valuing math  20.03  19.99  .318
    Ready_number*Valuing math  10.48  11.98  .383
    Ready_algebra*Valuing math  3.15  7.18  .661
    Ready_measurement*Valuing math  4.95  4.55  .279
    Ready_geometry*Valuing math  4.30  3.87  .268
    Ready_data*Valuing math  2.04  5.94  .732
    Homework time  10.90  14.55  .455
    Ready_number*Homework time  33.81  14.47  .021
    Ready_algebra*Homework time  8.43  6.63  .206
    Ready_measurement*Homework time  3.63  9.61  .706
    Ready_geometry*Homework time  0.05  6.83  .994
    Ready_data*Homework time  15.46  6.61  .021
    Home resources  3.48  1.58  .027
Random  τ00  1935.29  43.99  <.001
    Self-confidence  32.35  5.69  .035
    Valuing math  100.41  10.02  <.001
    Homework time  247.90  15.74  <.001
    σ²  1904.28  43.64
17  Fixed  INT  496.57  14.45  <.001
    Professional development  3.59  3.00  .233
    Extra lessons  15.61  1.06  <.001
    Self-confidence  26.61  3.58  <.001
    Professional development*Self-confidence  0.08  0.85  .923
    Valuing math  5.22  5.33  .330
    Professional development*Valuing math  1.25  1.21  .301
    Homework time  7.18  5.32  .179
    Professional development*Homework time  1.05  1.33  .431
    Home resources  3.52  1.58  .026
Random  τ00  1927.98  43.91  <.001
    Self-confidence  30.29  5.50  .037
    Valuing math  92.78  9.63  <.001
    Homework time  253.49  15.92  <.001
    σ²  1905.96  43.66
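Each model in this chapter is a two-level HLM: a level-1 (student) equation with a random intercept and random slopes, and level-2 (teacher or school) equations predicting those intercepts and slopes, which is where cross-level interaction terms such as Ready_algebra*Self-confidence arise. As a hedged illustration only, on simulated data with made-up parameter values (not the TIMSS estimates), the data-generating process can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools, n_per_school = 50, 30

# Hypothetical level-2 predictor (e.g., teacher ready to teach: 0/1)
ready = rng.integers(0, 2, size=n_schools).astype(float)

# Made-up parameter values (NOT the dissertation's estimates)
g00, g01 = 480.0, 5.0   # level-2 intercept equation: b0j = g00 + g01*ready_j + u0j
g10, g11 = 25.0, 8.0    # level-2 slope equation:     b1j = g10 + g11*ready_j + u1j
sd_u0, sd_u1, sd_e = 40.0, 5.0, 45.0  # random-effect and residual SDs

scores = []
for j in range(n_schools):
    u0, u1 = rng.normal(0, sd_u0), rng.normal(0, sd_u1)
    b0 = g00 + g01 * ready[j] + u0         # school-specific intercept
    b1 = g10 + g11 * ready[j] + u1         # school-specific self-confidence slope
    conf = rng.normal(0, 1, n_per_school)  # level-1 predictor (centered)
    # Level-1 equation: Y_ij = b0j + b1j * conf_ij + e_ij
    scores.append(b0 + b1 * conf + rng.normal(0, sd_e, n_per_school))

y = np.concatenate(scores)
print(y.shape)  # (1500,)
```

The g11 term is the cross-level interaction: it lets the self-confidence slope differ between schools whose teachers report being ready to teach and those whose teachers do not.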
When comparing the proportion of variance accounted for by Models 15–17 with that of the foundational level-1 model (Model 9), it appears that Model 16 was the most efficient one (see Table 21). As an example, whereas the inclusion of ready to teach math topics in Model 16 resulted in a reduction of 4% in the between-school variance to be explained, the addition of math-related professional development resulted in an increase of 3% of the between-school variance to be explained. No improvement in the within-school variance was noted by use of these models.

Table 21. Comparison of R² between Level-2 Teacher Background Models and the Foundational Level-1 Model for USA

Compared Model  τ00  σ²
15 vs. 9  0.00  0.00
16 vs. 9  0.04  0.00
17 vs. 9  0.03  0.00

When including all the teacher-related variables (i.e., preparation to teach, ready to teach math topics, and math-related professional development) in Model 18 to predict math achievement, one statistically significant cross-level interaction effect was produced (see Table 22). Specifically, ready to teach number was found to interact with time student spent on homework (γ = 33.73, SE = 15.54, p = .031). Also, in this model, all the random effects were statistically significant, meaning that a considerable amount of variance remained to be explained within and between schools.

Table 22. Parameter Estimates for the Combined Teacher Background Model for USA

Model  Effect  Parameter  Estimate  SE  p
18  Fixed  INT  472.15  47.76  <.001
    Preparation  7.23  10.20  .479
    Professional development  3.45  2.73  .209
    Ready_number  33.50  35.68  .350
    Ready_algebra  3.70  22.42  .870
    Ready_measurement  0.40  20.33  .984
    Ready_geometry  32.16  20.50  .119
    Ready_data  14.32  22.43  .524
Table 22 (continued).

    Extra lessons  15.62  0.94  <.001
    Self-confidence  39.58  12.57  .002
    Preparation*Self-confidence  4.10  2.72  .133
    Professional development*Self-confidence  0.02  0.74  .980
    Ready_number*Self-confidence  10.42  9.01  .250
    Ready_algebra*Self-confidence  8.99  5.78  .122
    Ready_measurement*Self-confidence  3.20  5.19  .538
    Ready_geometry*Self-confidence  2.59  5.36  .630
    Ready_data*Self-confidence  3.13  5.68  .582
    Valuing math  21.20  17.07  .217
    Preparation*Valuing math  5.88  3.61  .105
    Professional development*Valuing math  1.05  0.96  .276
    Ready_number*Valuing math  10.78  12.32  .383
    Ready_algebra*Valuing math  5.95  8.13  .466
    Ready_measurement*Valuing math  6.55  6.95  .348
    Ready_geometry*Valuing math  5.59  7.34  .448
    Ready_data*Valuing math  0.50  7.27  .945
    Homework time  8.88  20.25  .661
    Preparation*Homework time  3.10  4.72  .513
    Professional development*Homework time  0.96  1.24  .442
    Ready_number*Homework time  33.73  15.54  .031
    Ready_algebra*Homework time  6.84  9.89  .490
    Ready_measurement*Homework time  4.19  9.21  .650
    Ready_geometry*Homework time  0.84  8.89  .926
    Ready_data*Homework time  15.18  9.53  .113
    Home resources  3.42  1.47  .020
Random  τ00  1946.84  44.12  <.001
    Self-confidence  32.85  5.73  .034
    Valuing math  95.69  9.78  <.001
    Homework time  250.65  15.83  <.001
    σ²  1903.98  43.63

As evident in Table 23, Model 18 appeared to be more efficient than Models 9, 15, and 17 in terms of the amount of explained variance accounted for between schools. Specifically, an increase of 2% to 5% in the between-school variance was likely to result
when using Model 18 as opposed to Models 9, 15, or 17. However, when compared to Model 16, a decrease of 2% in the between-school variance was noted in Model 18.

Table 23. Comparison of R² between Model 18 and Previously Constructed Models 9 and 15–17

Compared Model  τ00  σ²
18 vs. 9  0.02  0.00
18 vs. 15  0.02  0.00
18 vs. 16  0.02  0.00
18 vs. 17  0.05  0.00

As shown in Figure 13, the interaction between teacher ready to teach number and time student spent on homework suggested that students' math achievement was inversely related to time student spent on homework. That is, the less time students spent on homework, the better they seemed to perform in math. Interestingly, this pattern of relationship was observed for both groups of students (i.e., those with ready-to-teach teachers and those with very-ready-to-teach teachers). Unexpectedly, in comparing the two groups of students, the one with teachers who reported being ready to teach the number topic consistently achieved higher math scores than the other group of students whose teachers reported being very ready to teach the subject. The size of differences in math achievement between the two groups, however, was small when students spent a small amount of time on homework and became substantially large when they spent a high amount of time on homework.
Figure 13. Interaction between Time Student Spent on Homework and Teacher Reported Readiness to Teach Number for USA (predicted math achievement at low, medium, and high time on homework for the ready-to-teach and very-ready-to-teach groups)

Research Question 5

To what extent are school-related variables (i.e., class size, school resources for math instruction, and teacher perception of math instructional limitations due to student factors) associated with TIMSS 2003 eighth-grade math scores in each country?

Table 24 provides a summary of the results for Models 19–21, where school-related variables (i.e., class size, school resources for math instruction, and teacher perception of math instructional limitations due to student factors) were separately included in the models to predict math achievement. It was found that, of the three variables, teacher perception of math instructional limitations due to student factors in Model 20 was the only one that significantly contributed to the prediction of math achievement (γ = 22.77, SE = 6.74, p = .001). This means the more limitations due to student factors that the teacher perceived to have with teaching math, the poorer the
students tended to achieve in math. Specifically, for every unit increase in teacher perception of instructional limitations due to student factors, students' math scores were expected to be 22.77 points lower, after controlling for all the level-1 but not the other level-2 variables in the model.

Also, it was observed that Model 19 produced two statistically significant cross-level interaction effects, one between class size for math instruction and student self-confidence in learning math (γ = 4.97, SE = 1.93, p = .011) and the other between class size for math instruction and student valuing of math (γ = 5.52, SE = 2.64, p = .038). In terms of random effects, all were found statistically significant. One exception was the slope variance of self-confidence in learning math in Model 19, which was not statistically significant (τ = 10.44, SE = 3.23, p = .116), meaning that the relationship between self-confidence in learning math and math achievement tended to be similar across schools in the U.S.

Table 24. Parameter Estimates for School Background Models for USA

Model  Effect  Parameter  Estimate  SE  p
19  Fixed  INT  483.56  8.59  <.001
    Class size  2.08  9.21  .822
    Extra lessons  15.67  1.05  <.001
    Self-confidence  28.96  1.50  <.001
    Class size*Self-confidence  4.97  1.93  .011
    Valuing math  3.37  2.64  .204
    Class size*Valuing math  5.52  2.64  .038
    Homework time  2.53  2.83  .375
    Class size*Homework time  1.28  3.14  .683
    Home resources  3.66  1.58  .020
Random  τ00  1959.93  44.27  <.001
    Self-confidence  10.44  3.23  .116
    Valuing math  87.62  9.36  <.001
    Homework time  253.26  15.91  <.001
    σ²  1911.06  43.72
20  Fixed  INT  496.39  8.28  <.001
    Instructional limitation  22.77  6.74  .001
Table 24 (continued).

    Extra lessons  15.53  1.06  <.001
    Self-confidence  25.61  1.92  <.001
    Instructional limitation*Self-confidence  1.21  1.64  .463
    Valuing math  0.24  2.52  .925
    Instructional limitation*Valuing math  1.08  2.23  .627
    Homework time  4.70  2.89  .106
    Instructional limitation*Homework time  2.56  2.76  .356
    Home resources  3.55  1.58  .024
Random  τ00  1683.59  41.03  <.001
    Self-confidence  29.28  5.41  .038
    Valuing math  95.34  9.76  <.001
    Homework time  254.32  15.95  <.001
    σ²  1906.00  43.66
21  Fixed  INT  493.60  13.88  <.001
    School resources  7.48  8.60  .386
    Extra lessons  15.59  1.06  <.001
    Self-confidence  25.46  3.70  <.001
    School resources*Self-confidence  0.60  2.48  .809
    Valuing math  4.28  5.51  .438
    School resources*Valuing math  2.64  3.21  .413
    Homework time  9.96  5.88  .092
    School resources*Homework time  4.57  3.82  .234
    Home resources  3.59  1.58  .024
Random  τ00  1930.61  43.94  <.001
    Self-confidence  29.73  5.45  .038
    Valuing math  91.01  9.54  <.001
    Homework time  249.80  15.81  <.001
    σ²  1906.71  43.67

In comparing Models 19–21 with Model 9 in terms of the proportion of variance accounted for, none of these models worked better than Model 9 (see Table 25). Whereas Model 9 accounted for 1% more of the within-school variance than Model 19, it accounted for 3% more of the between-school variance than Model 20, and 2% more of the between-school variance than Model 21.
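The R² values reported in Tables 19, 21, 23, 25, 27, and 29 are proportional-reduction-in-variance (pseudo-R²) statistics: the change in an estimated variance component relative to a comparison model. A minimal sketch, using hypothetical variance estimates rather than the fitted values reported here:

```python
def pseudo_r2(tau_comparison: float, tau_model: float) -> float:
    """Proportional reduction in a variance component:
    (tau_comparison - tau_model) / tau_comparison.
    A negative value means the richer model explains less of that variance."""
    return (tau_comparison - tau_model) / tau_comparison

# Hypothetical between-school variances (tau00) for two models
tau_model9, tau_model20 = 2000.0, 1940.0
print(round(pseudo_r2(tau_model9, tau_model20), 2))  # 0.03
```

The same formula applies to the within-school component by substituting the two models' σ² estimates.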
Table 25. Comparison of R² between Level-2 School Background Models and the Foundational Level-1 Model for USA

Compared Model  τ00  σ²
19 vs. 9  0.00  0.01
20 vs. 9  0.03  0.00
21 vs. 9  0.02  0.00

Similar to Model 19, the combined model with all of the school background-related predictors produced two statistically significant cross-level interaction effects. One interaction was between class size for math instruction and student self-confidence in learning math (γ = 5.20, SE = 1.95, p = .009) and the other was between class size for math instruction and student valuing of math (γ = 5.23, SE = 2.58, p = .044). Again, in Model 22, the slope variance of self-confidence in learning math was not statistically significant (τ = 23.18, SE = 4.81, p = .104), whereas the remaining random effects were statistically significant. This indicated that the relationship between self-confidence in learning math and math achievement was not statistically significantly different across schools in the U.S.

Table 26. Parameter Estimates for the Combined School Background Model for USA

Model  Effect  Parameter  Estimate  SE  p
22  Fixed  INT  515.93  14.62  <.001
    Instructional limitation  24.90  6.49  <.001
    Class size  2.25  8.20  .784
    School resources  13.00  8.54  .130
    Extra lessons  15.58  1.05  <.001
    Self-confidence  26.24  3.74  <.001
    Instructional limitation*Self-confidence  1.67  1.60  .301
    Class size*Self-confidence  5.20  1.95  .009
    School resources*Self-confidence  1.26  2.39  .599
    Valuing math  5.55  6.79  .415
    Instructional limitation*Valuing math  0.98  2.32  .671
    Class size*Valuing math  5.23  2.58  .044
    School resources*Valuing math  2.01  3.27  .539
    Homework time  12.08  6.77  .076
    Instructional limitation*Homework time  3.18  2.84  .265
    Class size*Homework time  2.36  3.19  .461
    School resources*Homework time  5.50  3.91  .162
Table 26 (continued).

    Home resources  3.56  1.56  .022
Random  τ00  1688.59  41.09  <.001
    Self-confidence  23.18  4.81  .104
    Valuing math  93.15  9.65  <.001
    Homework time  255.36  15.98  <.001
    σ²  1903.64  43.63

As shown in Table 27, Model 22 appears to be less efficient than Models 9 and 19–21. Whereas the amount of explained variance within schools in Model 22 did not change compared to these models (pseudo-R² = 0), the amount of explained variance between schools in Model 22 decreased by 4% (compared to Model 20) to 7% (compared to Models 9 and 19).

Table 27. Comparison of R² between Model 22 and Previously Constructed Models 9 and 19–21 for USA

Compared Model  τ00  σ²
22 vs. 9  0.07  0.00
22 vs. 19  0.07  0.00
22 vs. 20  0.04  0.00
22 vs. 21  0.05  0.00

The modeled means of predicted math achievement for the two interactions observed in Model 22 are displayed in Figures 14–15. The data in Figure 14 suggested that for students who reported having high self-confidence in learning math, changes from a small class size (i.e., 1–24 students) to a large class size (i.e., 41+ students) tended to lower their math scores significantly. Conversely, for students who reported having low self-confidence in learning math, increases in class size appeared to improve their math scores. Thus, it appears the math achievement gap among eighth-grade students with different levels of self-confidence in learning math was most substantial when they learned math in small classes and became smaller when they learned math in large classes.
Figure 14. Interaction between Class Size for Math Instruction and Self-confidence in Learning Math for USA (predicted math achievement across class-size categories of 1–24, 25–32, 33–40, and 41+ students, plotted for low, medium, and high self-confidence)

Interestingly, as shown in Figure 15, in schools with small class sizes (i.e., 1–24 students), students who reported having a low level of valuing math tended to achieve higher math scores than their peers who reported having medium or high levels of valuing math. This pattern of relationship, however, appears to reverse in schools with larger classes. That is, students with medium and high levels of valuing math tended to perform better than their peers who reported having a low level of valuing math. Nevertheless, a similar trend was noted for all of the students, regardless of their levels of valuing math. That is, changes from smaller classes to bigger classes were associated with increased math scores.
Figure 15. Interaction between Class Size for Math Instruction and Valuing of Math for USA (predicted math achievement across class-size categories of 1–24, 25–32, 33–40, and 41+ students, plotted for low, medium, and high valuing of math)

Final Model

With the intention of identifying the most efficient and parsimonious model to predict eighth-grade math achievement in the USA, Model 23 was built and compared with the three combined models, Models 14, 18, and 22. It is also worth noting that in Model 23, the slope variance for student self-confidence in learning math was set to 0 because it was not statistically significant in Models 14 and 22.

As can be seen from Table 28, Model 23 produced several statistically significant fixed effects. The three significant level-2 main effects included teacher perception of math instructional limitations due to student factors (γ = 32.35, SE = 7.29, p < .001), opportunity to learn data (γ = .53, SE = .22, p = .015), and opportunity to learn geometry (γ = .37, SE = .16, p = .025). The three significant level-1 main effects were:
extra math lessons (γ = 15.43, SE = 1.06, p < .001), self-confidence in learning math (γ = 26.63, SE = 1.26, p < .001), and home resources (γ = 3.65, SE = 1.54, p = .018). And the two significant cross-level interactions were found between opportunity to learn geometry and time student spent on homework (γ = .21, SE = .07, p = .005) and between teacher ready to teach number and time student spent on homework (γ = 38.58, SE = 18.95, p = .043). Also, in this model, all the random effects were statistically significant.

Table 28. Parameter Estimates for the Full Model for USA

Model  Effect  Parameter  Estimate  SE  p
23  Fixed  INT  393.91  167.34  .020
    Homework assignment  32.35  7.29  <.001
    Instructional limitation  11.75  6.23  .061
    Class size  2.81  6.74  .677
    Opportunity_algebra  0.43  0.22  .050
    Opportunity_data  0.53  0.22  .015
    Opportunity_geometry  0.37  0.16  .025
    Opportunity_measurement  0.05  0.22  .829
    Opportunity_number  0.47  1.86  .800
    Ready_number  19.05  30.45  .532
    Ready_algebra  7.44  14.37  .605
    Ready_measurement  6.46  19.89  .746
    Ready_geometry  13.68  20.96  .515
    Ready_data  6.48  18.44  .726
    Extra lessons  15.43  1.06  <.001
    Self-confidence  26.63  1.26  <.001
    Valuing math  3.80  46.33  .935
    Homework assignment*Valuing math  3.15  2.60  .228
    Instructional limitation*Valuing math  1.16  2.24  .605
    Class size*Valuing math  3.75  2.54  .143
    Opportunity_algebra*Valuing math  0.02  0.09  .777
    Opportunity_data*Valuing math  0.03  0.10  .746
    Opportunity_geometry*Valuing math  0.05  0.08  .554
    Opportunity_measurement*Valuing math  0.14  0.11  .195
    Opportunity_number*Valuing math  0.14  0.54  .799
    Ready_number*Valuing math  9.28  10.23  .366
    Ready_algebra*Valuing math  0.92  7.71  .905
    Ready_measurement*Valuing math  2.22  6.04  .714
    Ready_geometry*Valuing math  1.71  4.47  .703
    Ready_data*Valuing math  2.89  6.42  .653
Table 28 (continued).

    Homework time  14.22  60.67  .815
    Homework assignment*Homework time  3.44  3.53  .332
    Instructional limitation*Homework time  5.69  3.16  .073
    Class size*Homework time  5.09  3.42  .139
    Opportunity_algebra*Homework time  0.03  0.13  .798
    Opportunity_data*Homework time  0.17  0.13  .198
    Opportunity_geometry*Homework time  0.21  0.07  .005
    Opportunity_measurement*Homework time  0.20  0.13  .127
    Opportunity_number*Homework time  0.29  0.75  .696
    Ready_number*Homework time  38.58  18.95  .043
    Ready_algebra*Homework time  10.40  8.06  .199
    Ready_measurement*Homework time  3.00  10.58  .777
    Ready_geometry*Homework time  2.52  8.49  .767
    Ready_data*Homework time  12.29  7.65  .110
    Home resources  3.65  1.54  .018
Random  τ00  1371.00  37.03  <.001
    Valuing math  99.05  9.95  <.001
    Homework time  225.58  15.02  <.001
    σ²  1918.98  43.81

Surprisingly, when comparing this full model with the earlier combined models, it was found that Model 23 was more efficient than Models 18 and 22 but less efficient than Model 14. Specifically, whereas the amount of between-school variance accounted for by Model 23 increased by 17% as compared to Model 18 and by 24% as compared to Model 22, it decreased by 8% as compared to Model 14. Therefore, for the USA, Model 14 serves as the best model for predicting math achievement.

Table 29. Comparison of R² between Model 23 and Previously Constructed Models 14, 18, and 22 for USA

Compared Model  τ00  σ²
23 vs. 14  0.08  0.01
23 vs. 18  0.17  0.01
23 vs. 22  0.24  0.01

Figure 16 visually displays the nature of the cross-level interaction between opportunity to learn geometry and time student spent on homework. Interestingly, in the
presence of other level-2 factor predictors in the model (i.e., student background, student home resources, instructional practices, teacher background, and school background), this interaction had a different pattern from what was observed in Model 14, where only instructional practice-related predictors were included in the model. Specifically, Model 23 suggested that when the opportunity to learn geometry was low, students tended to score low in math, and that the achievement gaps across students with different levels of time on homework were relatively small. However, as the opportunity to learn increased, students who spent a medium or low amount of time on homework tended to gain higher math scores whereas students who spent a high amount of time on homework tended to perform slightly more poorly in math.

Figure 16. Interaction between Opportunity to Learn Geometry by Time Student Spent on Homework for USA (predicted math achievement plotted against opportunity to learn geometry for low, medium, and high time on homework)
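Interaction plots such as Figures 10 through 17 are produced from the fixed effects alone: the prediction equation is evaluated over a grid of the level-2 variable at chosen low, medium, and high levels of the level-1 moderator. A sketch with made-up coefficients (not Model 23's estimates), where the interaction term changes the opportunity slope as time on homework rises:

```python
# Predicted score from fixed effects only:
# y_hat = g0 + g1*opportunity + g2*homework + g3*(opportunity * homework)
g0, g1, g2, g3 = 470.0, 0.4, 10.0, -0.2   # hypothetical values

def predicted(opportunity: float, homework: float) -> float:
    return g0 + g1 * opportunity + g2 * homework + g3 * opportunity * homework

# Evaluate across the opportunity grid at three moderator levels
levels = {"Low": 0.0, "Medium": 1.0, "High": 2.0}
for label, hw in levels.items():
    line = [round(predicted(opp, hw), 1) for opp in (0, 50, 100)]
    print(label, line)
```

With these toy values the opportunity slope is 0.4 at low homework and flattens to 0 at high homework, the kind of slope change the plots visualize; the vertical-axis caveat noted earlier applies because only relative differences matter.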
Figure 17 presents the model-predicted math achievement based upon teacher ready to teach number and time student spent on homework. Unexpectedly, across the amounts of time on homework, students whose teachers reported being very ready to teach number tended to achieve lower math scores than their peers whose teachers reported being ready to teach number. The achievement gap between these students, however, was small when their time on homework was low and became more substantial when the amount of homework was high. It is important to note that this graph was constructed based on 149 teachers who reported being very ready to teach number and only 4 teachers who reported being ready to teach number.

Figure 17. Interaction between Teacher Reported Ready to Teach Number by Time Student Spent on Homework for USA (predicted math achievement at low, medium, and high time on homework for the ready-to-teach and very-ready-to-teach groups)
Results for Canada

Evaluation of Missing Data

As a result of the listwise deletion method, the sample size for Canada was reduced from 8,473 students and 354 schools to 6,248 students and 271 schools. This means approximately 73.74% of the original sample had complete data on all variables of interest in this study. In order to evaluate the extent to which the data for Canada were missing completely at random, the missingness on 19 level-2 variables was correlated. Results of this analysis suggested nonrandomness of missing data, with the majority of the correlation coefficients larger than .70, indicating strong positive relationships among missingness indicators of the variables. In addition, when missingness was correlated with the values of the variables themselves as well as the values of other variables, only weak correlations were observed (r = .15 to .23, p = .005). In summary, the missing data mechanism for Canada was not missing completely at random.

Univariate Analysis

A descriptive examination of level-1 variables (i.e., overall math achievement, gender, self-confidence in learning math, valuing of math, time on math homework, extra math lessons, and home resources for learning math) was conducted using SAS 9.1.3. Of the complete sample of 6,248 eighth-grade students, 3,092 (49.49%) were male and 3,156 (50.51%) were female. On average, the weighted overall math achievement for Canadian students was 529.30 (SD = 61.17), with the lowest score of 322.82 and the highest score of 728.39 (see Table 30).
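The complete-case retention figure above is simply the ratio of retained to original cases. A small sketch, reproducing the Canada percentage from the counts in the text and illustrating listwise deletion itself with pandas on a toy data frame:

```python
import pandas as pd

# Canada sample before and after listwise deletion (counts from the text)
print(round(6248 / 8473 * 100, 2))  # 73.74

# Listwise deletion: drop any row with a missing value on any variable
df = pd.DataFrame({"math": [510.0, None, 495.0],
                   "confidence": [2.0, 1.0, None]})
complete = df.dropna()  # keeps only fully observed rows
print(len(complete), "of", len(df))  # 1 of 3
```

As the text notes, listwise deletion is only innocuous when data are missing completely at random, which the correlation check above argued was not the case here.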
With regard to level-1 predictor variables, it appeared that, on average, eighth-grade students in Canada had good support at home in terms of resources for learning (M = 2.85, SD = .41), were above the medium level of self-confidence in learning math (M = 1.47, SD = .75) and valuing of math (M = 1.58, SD = .61), spent little time on math homework (M = .85, SD = .58), and only had extra math lessons occasionally (M = .45, SD = .76) (see Table 30).

Table 30. Weighted Descriptive Statistics for Level-1 Variables for Canada (N = 6,248)

Variable  M  SD  Min  Max
Overall math achievement  529.30  61.18  322.82  728.39
Self-confidence in learning math  1.47  0.75  0  2
Valuing of math  1.58  0.61  0  2
Time on math homework  0.85  0.58  0  2
Extra math lessons  0.45  0.76  0  3
Home resources for learning math  2.85  0.41  0  3
Note: When the weight was used to compute means in SAS, skewness and kurtosis were not produced.

In terms of distributions of level-1 variables, the unweighted descriptive results in Table 31 suggested that all but two variables, extra math lessons and home resources for learning math, approximated normality, with skewness and kurtosis values within the range of −1.00 to 1.00.

Table 31. Unweighted Descriptive Statistics for Level-1 Variables for Canada (N = 6,248)

Variable  M  SD  Min  Max  Skewness  Kurtosis
Overall math achievement  528.34  60.91  322.82  728.39  0.07  0.30
Self-confidence in learning math  1.44  0.75  0  2  0.92  0.63
Valuing of math  1.56  0.61  0  2  1.05  0.08
Time on math homework  0.84  0.60  0  2  0.08  0.38
Extra math lessons  0.44  0.77  0  3  1.71  2.13
Home resources for learning math  2.85  0.42  0  3  3.03  10.37

Similarly, a descriptive analysis was conducted on the 19 predictor variables at the school level. As shown in Table 32, on average, Canadian students had a moderate to
high percentage of opportunity to learn math content domains, ranging from 59.14 (SD = 28.26) for algebra to 95.85 (SD = 8.88) for number. Although less than half of math teachers reported being prepared to teach math content (M = .44, SD = .50), on average, they participated in various types of math-related professional development (M = 2.87, SD = 1.81) and reported a high level of readiness to teach (M = 1.75, SD = .44 for data to M = 1.94, SD = .24 for number).

The data also suggested that in nearly half of the lessons, students were given activities related to math instructional practice (M = 1.81, SD = .27) and math content (M = 1.70, SD = .22). On average, a medium amount of homework was assigned to the students (M = 1.14, SD = .61). Finally, class size in Canadian schools tended to be small, less than 33 students (M = .73, SD = .62), and teachers' perception of instructional limitations due to student factors was low (M = .52, SD = .70). On average, the availability of school resources for math instruction was relatively high (M = 1.41, SD = .56). Noticeably, across the 271 schools, the average math instructional hours per year varied greatly, ranging from 30 to 388, with a mean of 161.57 (SD = 44.72).

Also, as shown in Table 32, 12 out of the 19 level-2 predictor variables appeared to have approximately normal distributions, with skewness and kurtosis values within the normality approximation range of −1.00 to 1.00. The seven variables that appeared to depart from normality included opportunity to learn number; preparation to teach; ready to teach number, algebra, measurement, geometry, and data; and average math instructional hours per year.
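The contrast between the weighted statistics in Table 30 and the unweighted ones in Tables 31 and 32 comes down to applying the TIMSS sampling weights when averaging. A hedged numpy sketch on toy data (hypothetical scores and weights, not the TIMSS file), using one common definition of the weighted SD:

```python
import numpy as np

scores = np.array([520.0, 480.0, 560.0, 500.0])
weights = np.array([1.0, 2.0, 1.0, 2.0])   # hypothetical sampling weights

# Weighted mean: sum(w*x) / sum(w)
w_mean = np.average(scores, weights=weights)

# Weighted variance about the weighted mean (population-style definition)
w_var = np.average((scores - w_mean) ** 2, weights=weights)
w_sd = np.sqrt(w_var)
print(round(w_mean, 1), round(w_sd, 1))  # 506.7 27.5
```

Heavier-weighted students pull the mean toward their scores, which is why the weighted and unweighted means in Tables 30 and 31 differ slightly.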
Table 32. Unweighted Descriptive Statistics for Level-2 Variables for Canada (N = 271)

Variable  M  SD  Min  Max  Skewness  Kurtosis
Opportunity to learn number  95.85  8.88  40  100  3.01  11.41
Opportunity to learn algebra  59.14  28.26  0  100  0.18  0.75
Opportunity to learn measurement  77.04  23.23  0  100  1.00  0.36
Opportunity to learn geometry  72.86  19.12  0  100  0.92  1.24
Opportunity to learn data  60.61  32.30  0  100  0.35  1.15
Amount of homework assignment  1.14  0.61  0  2  0.09  0.42
Instructional practice-related activities in math lessons  1.81  0.27  1.13  2.71  0.18  0.01
Content-related activities in math lessons  1.70  0.22  1.04  2.32  0.21  0.43
Preparation to teach  0.44  0.50  0  1  0.26  1.95
Ready to teach number  1.94  0.24  1  2  3.76  12.25
Ready to teach algebra  1.82  0.39  0  2  1.89  2.15
Ready to teach measurement  1.83  0.39  0  2  1.92  2.32
Ready to teach geometry  1.88  0.35  0  2  2.84  7.67
Ready to teach data  1.75  0.44  0  2  1.26  0.05
Math-related professional development  2.87  1.81  0  5  0.35  1.25
Class size for math instruction  0.73  0.62  0  2  0.26  0.62
School resources for math instruction  1.41  0.56  0  2  0.28  0.84
Teacher perception of math instructional limitations due to student factors  0.52  0.70  0  2  0.99  0.32
Average math instructional hours per year  161.57  44.72  30  388  1.10  4.92

Bivariate Analysis

The results of weighted bivariate correlations among six level-1 variables (i.e., gender, self-confidence in learning math, valuing of math, time on math homework, extra math lessons, and home resources for learning math) are presented in Appendix G. It appeared from these results that level-1 predictor variables were only weakly correlated with each other, with correlation coefficients ranging from .24 between self-confidence in learning math and extra math lessons to .37 between self-confidence in learning math and valuing of math. It was interesting to note that gender tended to have a negative albeit weak
relationship with all other level-1 variables except for home resources for learning math (r = .03).
At level 2, unweighted bivariate relationships were estimated for the 19 predictor variables. The correlation matrix for these variables can be found in Appendix H. Unlike at level 1, the correlation coefficients for the level-2 variables had a wider range, from −.27 between percentage of opportunity to learn measurement and preparation to teach to .53 between ready to teach algebra and ready to teach measurement. As expected, correlation coefficients among variables measuring the same construct tended to be stronger than those among variables measuring different constructs. For example, the correlations ranged from .39 to .53 for the ready to teach variables. Interestingly, opportunity to learn number had very weak correlations with the other opportunity to learn variables (r = .08 to .14). Another interesting pattern was observed for the number of math instructional hours per year, whose correlation coefficients with the other variables were mostly close to 0. Similarly, teachers' perception of math instructional limitations due to student factors was found to have very weak to almost no relationship with the other variables (r = −.11 to .09).

Evaluation of HLM Assumptions
In order to ensure the tenability of results produced by the multilevel models in this study, an evaluation of HLM assumptions through visual analysis of both level-1 and level-2 random effects of Model 23 was performed. Model 23 was selected because the results of the HLM analysis suggested that it was the most efficient model to predict math achievement in Canada (see HLM Analysis for Canada).
The data from Figure 18 suggested that the level-1 residuals approximated a normal distribution. In terms of variance, the scatter plot between level-1 residuals and predicted
math achievement, as illustrated in Figure 19, suggested that there was evidence of homogeneity of level-1 variance.

Figure 18. Histogram for Level-1 Residuals for Canada

Figure 19. Level-1 Residuals by Predicted Math Achievement for Canada

For the level-2 random effects, the empirical Bayes residuals for the intercepts and slopes as well as the empirical Bayes predicted math scores were used to construct the graphs in Figures 20–27. As can be seen from Figures 20–27, the level-2 intercept residuals appeared to have a normal distribution and homogeneous variance.
Figure 20. Histogram for Level-2 Intercept Residuals for Canada

Figure 21. Level-2 Intercept Residuals by Predicted Intercept for Canada

Similarly, Figure 22 suggests that the level-2 residuals for the slope of gender approximated a normal distribution, and Figure 23 provides evidence of homogeneity of variance.
Figure 22. Histogram for Level-2 Slope (Gender) Residuals for Canada

Figure 23. Level-2 Slope (Gender) Residuals by Predicted Math Achievement for Canada

Likewise, for the slope of extra math lessons, it can be seen from Figures 24–25 that the slope residuals approximated a normal distribution and had homogeneous variance.
Figure 24. Histogram for Level-2 Slope (Extra Lessons) Residuals for Canada

Figure 25. Level-2 Slope (Extra Lessons) Residuals by Predicted Math Achievement for Canada

Finally, an examination of Figures 26–27 also suggested that the level-2 residuals for the slope of self-confidence had an approximately normal distribution and that their variances across schools were relatively homogeneous.
In summary, visual analyses of both level-1 and level-2 random effects suggested that the assumptions of normality and homogeneity of level-1 and level-2 random effects were satisfied.
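The visual checks summarized above can also be approximated numerically. The sketch below (Python) is a generic stand-in, not the study's actual HLM output: the residual and fitted arrays would come from whatever software fit the models, and the function name is our own. It computes the two things the plots are meant to reveal, departure of the residuals from normality and a trend of residual spread against the predicted values:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def residual_diagnostics(resid, fitted):
    """Numeric stand-in for a residual histogram and a
    residual-vs-fitted scatter plot."""
    resid = np.asarray(resid, dtype=float)
    fitted = np.asarray(fitted, dtype=float)
    # Normality: skewness and excess kurtosis should be near 0.
    s, k = skew(resid), kurtosis(resid)
    # Homogeneity: |residual| should not trend with the fitted values;
    # a correlation near 0 is consistent with homogeneous variance.
    spread_trend = np.corrcoef(np.abs(resid), fitted)[0, 1]
    return {"skew": s, "kurtosis": k, "spread_trend": spread_trend}
```

Values of skewness, kurtosis, and the spread trend all near zero correspond to the "approximately normal, homogeneous variance" reading of Figures 18–27.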
Figure 26. Histogram for Level-2 Slope (Self-Confidence) Residuals for Canada

Figure 27. Level-2 Slope (Self-Confidence) Residuals by Predicted Math Achievement for Canada

HLM Analysis
Unconditional model (Model 1)
The HLM analysis started with the unconditional model, in which no level-1 or level-2 predictors were included. The results of the unconditional model are presented in Table 33. For Canada, the fixed effect for the intercept was 527.33 (SE = 2.55, p < .001). The amount of variability in math achievement was significantly different across schools in Canada (τ00 = 1,028.87, SE = 32.08, p < .001). Within schools, the
amount of unexplained variance was much larger than that between schools (σ² = 2,650.40, SE = 51.48). The computed intraclass correlation (ICC) of .28 indicated that a modest level of natural clustering of students occurred between schools in Canada. In other words, approximately 28% of the total variance in math scores occurred between schools.

Table 33. Parameter Estimates for Unconditional Model for Canada
Model  Effect  Parameters  Estimates  SE  p
1  Fixed  ICC  0.28
       INT  527.33  2.55  <.001
   Random  τ00  1028.87  32.08  <.001
       σ²  2650.40  51.48

Research Question 1
To what extent are student background variables (i.e., gender, self-confidence in learning math, valuing of math, time on math homework, and tutoring in math) associated with TIMSS 2003 eighth-grade math scores in each country?
In order to answer this research question, first, each of the student background variables was entered separately into Model 1 to predict math achievement. Then, as a group, those variables that contributed significantly in Models 2–6 were included in Model 7 to predict math achievement. Finally, in order to evaluate model fit in terms of the proportion of variance accounted for, a pseudo R2 was computed for the current model against previously constructed models. Results of these models (Models 2–6) are presented in Table 34.
The data from Table 34 suggested that all of the fixed effects estimated by Models 2–6 were statistically significant, except for that of time on homework in Model 5 (γ = 1.98, SE = 1.64, p = .23). Likewise, all of the random effects estimated in Models 2–6
were statistically significant, except for those of valuing math in Model 4 (τ = 14.64, SE = 3.83, p = .44) and time on homework in Model 5 (τ = 13.85, SE = 3.72, p = .10). Interestingly, whereas self-confidence in learning math (Model 3) and valuing of math (Model 4) appeared to have positive relationships with math achievement (γ = 40.04 and 21.13, SE = 1.05 and 1.41, p < .001 and .001, respectively), gender (Model 2) and extra math lessons (Model 6) appeared to have negative relationships with math achievement (γ = −4.74 and −21.70, SE = 2.26 and 1.41, p = .036 and < .001, respectively).
An examination of the pseudo R2 across the five models (Models 2–6) suggested that adding each individual predictor separately to the unconditional model (Model 1) resulted in a reduction of between 0% (Model 5) and 32% (Model 3) in the within-school variance. For the between-school variance, however, the amount of reduction was smaller, at most 11% (Model 6). In fact, in Models 2–4, the amount of between-school variance even increased (i.e., from 6% in Model 3 to 15% in Model 2).

Table 34. Parameter Estimates for Models 2–6 (Level-1 Student Background) for Canada
Model  Effect  Parameters  Estimates  SE  p
2  Fixed  INT  529.78  2.86  <.001
       Gender  −4.74  2.26  .036
   Random  τ00  1186.81  34.45  <.001
       Gender  195.52  13.98  <.001
       σ²  2597.50  50.97
   Pseudo R2 (τ00, σ²)  −0.15  0.02
3  Fixed  INT  468.80  2.83  <.001
       Self-confidence  40.04  1.05  <.001
   Random  τ00  1086.53  32.96  <.001
       Self-confidence  43.98  6.63  .003
       σ²  1792.93  42.34
   Pseudo R2 (τ00, σ²)  −0.06  0.32
4  Fixed  INT  493.85  3.33  <.001
       Valuing math  21.13  1.41  <.001
   Random  τ00  1117.09  33.42  <.001
       Valuing math  14.64  3.83  .443
       σ²  2494.37  49.94
   Pseudo R2 (τ00, σ²)  −0.09  0.06
5  Fixed  INT  525.69  2.83  <.001
       Homework time  1.98  1.64  .229
   Random  τ00  955.34  30.91  <.001
       Homework time  13.85  3.72  .096
       σ²  2645.16  51.43
   Pseudo R2 (τ00, σ²)  0.07  0.00
6  Fixed  INT  536.79  2.41  <.001
       Extra lessons  −21.70  1.41  <.001
   Random  τ00  914.89  30.25  <.001
       Extra lessons  79.82  8.93  <.001
       σ²  2377.11  48.76
   Pseudo R2 (τ00, σ²)  0.11  0.10
Note: Pseudo R2 refers to the difference in the proportion of variance accounted for between the current models (Models 2–6) and the unconditional model (Model 1).

As the next step of model building, all of the student background variables (i.e., gender, self-confidence in learning math, valuing of math, time on math homework, and tutoring in math) were included in a combined model, Model 7, to predict math achievement. Interestingly, in the presence of the other variables in the model, only three of the five predictors had statistically significant fixed effects. With a fixed effect of −12.80 (SE = 1.17, p < .001) for extra math lessons, it could be inferred that for each unit increase in extra math lessons (i.e., from 0 for never to 3 for daily), students' math scores were expected to
decrease by 12.80 points in their math scores while controlling for the other predictors in the model. Similarly, with a fixed effect of 35.88 (SE = 1.19, p < .001) for self-confidence in learning math, it could be interpreted that for each unit increase in level of self-confidence in learning math (i.e., from 0 for low to 2 for high), students were expected to improve their math scores by 35.88 points while controlling for the other predictors in the model. Likewise, with a fixed effect of 4.81 (SE = 1.30, p < .001) for student valuing of math, it could be inferred that for each unit increase in level of student valuing of math (i.e., from 0 for low to 2 for high), students were expected to gain 4.81 more points in their math scores after adjusting for the other predictors in the model.
In terms of random effects, all were found statistically significant, except for those of student valuing of math (τ = 16.81, SE = 4.10, p > .500) and time spent on homework (τ = 7.38, SE = 2.72, p > .500). With a variance for the intercept of 1,381.93 (SE = 37.17, p < .001), it could be inferred that statistically significant differences existed across the school means of math achievement after adjusting for the five student background variables in the model. Similarly, it could be interpreted that schools varied significantly in the relationships between math achievement and student gender (τ = 77.76, SE = 8.82, p = .001), extra math lessons (τ = 48.11, SE = 6.94, p < .001), and student self-confidence in learning math (τ = 53.04, SE = 7.28, p < .001).

Table 35. Parameter Estimates for Model 7 (Level-1 Student Background) for Canada
Model  Type  Parameters  Estimates  SE  p
7  Fixed  INT  472.20  3.72  <.001
       Gender  1.35  1.65  .415
       Extra lessons  −12.80  1.17  <.001
       Self-confidence  35.88  1.19  <.001
       Valuing math  4.81  1.30  <.001
       Homework time  0.08  1.21  .950
   Random  τ00  1381.93  37.17  <.001
       Gender  77.76  8.82  .001
       Extra lessons  48.11  6.94  <.001
       Self-confidence  53.04  7.28  .001
       Valuing math  16.81  4.10  >.500
       Homework time  7.38  2.72  >.500
       σ²  1656.53  40.70

An evaluation of model fit was also conducted between Model 7 and the previously constructed models, Models 2–6. As expected, the inclusion of all the student background variables in Model 7 yielded a considerable reduction in the amount of unexplained variance in math achievement within schools, from 8% to 37% (see Table 36). Unexpectedly, between schools, the amount of variance appeared to increase notably, from 16% to 51%. In sum, Model 7 was more efficient than the earlier models in that it accounted for more variance in math achievement within schools in Canada. However, Model 7 appeared to be less efficient than the previously constructed models in that it accounted for less variance in math achievement between schools in Canada.

Table 36. Comparison of R2 between Model 7 and Previously Constructed Models for Canada
Previous Model  τ00  σ²
2  −0.16  0.36
3  −0.27  0.08
4  −0.24  0.34
5  −0.45  0.37
6  −0.51  0.30

Research Question 2
To what extent are home resources variables (i.e., availability of a calculator, computer, and desk for student use) associated with TIMSS 2003 eighth-grade math scores in each country?
Interestingly, when the level-1 predictor home resources was added to the unconditional model to predict math achievement, there was a considerable increase of 56% in the between-school variance, whereas the amount of reduction in the within-school variance was trivial, at .4% (see Table 37). In this model, home resources had a statistically significant relationship with math achievement (γ = 7.48, SE = 1.76, p < .001). This means that for every unit increase in home resources (i.e., from 0 to 3), math achievement was expected to increase by 7.48 points, without controlling for other variables. In addition, with the random effect for home resources being not statistically significant (τ = 11.51, SE = 3.39, p > .50), it could be inferred that the relationship between home resources and math achievement tended to be similar across schools in Canada.

Table 37. Parameter Estimates for Level-1 Home Resources Model for Canada
Model  Effect  Parameters  Estimates  SE  p
8  Fixed  INT  506.17  5.89  <.001
       Home resources  7.48  1.76  <.001
   Random  τ00  1604.39  40.05  .110
       Home resources  11.51  3.39  >.500
       σ²  2640.68  51.39
   Pseudo R2 (τ00, σ²)  −0.56  0.004
Note: Pseudo R2 refers to the difference in the proportion of variance between Model 8 and Model 1.

Given the findings obtained from Models 7 and 8, five of the six student-related variables were entered into the unconditional model to make Model 9. Time spent on homework was excluded from Model 9 because both its fixed and random effects were not statistically significant in Model 7. Also, in Model 9, the slope variances of student valuing of math and home resources were set to 0 because they were not statistically significant in the earlier models.
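The variance-partition quantities used throughout this chapter follow directly from the reported variance components: ICC = τ00 / (τ00 + σ²), and pseudo R2 = (component in the comparison model − component in the current model) / component in the comparison model. The sketch below (Python) reproduces the Model 1 ICC and two of the pseudo R2 values reported above from the tabled estimates:

```python
def icc(tau00, sigma2):
    """Proportion of total variance lying between schools."""
    return tau00 / (tau00 + sigma2)

def pseudo_r2(previous, current):
    """Proportional reduction in a variance component relative to a
    previously fitted model; negative values mean the component grew."""
    return (previous - current) / previous

# Model 1 (unconditional) components from Table 33:
print(round(icc(1028.87, 2650.40), 2))        # 0.28
# Within-school reduction, Model 3 vs. Model 1 (Table 34):
print(round(pseudo_r2(2650.40, 1792.93), 2))  # 0.32
# Between-school reduction, Model 9 vs. Model 8 (Tables 37-38):
print(round(pseudo_r2(1604.39, 1236.43), 2))  # 0.23
```

The same arithmetic, with the sign convention noted in the docstring, generates every pseudo R2 entry in Tables 34–44.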
As can be seen from Table 38, with the presence of the other predictors in Model 9, only extra math lessons, self-confidence in learning math, and student valuing of math had statistically significant relationships with math achievement. Specifically, whereas self-confidence in learning math (γ = 35.79, SE = 1.20, p < .001) and student valuing of math (γ = 4.51, SE = 1.28, p = .001) were positively related to math achievement, an inverse relationship was observed between math achievement and extra math lessons (γ = −12.89, SE = 1.17, p < .001). This could be interpreted to mean that the more self-confidence students expressed in learning math and the higher the value students placed on math, the better they performed in math. However, it appears that the more frequently students took extra math lessons, the poorer the math scores they achieved.
In terms of random effects, all were found statistically significant, suggesting that the relationships between the level-1 predictors (i.e., gender, extra math lessons, and self-confidence) and math achievement varied significantly across schools in Canada.
Compared to Model 7, Model 9 appeared more efficient in that it accounted for more variance between schools (11%), even though a marginal increase in the variance within schools (1%) was noted. Compared to Model 8, Model 9 accounted for a significantly higher amount of the variance within schools (37%) and the variance between schools (23%). As a result of these comparisons, Model 9 was selected as the foundational level-1 model for further examination of the relationships between level-2 predictors and math achievement.

Table 38. Parameter Estimates for Combined Level-1 Predictors Model for Canada
Model  Type  Parameters  Estimates  SE  p
9  Fixed  INT  464.55  6.19  <.001
       Gender  1.29  1.65  .435
       Extra lessons  −12.89  1.17  <.001
       Self-confidence  35.79  1.20  <.001
       Valuing math  4.51  1.28  .001
       Home resources  2.90  1.76  .100
   Random  τ00  1236.43  35.16  <.001
       Gender  68.64  8.28  .005
       Extra lessons  44.42  6.66  <.001
       Self-confidence  58.44  7.64  <.001
       σ²  1665.98  40.82
   Pseudo R2 vs. Model 7 (τ00, σ²)  0.11  −0.01
   Pseudo R2 vs. Model 8 (τ00, σ²)  0.23  0.37
Note: Pseudo R2 refers to the difference in the proportion of variance between Model 9 and Models 7–8.

Research Question 3
To what extent are instructional variables (i.e., opportunity to learn, activities in math lessons, amount of homework assignment, and instructional time) associated with TIMSS 2003 eighth-grade math scores in each country?
In addressing this research question, a model building strategy similar to that used for Research Question 1 was applied. That is, each of the level-2 instructional practice variables was first added to the foundational level-1 model (Model 9) to make Models 10–13. Then, as a group, those variables with significant fixed effects in Models 10–13 were included in the combined instructional practices model, Model 14. It is important to note that in these models, the predictor home resources was excluded because both its fixed and random effects were not statistically significant in earlier models. Also, all possible cross-level interactions between level-1 and level-2 predictors were allowed in these models. The results of Models 10–14 are presented in Tables 39–41.
As can be seen in Table 39, Model 10, with opportunity to learn math topics as level-2 predictors of math achievement, yielded five statistically significant cross-level
interactions: (1) opportunity to learn data by gender (γ = .14, SE = .07, p = .034); (2) opportunity to learn algebra by extra math lessons (γ = .10, SE = .04, p = .017); (3) opportunity to learn geometry by extra math lessons (γ = .12, SE = .06, p = .040); (4) opportunity to learn data by student self-confidence in learning math (γ = .11, SE = .04, p = .004); and (5) opportunity to learn measurement by student self-confidence in learning math (γ = .13, SE = .05, p = .013).
Table 39 also showed that when amount of homework assignment, content-related activities and instructional practice-related activities in math lessons, and average number of math instructional hours per year were added in Models 11–13, no statistically significant cross-level interaction effects were detected. Of the level-2 main effects, two were found statistically significant: instructional practice-related activities in math lessons in Model 12 (γ = 29.07, SE = 12.28, p = .019) and average math instructional hours per year in Model 13 (γ = −.24, SE = .07, p = .001). This means that with every unit increase in instructional practice-related activities in math lessons, students' math scores were expected to increase by 29.07 points after adjusting for the level-1 variables but not for the other level-2 variables in the model. Surprisingly, however, with every unit increase in average math instructional hours per year, student math scores were expected to decrease by .24 points, after adjusting for the level-1 variables but not the level-2 variables in the model.

Table 39. Parameter Estimates for Level-2 Instructional Practices Models for Canada
Model  Type  Parameters  Estimates  SE  p
10  Fixed  INT  432.42  32.12  <.001
       Opportunity_algebra  0.09  0.11  .421
       Opportunity_data  0.46  0.11  <.001
       Opportunity_geometry  0.03  0.14  .859
       Opportunity_measurement  0.04  0.18  .839
       Opportunity_number  0.75  0.31  .017
       Gender  8.34  17.75  .638
       Opportunity_algebra*Gender  0.02  0.06  .769
       Opportunity_data*Gender  0.14  0.07  .034
       Opportunity_geometry*Gender  0.02  0.08  .817
       Opportunity_measurement*Gender  0.08  0.09  .352
       Opportunity_number*Gender  0.13  0.18  .464
       Extra lessons  5.82  14.49  .688
       Opportunity_algebra*Extra lessons  0.10  0.04  .017
       Opportunity_data*Extra lessons  0.03  0.04  .494
       Opportunity_geometry*Extra lessons  0.12  0.06  .040
       Opportunity_measurement*Extra lessons  0.05  0.05  .339
       Opportunity_number*Extra lessons  0.11  0.14  .456
       Self-confidence  29.35  11.09  .009
       Opportunity_algebra*Self-confidence  0.03  0.04  .421
       Opportunity_data*Self-confidence  0.11  0.04  .004
       Opportunity_geometry*Self-confidence  0.06  0.04  .168
       Opportunity_measurement*Self-confidence  0.13  0.05  .013
       Opportunity_number*Self-confidence  0.09  0.10  .368
       Valuing math  4.63  1.33  .001
   Random  τ00  991.40  31.49  <.001
       Gender  71.93  8.48  .009
       Extra lessons  36.52  6.04  .001
       Self-confidence  28.24  5.31  .048
       σ²  1663.55  40.79
11  Fixed  INT  481.13  6.15  <.001
       Homework assignment  7.53  5.09  .140
       Gender  4.02  3.98  .314
       Homework assignment*Gender  4.57  3.07  .138
       Extra lessons  17.22  2.67  <.001
       Homework assignment*Extra lessons  3.71  1.99  .064
       Self-confidence  33.74  2.15  <.001
       Homework assignment*Self-confidence  1.80  1.69  .289
       Valuing math  4.75  1.30  <.001
   Random  τ00  1237.67  35.18  <.001
       Gender  56.97  7.55  .011
       Extra lessons  39.96  6.32  <.001
       Self-confidence  58.18  7.63  <.001
       σ²  1667.94  40.84
12  Fixed  INT  426.43  31.18  <.001
       Content_activities  3.22  17.24  .852
       Instruction_activities  29.07  12.28  .019
       Gender  10.59  14.70  .472
       Content_activities*Gender  3.05  8.38  .716
       Instruction_activities*Gender  8.13  6.49  .212
       Extra lessons  13.95  12.67  .272
       Content_activities*Extra lessons  6.58  7.03  .351
       Instruction_activities*Extra lessons  6.90  5.50  .211
       Self-confidence  38.56  11.21  .001
       Content_activities*Self-confidence  4.96  5.57  .375
       Instruction_activities*Self-confidence  6.28  4.93  .204
       Valuing math  4.58  1.29  .001
   Random  τ00  1213.95  34.84  <.001
       Gender  66.64  8.16  .006
       Extra lessons  45.73  6.76  <.001
       Self-confidence  57.35  7.57  <.001
       σ²  1666.73  40.83
13  Fixed  INT  473.11  3.37  <.001
       Instructional hours  −0.24  0.07  .001
       Gender  1.12  1.66  .500
       Instructional hours*Gender  0.08  0.03  .017
       Extra lessons  12.89  1.15  <.001
       Instructional hours*Extra lessons  0.00  0.02  .988
       Self-confidence  35.71  1.21  <.001
       Instructional hours*Self-confidence  0.04  0.03  .128
       Valuing math  4.74  1.31  .001
   Random  τ00  1146.24  33.86  <.001
       Gender  54.42  7.38  .013
       Extra lessons  45.24  6.73  <.001
       Self-confidence  56.97  7.55  <.001
       σ²  1667.79  40.84

In terms of model fit, in comparison with the foundational level-1 model (Model 9), Model 10 appeared to be the most efficient model because the amount of explained variance between schools increased by 20% (see Table 40). As for the within-school variance, no significant difference was observed between Models 10–13 and Model 9 (pseudo R2 = 0).
Table 40. Comparison of R2 between Level-2 Instructional Practice Models and the Foundational Level-1 Model for Canada
Compared Model  τ00  σ²
10 vs. 9  0.20  0.00
11 vs. 9  0.00  0.00
12 vs. 9  0.02  0.00
13 vs. 9  0.07  0.00

Similar to Model 10, when using all the level-2 instructional practice variables to predict math achievement, Model 14 produced five statistically significant cross-level interaction effects (see Table 41). First, average math instructional hours per year interacted with gender (γ = .08, SE = .04, p = .030). Second, opportunity to learn algebra interacted with extra math lessons (γ = .11, SE = .05, p = .020). Third, opportunity to learn geometry interacted with extra math lessons (γ = .12, SE = .06, p = .029). Fourth, opportunity to learn data interacted with self-confidence in learning math (γ = .11, SE = .04, p = .002). Finally, opportunity to learn measurement interacted with self-confidence in learning math (γ = .14, SE = .05, p = .007). In this model, all the random effects were statistically significant, suggesting that, in Canada, the relationships between math achievement and gender, extra math lessons, and self-confidence in learning math differed significantly across schools.

Table 41. Parameter Estimates for the Combined Level-2 Instructional Practices Model for Canada
Model  Type  Parameters  Estimates  SE  p
14  Fixed  INT  338.66  41.43  <.001
       Homework assignment  1.79  4.48  .688
       Opportunity_algebra  0.04  0.10  .695
       Opportunity_data  0.46  0.11  <.001
       Opportunity_geometry  0.05  0.13  .687
       Opportunity_measurement  0.01  0.17  .975
       Opportunity_number  0.95  0.32  .004
       Content_activities  30.23  16.22  .063
       Instruction_activities  13.98  10.83  .198
       Instructional hours  −0.23  0.07  .002
       Gender  30.92  24.51  .208
       Homework assignment*Gender  4.23  2.93  .150
       Opportunity_algebra*Gender  0.01  0.06  .905
       Opportunity_data*Gender  0.12  0.06  .060
       Opportunity_geometry*Gender  0.01  0.07  .931
       Opportunity_measurement*Gender  0.08  0.09  .355
       Opportunity_number*Gender  0.18  0.19  .339
       Content_activities*Gender  6.29  9.50  .508
       Instruction_activities*Gender  5.09  7.09  .473
       Instructional hours*Gender  0.08  0.04  .030
       Extra lessons  1.85  18.38  .920
       Homework assignment*Extra lessons  2.71  1.72  .116
       Opportunity_algebra*Extra lessons  0.11  0.05  .020
       Opportunity_data*Extra lessons  0.03  0.04  .433
       Opportunity_geometry*Extra lessons  0.12  0.06  .029
       Opportunity_measurement*Extra lessons  0.04  0.05  .356
       Opportunity_number*Extra lessons  0.18  0.15  .229
       Content_activities*Extra lessons  11.52  7.17  .109
       Instruction_activities*Extra lessons  8.51  5.24  .105
       Instructional hours*Extra lessons  0.00  0.02  .909
       Self-confidence  43.34  16.35  .009
       Homework assignment*Self-confidence  0.80  1.66  .631
       Opportunity_algebra*Self-confidence  0.03  0.04  .425
       Opportunity_data*Self-confidence  0.11  0.04  .002
       Opportunity_geometry*Self-confidence  0.06  0.04  .137
       Opportunity_measurement*Self-confidence  0.14  0.05  .007
       Opportunity_number*Self-confidence  0.11  0.11  .315
       Content_activities*Self-confidence  2.96  6.42  .644
       Instruction_activities*Self-confidence  3.73  4.38  .396
       Instructional hours*Self-confidence  0.03  0.02  .223
       Valuing math  4.43  1.33  .001
   Random  τ00  861.96  29.36  <.001
       Gender  60.11  7.75  .025
       Extra lessons  36.74  6.06  .001
       Self-confidence  28.88  5.37  .042
       σ²  1661.12  40.76

As evident in Table 42, compared to the previously constructed models (Models 9–13), the amounts of explained variance between schools in Model 14 were notably larger. For example, an increase of 30% in the explained between-school variance was observed when using Model 14 instead of Models 9 or 11. At minimum, changing from
Model 10 to Model 14 would result in 13% more of the variance between schools being accounted for. However, for the variance within schools, no change was noted across these models. Thus, in consideration of the amount of explained variance between schools, Model 14 surpassed the previously constructed models in predicting math achievement.

Table 42. Comparison of R2 between Model 14 and Previously Constructed Models 9–13 for Canada
Compared Model  τ00  σ²
14 vs. 9  0.30  0.00
14 vs. 10  0.13  0.00
14 vs. 11  0.30  0.00
14 vs. 12  0.29  0.00
14 vs. 13  0.25  0.00

The modeled means of predicted math achievement for the five statistically significant interactions resulting from Model 14 are displayed in Figures 28–32. The data in Figure 28 suggested that there was an inverse relationship between average math instructional hours per year and math achievement and that this relationship differed across the female and male groups. Noticeably, regardless of average math instructional hours per year, female students appeared to outperform male students in math achievement. However, with low average math instructional hours per year, there was only a small gap in math achievement between female and male students. As the average math instructional hours per year increased, the math achievement gap between female and male students became larger.
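The modeled means behind plots such as Figure 28 are simply the fixed-effects equation evaluated at chosen predictor values. The sketch below (Python) illustrates the mechanics; the coefficient magnitudes are patterned on Models 13–14, but the exact values, the sign conventions, and the gender coding (0 = male, 1 = female) are illustrative assumptions, not the study's estimates:

```python
def predicted_math(hours, female, b0=473.0, b_hours=-0.24,
                   b_female=5.0, b_interact=0.08):
    """Fixed-effects prediction with an hours-by-gender interaction.
    All coefficient values are illustrative, not the study's."""
    return (b0 + b_hours * hours + b_female * female
            + b_interact * hours * female)

# With these illustrative coefficients, predicted scores decline with
# instructional hours for both groups, and the female advantage
# (b_female + b_interact * hours) widens as hours increase, matching
# the pattern described for Figure 28.
gap_low = predicted_math(30, 1) - predicted_math(30, 0)
gap_high = predicted_math(380, 1) - predicted_math(380, 0)
```

Evaluating such an equation over a grid of predictor values, one line per group, is exactly how the interaction plots in Figures 28–32 are constructed.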
Figure 28. Interaction between Average Math Instructional Hours per Year and Gender for Canada

The data in Figure 29 depict the interaction between extra math lessons and opportunity to learn algebra. It appeared that when there was little opportunity to learn algebra, students tended to score similarly low in math, regardless of how frequently they took extra math lessons. However, the achievement gaps among students with different levels of extra math lessons grew rapidly as the opportunity to learn algebra increased. Specifically, students who reported taking extra math lessons every day tended to achieve higher math scores than their peers who reported taking extra math lessons only sometimes or once or twice a week. As for the students who reported never taking extra math lessons, their math achievement seemed to decrease slightly when they had more opportunity to learn algebra.
Figure 29. Interaction between Opportunity to Learn Algebra and Extra Math Lessons for Canada

The nature of the interaction between extra math lessons and opportunity to learn geometry is displayed in Figure 30. The results suggest that for students who reported never taking extra math lessons, increases in the opportunity to learn geometry were associated with higher math scores. However, as the frequency of extra math lessons increased, the relationship rapidly became an inverse association, and increases in the opportunity to learn geometry were associated with lower math scores. As an example, students who reported taking extra math lessons every day tended to score lowest in math when the opportunity to learn geometry was highest.
Figure 30. Interaction between Opportunity to Learn Geometry and Extra Math Lessons for Canada

Figure 31 presents the modeled means of math achievement based upon student self-confidence in learning math and opportunity to learn data. Overall, it was noted that increases in opportunity to learn data were associated with decreases in student math scores, regardless of the level of student self-confidence in learning math. However, in comparing the three groups of students, those who reported having a high level of self-confidence in learning math consistently outperformed their peers who reported having a low or medium level of self-confidence in learning math. The differences in math achievement among these three groups of students, however, were small when there was little opportunity to learn data and became slightly larger when there was greater opportunity to learn data.
Figure 31. Interaction between Opportunity to Learn Data and Self-Confidence for Canada

The interaction between opportunity to learn measurement and student self-confidence in learning math is illustrated in Figure 32. The results suggested that, for students with a medium or high level of self-confidence in learning math, increases in opportunity to learn measurement were associated with higher math scores, whereas for students with a low level of self-confidence in learning math, increases in opportunity to learn measurement made no difference in their math scores. Thus, the achievement gaps among these students were smallest when there was little opportunity to learn measurement and largest when there was high opportunity to learn measurement. As expected, regardless of opportunity to learn measurement, students with a higher level of self-confidence in learning math consistently performed better than their peers with a lower level of self-confidence in learning math.
Figure 32. Interaction between Opportunity to Learn Measurement and Self-Confidence for Canada

Research Question 4
To what extent are teacher-related variables (i.e., preparation to teach, ready to teach, and professional development) associated with TIMSS 2003 eighth-grade math scores in each country?
Similarly, incremental model building strategies were applied to examine the relationships among the teacher-related variables (i.e., preparation to teach, ready to teach, and professional development) and math achievement. Results of these models (Models 15–18) are presented in Tables 43–46.
Interestingly, as shown in Table 43, self-confidence in learning math was the only level-1 predictor that had statistically significant interactions with the level-2 predictors in Models 15–17. Specifically, in Model 15, student self-confidence in learning math
PAGE 217
interacted with math-related preparation to teach (γ = 9.40, SE = 2.19, p < .001). Similarly, in Model 16, self-confidence in learning math interacted with ready to teach data (γ = 6.52, SE = 3.18, p = .041). Likewise, in Model 17, self-confidence in learning math interacted with math-related professional development (γ = 1.64, SE = 0.51, p = .002). Also, in these models, all of the random effects were statistically significant, suggesting that a significant amount of variance in math achievement remained unexplained, both within and between schools.

Table 43. Parameter Estimates for Teacher Background Models for Canada

Model Type Parameters Estimates SE p
15 Fixed INT 462.47 3.99 <.001
Preparation 27.27 6.63 <.001
Gender 2.26 2.29 .325
Preparation*Gender 2.39 3.13 .447
Extra lessons 12.76 1.66 <.001
Preparation*Extra lessons 0.18 2.14 .933
Self-confidence 39.24 1.40 <.001
Preparation*Self-confidence 9.40 2.19 <.001
Valuing math 4.79 1.30 <.001
Random τ00 1086.90 32.97 <.001
Gender 70.54 8.40 .006
Extra lessons 43.58 6.60 <.001
Self-confidence 36.88 6.07 .003
σ² 1667.27 40.83
16 Fixed INT 473.05 32.37 <.001
Ready_number 13.23 18.58 .477
Ready_algebra 3.05 9.32 .744
Ready_measurement 8.11 8.42 .337
Ready_geometry 21.99 11.49 .056
Ready_data 13.85 8.30 .096
Gender 5.52 9.47 .560
Ready_number*Gender 7.04 6.06 .247
Ready_algebra*Gender 1.49 3.67 .685
Ready_measurement*Gender 3.73 3.92 .343
Ready_geometry*Gender 2.96 5.84 .613
Ready_data*Gender 1.55 3.57 .664
Extra lessons 0.34 5.95 .955
Ready_number*Extra lessons 0.27 4.27 .951
Ready_algebra*Extra lessons 3.40 2.82 .228
Ready_measurement*Extra lessons 1.21 3.90 .756
Ready_geometry*Extra lessons 6.65 4.11 .107
Ready_data*Extra lessons 1.42 2.41 .555
Self-confidence 32.74 8.89 <.001
Ready_number*Self-confidence 2.77 5.43 .610
Ready_algebra*Self-confidence 0.19 3.19 .954
Ready_measurement*Self-confidence 3.36 2.98 .261
Ready_geometry*Self-confidence 4.49 3.90 .251
Ready_data*Self-confidence 6.52 3.18 .041
Home resources 4.79 1.30 <.001
Random τ00 1199.81 34.64 <.001
Gender 75.20 8.67 .004
Extra lessons 41.71 6.46 <.001
Self-confidence 54.66 7.39 <.001
σ² 1665.68 40.81
17 Fixed INT 491.98 6.09 <.001
Professional development 6.48 1.67 <.001
Gender 0.07 3.56 .983
Professional development*Gender 0.45 0.98 .648
Extra lessons 13.03 2.03 <.001
Professional development*Extra lessons 0.06 0.61 .917
Self-confidence 30.93 1.70 <.001
Professional development*Self-confidence 1.64 0.51 .002
Valuing math 4.66 1.30 .001
Random τ00 1103.66 33.22 <.001
Gender 84.86 9.21 .004
Extra lessons 46.81 6.84 <.001
Self-confidence 48.12 6.94 .001
σ² 1663.44 40.79

When comparing the proportion of variance accounted for by Models 15-17 with that of the foundational level-1 model (Model 9), Model 15 appears to be the most efficient (see Table 44). For example, whereas the inclusion of preparation to teach math content in Model 15 resulted in a 12% reduction in the between-school variance to be explained, the addition of ready to teach math content in Model 16 resulted in a reduction of only 3%. No improvement in the within-school variance was noted with these models.
Table 44. Comparison of R² between Level-2 Teacher Background Models and the Foundational Level-1 Model for Canada

Compared Models τ00 σ²
15 vs. 9 0.12 0.00
16 vs. 9 0.03 0.00
17 vs. 9 0.11 0.00

When all the teacher-related variables (i.e., preparation to teach, ready to teach math topics, and math-related professional development) were included in Model 18 to predict math achievement, two statistically significant cross-level interaction effects were produced (see Table 45). Specifically, preparation to teach math content was found to interact with student self-confidence in learning math (γ = 7.83, SE = 2.28, p < .001), and math-related professional development was found to interact with student self-confidence in learning math (γ = 1.10, SE = .53, p = .038). Also, in this model, all the random effects were statistically significant, meaning that a considerable amount of variance remained to be explained within and between schools.

Table 45. Parameter Estimates for the Combined Teacher Background Model for Canada

Model Type Parameters Estimates SE p
18 Fixed INT 477.11 30.46 <.001
Preparation 20.95 6.49 .002
Professional development 5.22 1.74 .003
Ready_number 7.71 17.98 .668
Ready_algebra 3.11 9.21 .736
Ready_measurement 8.89 8.22 .281
Ready_geometry 13.32 11.95 .266
Ready_data 9.29 8.60 .281
Gender 5.24 9.50 .581
Preparation*Gender 1.90 3.68 .606
Professional development*Gender 0.26 1.14 .822
Ready_number*Gender 6.73 6.22 .281
Ready_algebra*Gender 1.61 3.84 .674
Ready_measurement*Gender 3.80 4.10 .356
Ready_geometry*Gender 2.55 5.97 .669
Ready_data*Gender 1.22 3.71 .742
Extra lessons 0.96 5.83 .870
Preparation*Extra lessons 1.18 2.34 .614
Professional development*Extra lessons 0.41 0.63 .511
Ready_number*Extra lessons 0.81 4.33 .852
Ready_algebra*Extra lessons 3.91 2.83 .168
Ready_measurement*Extra lessons 0.56 3.96 .888
Ready_geometry*Extra lessons 6.80 4.25 .110
Ready_data*Extra lessons 2.02 2.48 .417
Self-confidence 32.59 8.04 <.001
Preparation*Self-confidence 7.83 2.28 .001
Professional development*Self-confidence 1.10 0.53 .038
Ready_number*Self-confidence 0.64 5.13 .901
Ready_algebra*Self-confidence 0.39 3.05 .899
Ready_measurement*Self-confidence 3.15 3.01 .298
Ready_geometry*Self-confidence 1.56 3.95 .693
Ready_data*Self-confidence 5.24 2.99 .080
Valuing math 4.75 1.30 <.001
Random τ00 978.58 31.28 <.001
Gender 81.04 9.00 .003
Extra lessons 43.56 6.60 <.001
Self-confidence 29.34 5.42 .020
σ² 1666.44 40.82

As evident in Table 46, Model 18 appeared to be more efficient than Models 9 and 15-17 in terms of the amount of between-school variance accounted for. Specifically, an increase of 10% to 21% in the between-school variance explained was likely to result when using Model 18 as opposed to Models 9, 15, 16, or 17.

Table 46. Comparison of R² between Model 18 and Previously Constructed Models 9 and 15-17

Compared Models τ00 σ²
18 vs. 9 0.21 0.00
18 vs. 15 0.10 0.00
18 vs. 16 0.18 0.00
18 vs. 17 0.11 0.00

As shown in Figure 33, the interaction between preparation to teach math content and student self-confidence in learning math suggests that students' math achievement was positively related to their self-confidence in learning math. That is, the more self-confidence students expressed in learning math, the better they performed in math. And,
this relationship held for all students in Canada, regardless of whether their teachers reported being prepared to teach math content. As expected, in comparing the two groups of students, the group whose teachers were prepared to teach consistently achieved higher math scores than the group whose teachers were not prepared to teach. The difference in math achievement between the two groups was large when students expressed low self-confidence in learning math and narrowed as their self-confidence in learning math increased.

Figure 33. Interaction between Teacher-Reported Preparation to Teach Math and Student Self-confidence in Learning Math for Canada

Figure 34 illustrates the interaction between student self-confidence in learning math and math-related professional development. Surprisingly, math achievement was found to be inversely associated with math-related professional development. It seems that teachers' participation in more math-related professional development programs did
not result in higher math performance for their students, regardless of how self-confident those students were in learning math. It is important to note, however, that in comparing the three groups of students, those with a higher level of self-confidence in learning math consistently outperformed their peers who reported a lower level of self-confidence in learning math. The differences in their math achievement appeared to be largest when their teachers had participated in five professional development programs and smallest when their teachers had participated in none.

Figure 34. Interaction between Types of Math-Related Professional Development and Student Self-confidence in Learning Math for Canada
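Models 15-18 are two-level HLMs in which level-2 teacher variables moderate both the school intercept and the level-1 slope for self-confidence. As a rough illustration of that structure only (this is not the study's estimation procedure; the variable names and coefficient values are invented, loosely echoing the fixed effects in Table 43), the data-generating process behind a random-intercept, random-slope model with one cross-level interaction can be simulated as:

```python
# Two-level structure (illustrative, assumed values):
#   level 1: math_ij = b0_j + b1_j * selfconf_ij + e_ij
#   level 2: b0_j = g00 + g01 * preparation_j + u0_j
#            b1_j = g10 + g11 * preparation_j + u1_j   <- cross-level interaction
import random

random.seed(0)
g00, g01, g10, g11 = 462.0, 27.0, 39.0, 9.4  # loosely echoing Table 43, Model 15

records = []
for school in range(50):
    preparation = random.choice([0, 1])   # teacher prepared to teach math (level 2)
    u0 = random.gauss(0, 33.0)            # random intercept deviation
    u1 = random.gauss(0, 6.0)             # random slope deviation for self-confidence
    b0 = g00 + g01 * preparation + u0
    b1 = g10 + g11 * preparation + u1
    for _ in range(20):
        selfconf = random.choice([0, 1, 2])   # low / medium / high (level 1)
        e = random.gauss(0, 41.0)             # level-1 residual
        records.append((school, preparation, selfconf, b0 + b1 * selfconf + e))

print(len(records))  # 1000 student records nested in 50 schools
```

The g11 term is what Table 43 reports as Preparation*Self-confidence: the self-confidence slope is steeper in schools whose teachers report being prepared to teach.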
Research Question 5

To what extent are school-related variables (i.e., class size, school resources for math instruction, and teacher perception of math instructional limitations due to student factors) associated with TIMSS 2003 eighth-grade math scores in each country?

Table 47 provides a summary of the results for Models 19-21, where school-related variables (i.e., class size, school resources for math instruction, and teacher perception of math instructional limitations due to student factors) were separately included in the models to predict math achievement. In these models, no statistically significant cross-level interaction effects were detected. However, there were two statistically significant level-2 main effects: class size for math instruction in Model 19 (γ = 15.69, SE = 6.24, p = .013) and teacher perception of math instructional limitations due to student factors in Model 20 (γ = 17.84, SE = 4.83, p < .001). Also, in these models, all of the random effects were statistically significant, meaning that a considerable amount of variance remained to be explained within and between schools.

Table 47. Parameter Estimates for School Background Models for Canada

Model Type Parameters Estimates SE p
19 Fixed INT 460.43 5.06 <.001
Class size 15.69 6.24 .013
Gender 3.21 3.27 .328
Class size*Gender 2.34 3.19 .463
Extra lessons 13.92 1.92 <.001
Class size*Extra lessons 1.33 1.78 .457
Self-confidence 37.72 2.12 <.001
Class size*Self-confidence 2.40 2.18 .272
Valuing math 4.78 1.30 <.001
Random τ00 1180.10 34.35 <.001
Gender 64.75 8.05 .006
Extra lessons 45.26 6.73 <.001
Self-confidence 57.88 7.61 <.001
σ² 1667.70 40.84
20 Fixed INT 481.84 4.12 <.001
Instructional limitation 17.84 4.83 <.001
Gender 0.33 1.91 .864
Instructional limitation*Gender 1.92 2.51 .444
Extra lessons 12.76 1.55 <.001
Instructional limitation*Extra lessons 0.03 1.66 .988
Self-confidence 35.36 1.35 <.001
Instructional limitation*Self-confidence 0.73 1.67 .663
Valuing math 4.83 1.30 <.001
Random τ00 1105.53 33.25 <.001
Gender 65.91 8.12 .006
Extra lessons 45.63 6.75 <.001
Self-confidence 60.68 7.79 <.001
σ² 1666.84 40.83
21 Fixed INT 459.82 7.73 <.001
School resources 9.36 5.79 .107
Gender 2.94 4.38 .502
School resources*Gender 3.30 2.78 .237
Extra lessons 16.01 3.80 <.001
School resources*Extra lessons 2.37 2.42 .329
Self-confidence 39.76 2.74 <.001
School resources*Self-confidence 2.98 1.88 .114
Valuing math 4.79 1.31 <.001
Random τ00 1243.31 35.26 <.001
Gender 61.60 7.85 .007
Extra lessons 43.69 6.61 <.001
Self-confidence 52.70 7.26 <.001
σ² 1669.78 40.86

In comparing Models 19-21 with Model 9 in terms of the proportion of variance accounted for, Model 20 worked best (see Table 48). Specifically, the use of Model 20 as opposed to Model 9 increased the amount of between-school variance accounted for by 11%, whereas the use of Model 19 as opposed to Model 9 accounted for only 5% more of the between-school variance.

Table 48. Comparison of R² between Level-2 School Background Models and the Foundational Level-1 Model for Canada

Compared Models τ00 σ²
19 vs. 9 0.05 0.00
20 vs. 9 0.11 0.00
21 vs. 9 0.01 0.00
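The R² values in Tables 44, 46, 48, and 50 are pseudo-R² statistics: the proportional reduction in a variance component relative to a baseline model. A minimal sketch of that computation follows; the baseline between-school variance is an assumed value chosen for illustration (Model 9's τ00 is not reported in this excerpt), while 1086.90 is Model 15's τ00 from Table 43.

```python
# Pseudo-R^2 as proportional reduction in a variance component:
#   R^2 = (var_baseline - var_conditional) / var_baseline

def pseudo_r2(var_baseline: float, var_conditional: float) -> float:
    """Proportion of a variance component explained relative to a baseline model."""
    return (var_baseline - var_conditional) / var_baseline

tau00_model9 = 1235.0    # ASSUMED baseline between-school variance (illustrative)
tau00_model15 = 1086.90  # between-school variance after adding preparation (Table 43)

print(round(pseudo_r2(tau00_model9, tau00_model15), 2))  # ~0.12, as in Table 44
```

The same function applied to the within-school components (σ²) yields the 0.00 entries in the σ² column, since the level-2 predictors leave the level-1 variance essentially unchanged.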
Unlike the earlier combined models, Model 22, which included all of the school background-related predictors, did not produce any statistically significant cross-level interaction effects. There were, however, two statistically significant level-2 main effects: class size for math instruction (γ = 13.90, SE = 5.64, p = .015) and teacher perception of math instructional limitations due to student factors (γ = 17.02, SE = 5.18, p = .002). These results suggest that, after controlling for all other level-1 and level-2 variables in the model, students in schools with larger class sizes tended to perform better in math than their peers in schools with smaller class sizes. Also, students tended to fare worse in math in schools where teachers perceived more instructional limitations due to student factors than in schools where teachers perceived none or few such limitations. Specifically, for every unit increase in class size, students were expected to improve their math scores by 13.90 points, and for every unit increase in teacher perception of instructional limitations due to student factors, students were expected to lower their math scores by 17.02 points, after controlling for other level-1 and level-2 variables in the model.

Again, in Model 22, all the random effects were statistically significant, indicating that the amounts of within- and between-school variance that remained to be explained were still significant.

Table 49. Parameter Estimates for the Combined School Background Model for Canada

Model Type Parameters Estimates SE p
22 Fixed INT 460.40 9.16 <.001
Instructional limitation 17.02 5.18 .002
Class size 13.90 5.64 .015
School resources 7.75 5.24 .141
Gender 2.17 5.66 .701
Instructional limitation*Gender 1.78 2.55 .485
Class size*Gender 2.81 3.14 .372
School resources*Gender 3.67 2.74 .182
Extra lessons 16.55 4.11 <.001
Instructional limitation*Extra lessons 0.14 1.63 .931
Class size*Extra lessons 0.91 1.82 .619
School resources*Extra lessons 2.25 2.46 .362
Self-confidence 40.76 3.67 <.001
Instructional limitation*Self-confidence 0.61 1.79 .735
Class size*Self-confidence 2.12 2.21 .339
School resources*Self-confidence 2.77 1.88 .141
Valuing math 4.88 1.29 <.001
Random τ00 1042.32 32.28 <.001
Gender 62.62 7.91 .007
Extra lessons 45.34 6.73 <.001
Self-confidence 57.70 7.60 <.001
σ² 1667.59 40.84

As shown in Table 50, Model 22 appears to be more efficient than Models 9 and 19-21. Although the amount of explained variance within schools in Model 22 did not change compared to these models (pseudo R² = 0), the amount of explained variance between schools in Model 22 increased by 6% (compared to Model 20) to 16% (compared to Models 9 and 21).

Table 50. Comparison of R² between Model 22 and Previously Constructed Models 9 and 19-21 for Canada

Compared Models τ00 σ²
22 vs. 9 0.16 0.00
22 vs. 19 0.12 0.00
22 vs. 20 0.06 0.00
22 vs. 21 0.16 0.00

Final Model

To identify the most efficient and parsimonious model for predicting eighth-grade math achievement in Canada, Model 23 was built by adding all the statistically significant level-2 predictors from the earlier combined models to Model 9; it was then compared with the three combined models, Models 14, 18, and 22.
As can be seen from Table 51, Model 23 produced six statistically significant cross-level interaction effects: (1) average math instructional hours per year by gender (γ = .09, SE = .04, p = .027), (2) opportunity to learn data by gender (γ = .14, SE = .07, p = .042), (3) opportunity to learn algebra by extra math lessons (γ = .10, SE = .04, p = .021), (4) opportunity to learn geometry by extra math lessons (γ = .14, SE = .06, p = .018), (5) opportunity to learn data by self-confidence in learning math (γ = .08, SE = .04, p = .022), and (6) preparation to teach math content by self-confidence in learning math (γ = 5.85, SE = 2.20, p = .009). In addition, there were four significant level-2 main effects: (1) teacher perception of math instructional limitations due to student factors (γ = 14.68, SE = 4.02, p < .001), (2) opportunity to learn data (γ = .26, SE = .11, p = .018), (3) math-related professional development (γ = 3.52, SE = 1.66, p = .035), and (4) average math instructional hours per year (γ = .21, SE = .07, p = .005). The only level-1 variable with a significant main effect was student self-confidence in learning math (γ = 31.02, SE = 11.46, p = .008). Also, in this model, all of the random effects were statistically significant.

Table 51. Parameter Estimates for Full Model for Canada

Model Type Parameters Estimates SE p
23 Fixed INT 446.85 28.36 <.001
Instructional limitation 14.68 4.02 .001
Class size 4.27 4.98 .392
Opportunity_Algebra 0.11 0.10 .246
Opportunity_Data 0.26 0.11 .018
Opportunity_Geometry 0.00 0.14 .986
Opportunity_Measurement 0.11 0.17 .511
Opportunity_number 0.53 0.25 .037
Preparation 16.89 6.08 .006
Professional development 3.52 1.66 .035
Instructional hours 0.21 0.07 .005
Gender 16.17 19.42 .406
Instructional limitation*Gender 0.84 2.31 .716
Class size*Gender 0.13 3.55 .971
Opportunity_algebra*Gender 0.01 0.06 .848
Opportunity_data*Gender 0.14 0.07 .042
Opportunity_geometry*Gender 0.01 0.08 .949
Opportunity_measurement*Gender 0.05 0.09 .629
Opportunity_number*Gender 0.20 0.19 .290
Preparation*Gender 0.68 3.60 .851
Professional development*Gender 0.48 0.93 .602
Instructional hours*Gender 0.09 0.04 .027
Extra lessons 3.20 15.40 .836
Instructional limitation*Extra lessons 0.04 1.57 .978
Class size*Extra lessons 1.70 1.91 .376
Opportunity_algebra*Extra lessons 0.10 0.04 .021
Opportunity_data*Extra lessons 0.05 0.05 .295
Opportunity_geometry*Extra lessons 0.14 0.06 .018
Opportunity_measurement*Extra lessons 0.07 0.05 .179
Opportunity_number*Extra lessons 0.15 0.15 .301
Preparation*Extra lessons 1.39 2.33 .552
Professional development*Extra lessons 0.41 0.66 .537
Instructional hours*Extra lessons 0.00 0.02 .993
Self-confidence 31.02 11.46 .008
Instructional limitation*Self-confidence 0.27 1.48 .856
Class size*Self-confidence 0.64 2.15 .766
Opportunity_algebra*Self-confidence 0.04 0.04 .302
Opportunity_data*Self-confidence 0.08 0.04 .022
Opportunity_geometry*Self-confidence 0.06 0.04 .168
Opportunity_measurement*Self-confidence 0.10 0.05 .065
Opportunity_number*Self-confidence 0.07 0.10 .502
Preparation*Self-confidence 5.85 2.20 .009
Professional development*Self-confidence 0.31 0.60 .607
Instructional hours*Self-confidence 0.03 0.03 .291
Valuing math 4.76 1.31 .001
Random τ00 710.83 26.66 <.001
Gender 76.07 8.72 .013
Extra lessons 40.05 6.33 <.001
Self-confidence 24.51 4.95 .072
σ² 1661.06 40.76

As evident in Table 52, Model 23 appears to be the most efficient model for Canada because it accounted for the largest amount of variance within and between schools. Specifically, Model 23 accounted for 18% more of the between-school
variance when compared to Model 14, 27% more when compared to Model 18, and 32% more when compared to Model 22. Therefore, for Canada, Model 23 serves as the best model for predicting math achievement.

Table 52. Comparison of R² between Model 23 and Previously Constructed Models 14, 18, and 22 for Canada

Compared Models τ00 σ²
23 vs. 14 0.18 0.00
23 vs. 18 0.27 0.00
23 vs. 22 0.32 0.00

Figure 35 visually displays the nature of the cross-level interaction between average math instructional hours per year and gender. The results suggested an inverse relationship between average math instructional hours per year and math achievement, and that this relationship differed across female and male groups. Noticeably, regardless of average math instructional hours per year, female students appeared to outperform male students in math achievement. However, with low average math instructional hours per year, there was only a small gap in math achievement between female and male students. As the average math instructional hours per year increased, the math achievement gap between female and male students became more noticeable.
Figure 35. Interaction between Average Math Instructional Hours per Year and Gender for Canada

As shown in Figure 36, the interaction between preparation to teach math content and student self-confidence in learning math suggests that students' math achievement was positively related to their self-confidence in learning math. That is, the more self-confidence students expressed in learning math, the better they performed in math. This relationship held for all students in Canada, regardless of whether their teachers reported being prepared to teach math content. As expected, in comparing the two groups of students, the group whose teachers were prepared to teach consistently achieved higher math scores than the group whose teachers were not prepared to teach. The difference in math achievement between the two groups was large when students expressed low self-confidence in learning math and narrowed as their self-confidence in learning math increased.
Figure 36. Interaction between Teacher-Reported Preparation to Teach Math Content and Student Self-confidence in Learning Math for Canada

The nature of the cross-level interaction between opportunity to learn data and gender is presented in Figure 37. The data suggested that student math achievement was inversely related to opportunity to learn data and that this relationship differed significantly across female and male groups. Notably, regardless of opportunity to learn data, female students appeared to perform better in math than male students. However, with little opportunity to learn data, the achievement gap between female and male students seemed narrow. As the opportunity to learn data increased, the math achievement gap between female and male students became statistically significant.
Figure 37. Interaction between Opportunity to Learn Data and Gender for Canada

Figure 38 presents the modeled mean math achievement based upon student self-confidence in learning math and opportunity to learn data. Overall, increases in opportunity to learn data were associated with decreases in student math scores, regardless of the level of student self-confidence in learning math. However, in comparing the three groups of students, those who reported a high level of self-confidence in learning math consistently outperformed their peers who reported a low or medium level of self-confidence in learning math. The differences in math achievement among these three groups of students, however, were small when there was little opportunity to learn data and became slightly larger as the opportunity to learn data increased.
Figure 38. Interaction between Opportunity to Learn Data and Self-confidence in Learning Math for Canada

The nature of the interaction between extra math lessons and opportunity to learn geometry is displayed in Figure 39. The results suggest that for students who reported never taking extra math lessons, increases in the opportunity to learn geometry were associated with slightly higher math scores. However, as the frequency of extra math lessons increased, the relationship rapidly became an inverse association, and an increase in the opportunity to learn geometry was associated with lower math scores. For example, students who reported taking extra math lessons every day tended to score lowest in math when the opportunity to learn geometry was highest. It was also observed that when the opportunity to learn geometry was low, the achievement gap across student groups was small. As the opportunity to learn geometry increased, the achievement gaps became significantly larger.
Figure 39. Interaction between Opportunity to Learn Geometry and Extra Math Lessons for Canada

Finally, the data in Figure 40 depict the interaction between extra math lessons and opportunity to learn algebra. It appeared that when there was little opportunity to learn algebra, students tended to score similarly low in math, regardless of how frequently they took extra math lessons. However, the achievement gaps among students with different levels of extra math lessons grew rapidly as the opportunity to learn algebra increased. Specifically, students who reported taking extra math lessons every day tended to achieve higher math scores than their peers who reported taking extra math lessons only sometimes or once or twice a week. As for the students who reported never taking extra math lessons, their math achievement seemed to decrease slightly when they had more opportunity to learn algebra.
Figure 40. Interaction between Opportunity to Learn Algebra and Extra Math Lessons for Canada
Results for Egypt

Evaluation of Missing Data

As a result of the listwise deletion method, the sample size for Egypt was reduced from 7,095 students and 217 schools to 1,876 students and 69 schools. This means only 26.44% of the original sample had complete data on all variables of interest in this study. To evaluate the extent to which the data for Egypt were missing completely at random, the missingness indicators of the 19 level-2 variables were correlated. Results of this analysis suggested non-randomness of the missing data, with correlation coefficients ranging from .38 to .97 (n = 217, p < .001), indicating modest to strong positive relationships among the missingness indicators of the variables. In addition, when missingness was correlated with values of the variables themselves as well as values of other variables, only marginal correlations were observed (r = −.26 to .14, n = 217, p < .001). In summary, the missing data mechanism for Egypt was not missing completely at random.

Univariate Analysis

A descriptive examination of the level-1 variables (i.e., overall math achievement, gender, self-confidence in learning math, valuing of math, time on math homework, extra math lessons, and home resources for learning math) was conducted using SAS 9.1.3. Of the complete sample of 1,876 eighth-grade students, 1,083 (57.73%) were male and 793 (42.27%) were female. On average, the weighted overall math achievement for Egyptian students was 416.52 (SD = 84.83), with a lowest score of 188.57 and a highest score of 714.09 (see Table 53).

With regard to level-1 predictor variables, it appeared that, on average, eighth-grade students in Egypt had moderate support at home in terms of resources for
learning math (M = 1.94, SD = .61), were above the medium level of self-confidence in learning math (M = 1.50, SD = .62) and valuing of math (M = 1.79, SD = .61), spent a modest amount of time on math homework (M = .91, SD = .62), and took extra math lessons about one to two times a week (M = 1.70, SD = 1.02) (see Table 53).

Table 53. Weighted Descriptive Statistics for Level-1 Variables for Egypt (N = 1,876)

Variable M SD Min Max
Overall math achievement 416.52 84.83 188.57 714.09
Self-confidence in learning math 1.50 0.62 0 2
Valuing of math 1.79 0.46 0 2
Time on math homework 0.91 0.62 0 2
Extra math lessons 1.70 1.02 0 3
Home resources for learning math 1.94 0.61 0 3
Note: When weights were used to compute means in SAS, skewness and kurtosis were not produced.

In terms of the distributions of the level-1 variables, the unweighted descriptive results in Table 54 suggested that all variables except valuing of math approximated normality, with skewness and kurtosis values within the range of −1.00 to 1.00.

Table 54. Unweighted Descriptive Statistics for Level-1 Variables for Egypt (N = 1,876)

Variable M SD Min Max Skewness Kurtosis
Overall math achievement 458.34 90.44 188.57 714.09 0.09 0.49
Self-confidence in learning math 1.58 0.60 0 2 1.11 0.19
Valuing of math 1.80 0.45 0 2 2.14 3.88
Time on math homework 0.92 0.63 0 2 0.06 0.52
Extra math lessons 1.59 1.01 0 3 0.17 1.06
Home resources for learning math 2.26 0.68 0 3 0.53 0.22

Similarly, a descriptive analysis was conducted on the 19 predictor variables at the school level. Although it appears from Table 55 that, on average, Egyptian students had a moderate to high percentage of opportunity to learn the math content domains (61.02 for data to 99.13 for number), the range of their minimum opportunity to learn math subjects was quite large, from 16.67 for algebra to 90 for number. Interestingly, in this sample,
100% of math teachers reported being prepared to teach math content. On average, these teachers participated in fewer than half of the indicated math-related professional development programs (M = 2.28, SD = 1.68) and reported a high level of readiness to teach (M = 1.41, SD = .52 for data to M = 1.96, SD = .21 for number). The data also suggested that in more than half of the lessons, students were given activities related to math instructional practice (M = 2.03, SD = .19) and math content (M = 2.03, SD = .13). On average, a moderate amount of homework was assigned to the students (M = .96, SD = .61). Finally, Egyptian schools tended to have medium class sizes, between 33 and 40 students (M = 1.94, SD = .92), and teachers' perception of instructional limitations due to student factors was low (M = .23, SD = .46). On average, the availability of school resources for math instruction was above medium (M = 1.26, SD = .76). Noticeably, across the 69 schools, the average math instructional hours per year varied greatly, ranging from 22.5 to 174.6, with a mean of 102.06 (SD = 44.18).

Also, as shown in Table 55, nine of the 19 level-2 predictor variables appeared to approximate normal distributions, with skewness and kurtosis values within the normality approximation range of −1.00 to 1.00. The 10 variables that appeared to depart from normality included opportunity to learn number, algebra, measurement, and geometry; preparation to teach; ready to teach number, algebra, measurement, and geometry; and teacher perception of math instructional limitations due to student factors.
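The normality screen applied above, flagging a variable when its skewness or excess kurtosis falls outside ±1.00, can be sketched as follows. The sample data are invented for illustration; they are not the study's values.

```python
import math

def moments(xs):
    """Population skewness and excess kurtosis of a sequence of numbers."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    z = [(x - m) / s for x in xs]
    skew = sum(v ** 3 for v in z) / n
    kurt = sum(v ** 4 for v in z) / n - 3.0  # excess kurtosis (normal -> 0)
    return skew, kurt

def approximates_normal(xs, bound=1.0):
    """Screen used in the text: both |skewness| and |kurtosis| within +/-1.00."""
    skew, kurt = moments(xs)
    return abs(skew) <= bound and abs(kurt) <= bound

# Invented example: a roughly symmetric variable vs. a heavily skewed one
# (most responses piled on the top category, as with the ready-to-teach items).
symmetric = [0, 1, 1, 2, 2, 2, 3, 3, 4]
skewed = [2] * 60 + [1] * 5 + [0] * 2

print(approximates_normal(symmetric), approximates_normal(skewed))  # True False
```

Note that SAS reports sample (bias-corrected) skewness and kurtosis, so its values differ slightly from these population formulas for small N; the ±1.00 screening logic is the same.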
225 Table 55. Unweighted Descriptive Statistics for L evel2 Variables for Egypt (N =69) Variable M SD Min Max Skewness Kurtosis Opportunity to learn number 99.13 2.84 90 100 3.00 7.19 Opportunity to learn algebra 91.93 13.66 16.67 100 2.86 12.54 Opportunity to learn measurement 91.59 14.95 25 100 2.31 6.04 Opportunity to learn geometry 94.56 8.85 53.85 100 2.11 5.70 Opportunity to learn data 61.02 20.51 25 100 0.39 0.63 Amount of homework assignment 0.96 0.65 0 2 0.04 0.57 Instructional practicerelated activities in math lessons 2.03 0.19 1.70 2.50 0.13 0.65 Contentrelated activities in math lessons 2.03 0.13 1.72 2.33 0.22 0.01 Preparation to teach 1.00 0.00 1 1 Ready to teach number 1.96 0.21 1 2 4.58 19.52 Ready to teach algebra 1.94 0.24 1 2 3.87 13.34 Ready to teach measurement 1.84 0.41 0 2 2.55 6.26 Ready to teach geometry 1.93 0.26 1 2 3.37 9.65 Ready to teach data 1.41 0.52 0 2 0.07 1.30 Mathrelated professional development 2.28 1.68 0 5 0.16 1.09 Class size for math instruction 1.94 0.92 0 3 0.70 0.19 School resources for math instruction 1.26 0.76 0 2 0.48 1.11 Teacher perception of math instructional limitations due to student factors 0.23 0.46 0 2 1.76 2.25 Average math instructional hours per year 102.06 44.18 22.5 174.6 0.42 1.12 Bivariate Analysis The results of weighted bivariate correl ations among six level1 variables (i.e., gender, selfconfidence in learning math, va luing of math, time on math homework, extra math lessons, and home resources for learni ng math) are presented in Appendix I. It appeared from these results that level1 pred ictor variables were i ndependent from each other, with correlation coefficients ranging from .06 between gender and student selfconfidence in learning math to .33 between se lfconfidence in learni ng math and valuing
of math. It was interesting to note that extra math lessons tended to have a negative, albeit weak, relationship with all level-1 variables (r = -.01 to -.04).

At level-2, unweighted bivariate relationships were estimated for the 19 predictor variables. The correlation matrix for these variables can be found in Appendix K. Unlike level-1, correlation coefficients among level-2 variables had a wider range, from -.30 between class size and ready to teach measurement to .62 between ready to teach number and ready to teach measurement. As expected, correlation coefficients among variables measuring the same construct tended to be stronger than those among variables measuring different constructs. For example, the correlations ranged from .17 to .62 for the ready to teach variables. Interestingly, opportunity to learn number had a very weak correlation with opportunity to learn data (r = .07). Another interesting pattern was observed between class size and the other variables, where most of the correlation coefficients were negative in direction.

Evaluation of HLM Assumptions

In order to ensure the tenability of results produced by the multilevel models in this study, an evaluation of HLM assumptions was performed through visual analysis of both level-1 and level-2 random effects of Model 18. Model 18 was selected because the results of the HLM analysis suggested that it was the most efficient model for predicting math achievement in Egypt (see HLM Analysis for Egypt).

The data from Figure 41 suggested that the level-1 residuals approximated a normal distribution. In terms of variance, the scatter plot between level-1 residuals and predicted math achievement, as illustrated in Figure 42, suggested that there was evidence of homogeneity of level-1 variance.
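The visual diagnostics described above (a residual histogram and a residuals-against-predicted-values scatter) can be reproduced for any two-level model. The sketch below uses statsmodels' MixedLM on simulated two-level data, since the TIMSS extract itself is not reproduced here; all variable names, sample sizes, and parameter values are illustrative assumptions, not the dissertation's data or software.

```python
# Sketch of level-1 residual diagnostics for a two-level model,
# using simulated data (50 schools x 30 students).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_schools, n_per = 50, 30
school = np.repeat(np.arange(n_schools), n_per)
u = rng.normal(0, 20, n_schools)[school]            # school random intercepts
x = rng.normal(0, 1, n_schools * n_per)             # a level-1 predictor
score = 400 + 30 * x + u + rng.normal(0, 70, n_schools * n_per)
df = pd.DataFrame({"score": score, "x": x, "school": school})

model = smf.mixedlm("score ~ x", df, groups=df["school"]).fit()
resid = model.resid                                  # level-1 residuals
fitted = model.fittedvalues

# Normality: the histogram should look roughly bell-shaped.
# Homogeneity: the scatter should show no fan or funnel shape.
# import matplotlib.pyplot as plt
# plt.hist(resid, bins=30); plt.show()
# plt.scatter(fitted, resid, s=5); plt.show()
```

The plotting calls are left commented out so the sketch runs headless; uncommenting them produces the two diagnostic views described in the text.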
Figure 41. Histogram for Level-1 Residuals for Egypt

Figure 42. Level-1 Residuals by Predicted Math Achievement for Egypt

For the level-2 random effects, the empirical Bayes residuals for the intercepts and the predicted math scores were used to construct the graphs in Figures 43-44. As can be seen from Figures 43-44, the level-2 intercept residuals appeared to have a normal distribution and homogeneous variance.
Figure 43. Histogram for Level-2 Intercept Residuals for Egypt

Figure 44. Level-2 Intercept Residuals by Predicted Intercept for Egypt

HLM Analysis

Unconditional Model (Model 1)

The HLM analysis started with the unconditional model, in which no level-1 or level-2 predictors were included. The results of the unconditional model are presented in Table 56. For Egypt, the fixed effect for the intercept was 414.26 (SE = 6.38, p < .001). The amount of variability in math achievement was significantly different across schools in Egypt (τ00 = 2001.88, SE = 44.74, p < .001). Within schools, the amount
of unexplained variance was much larger than that between schools (σ² = 5,116.85, SE = 71.53). The computed intraclass correlation (ICC) of .28 indicated a modest level of natural clustering of students between schools in Egypt. In other words, approximately 28% of the total variance in math scores occurred between schools.

Table 56. Parameter Estimates for Unconditional Model for Egypt

  Model  Type    Parameter  Estimate    SE      p
  1      Fixed   INT          414.26    6.38   <.001
         Random  τ00         2001.88   44.74   <.001
                 σ²          5116.85   71.53
  ICC = .28

Research Question 1

To what extent are student background variables (i.e., gender, self-confidence in learning math, valuing of math, time on math homework, and tutoring in math) associated with TIMSS 2003 eighth-grade math scores in each country?

In order to answer this research question, first, each of the student background variables was entered separately into Model 1 to predict math achievement. Then, as a group, the variables that contributed significantly in Models 2-6 were included in Model 7 to predict math achievement. Finally, in order to evaluate model fit in terms of the proportion of variance accounted for, pseudo R² was computed for each current model against previously constructed models. Results of these models (Models 2-6) are presented in Table 57.

The data from Table 57 suggested that all of the fixed effects estimated by Models 2-6 were statistically significant, except for those of gender in Model 2 (γ = 2.86, SE = 6.51, p = .662) and time students spent on homework in Model 5 (γ = 6.24, SE = 3.79, p = .104). Interestingly, whereas self-confidence in learning math (Model 3) and valuing of
math (Model 4) appeared to have statistically significant, positive relationships with math achievement (γ = 32.66 and 27.62, SE = 3.71 and 6.57, p < .001 and .001, respectively), extra math lessons (Model 6) appeared to have a negative relationship with math achievement (γ = -6.45, SE = 1.81, p < .001). Surprisingly, of all the slope variances estimated in Models 2-6, only that of time students spent on homework was statistically significant (τ11 = 55.67, SE = 7.46, p = .028). This suggests that schools in Egypt tended to differ significantly in the relationship between time students spent on homework and math achievement, but did not differ in other relationships, such as those between math achievement and gender, self-confidence in learning math, student valuing of math, and extra math lessons.

An examination of pseudo R² across the five models (Models 2-6) suggested that adding individual predictors separately to the unconditional model (Model 1) resulted in a reduction of the between-school variance in three models (i.e., 26%, 13%, and 11% in Models 3, 5, and 6, respectively) and an increase of the between-school variance in two models (2% and 39% in Models 2 and 4, respectively). For the within-school variance, however, the amount of reduction was smaller, up to 8% in Model 3.

Table 57. Parameter Estimates for Models 2-6 (Level-1 Student Background) for Egypt

  Model  Effect  Parameter        Estimate     SE      p
  2      Fixed   INT               415.51      6.86   <.001
                 Gender              2.86      6.51    .662
         Random  τ00              2046.46     45.24   <.001
                 Gender              8.88      2.98   >.500
                 σ²               5118.14     71.54
         Pseudo R²: τ00 = -0.02, σ² = 0.00
Table 57 (continued). Parameter Estimates for Models 2-6 (Level-1 Student Background) for Egypt

  Model  Effect  Parameter        Estimate     SE      p
  3      Fixed   INT               364.83      7.54   <.001
                 Self-confidence    32.66      3.71   <.001
         Random  τ00              1481.05     38.48   <.001
                 Self-confidence   119.94     10.95    .193
                 σ²               4706.30     68.60
         Pseudo R²: τ00 = 0.26, σ² = 0.08
  4      Fixed   INT               365.05     13.52   <.001
                 Valuing math       27.62      6.57   <.001
         Random  τ00              2775.98     52.69   <.001
                 Valuing math      278.39     16.69    .164
                 σ²               4894.05     69.96
         Pseudo R²: τ00 = -0.39, σ² = 0.04
  5      Fixed   INT               408.51      6.66   <.001
                 Homework time       6.24      3.79    .104
         Random  τ00              1749.65     41.83   <.001
                 Homework time      55.67      7.46    .028
                 σ²               5083.99     71.30
         Pseudo R²: τ00 = 0.13, σ² = 0.01
  6      Fixed   INT               425.33      6.58   <.001
                 Extra lessons      -6.45      1.81    .001
         Random  τ00              1782.77     42.22   <.001
                 Extra lessons       3.05      1.75   >.500
                 σ²               5081.68     71.29
         Pseudo R²: τ00 = 0.11, σ² = 0.01

Note: Pseudo R² refers to the difference in the proportion of variance accounted for between the current models (Models 2-6) and the unconditional model (Model 1).

As a next step of model building, all of the student background variables (i.e., gender, self-confidence in learning math, valuing of math, time on math homework, and tutoring in math) were included in the combined model, Model 7, to predict math achievement. Notably, in the presence of the other level-1 variables in the model, all but
gender had statistically significant fixed effects. With a fixed effect of -5.90 (SE = 1.68, p < .001) for extra math lessons, it could be inferred that for each unit increase in extra math lessons (i.e., from 0 for never to 3 for daily), students were expected to lose 5.90 points in their math scores while controlling for the other predictors in the model. Similarly, with a fixed effect of 29.80 (SE = 3.36, p < .001) for self-confidence in learning math, it could be interpreted that for each unit increase in level of self-confidence in learning math (i.e., from 0 for low to 2 for high), students were expected to gain 29.80 points in their math scores while controlling for the other predictors in the model. Likewise, each unit change in student valuing of math was associated with an increase of 14.22 points (SE = 6.52, p = .033) in math achievement, and each unit change in time students spent on homework was associated with an increase of 7.77 points (SE = 3.26, p = .020), after adjusting for the other predictors in the model.

Interestingly, it was noted that none of the random effects in this model was statistically significant (p > .50). This indicates that the observed relationships between math achievement and student background variables did not seem to vary significantly across schools in Egypt.

Table 58. Parameter Estimates for Model 7 (Level-1 Student Background) for Egypt

  Model  Type    Parameter        Estimate     SE      p
  7      Fixed   INT               349.43     13.66   <.001
                 Gender              5.71      5.24    .280
                 Extra lessons      -5.90      1.68    .001
                 Self-confidence    29.80      3.36   <.001
                 Valuing math       14.22      6.52    .033
                 Homework time       7.77      3.26    .020
         Random  τ00              2887.49     53.74   >.500
                 Gender              4.80      2.19   >.500
                 Extra lessons       8.29      2.88   >.500
                 Self-confidence    93.49      9.67   >.500
                 Valuing math      333.02     18.25   >.500
                 Homework time      60.17      7.76   >.500
Table 58 (continued). Parameter Estimates for Model 7 (Level-1 Student Background) for Egypt

  Model  Type    Parameter    Estimate     SE
  7      Random  σ²           4500.18     67.08

An evaluation of model fit was also conducted between Model 7 and the previously constructed models, Models 2-6. As can be seen from Table 59, the inclusion of the student background variables in Model 7 resulted in a further reduction of the within-school variance (by 4% when compared with Model 3 and 12% when compared with Model 2). Unexpectedly, between schools, the amount of unexplained variance appeared to increase notably, by 4% relative to Model 4 and up to 95% relative to Model 3. In sum, Model 7 was more efficient than the earlier models in that it accounted for more variance in math achievement within schools in Egypt. However, it appeared to be less efficient than the previously constructed models in that it accounted for less variance in math achievement between schools in Egypt.

Table 59. Comparison of R² between Model 7 and Previously Constructed Models for Egypt

  Previous Model   τ00     σ²
  2               -0.41   0.12
  3               -0.95   0.04
  4               -0.04   0.08
  5               -0.65   0.11
  6               -0.62   0.11

Research Question 2

To what extent are home resources variables (i.e., availability of calculator, computer, and desk for student use) associated with TIMSS 2003 eighth-grade math scores in each country?

When the level-1 predictor home resources was added to the unconditional model to predict math achievement, a reduction of 5% in the within-school variance and an
increase of 1% in the between-school variance was noted (see Table 60). As a fixed effect, home resources was statistically significant (γ = 31.92, SE = 3.65, p < .001), meaning that each unit change in home resources (i.e., from 0 to 3) was associated with an increase of 31.92 points in student math achievement, without controlling for other variables in the model. As a random effect, home resources was not statistically significant (τ = 10.69, SE = 3.27, p = .354), which indicates that the relationship between home resources and math achievement did not vary significantly across schools in Egypt.

Table 60. Parameter Estimates for Level-1 Home Resources Model for Egypt

  Model  Effect  Parameter        Estimate     SE      p
  8      Fixed   INT               352.86      9.37   <.001
                 Home resources     31.92      3.65   <.001
         Random  τ00              2011.95     44.85   <.001
                 Home resources     10.69      3.27    .354
                 σ²               4853.60     69.67
         Pseudo R²: τ00 = -0.01, σ² = 0.05

Note: Pseudo R² refers to the difference in the proportion of variance between Model 8 and Model 1.

Given the findings obtained from Models 7 and 8, five out of six student-related variables were entered into the unconditional model to create Model 9. Gender was excluded from Model 9 because both its fixed and random effects were not statistically significant in Model 7. Also, in Model 9, all the slope variances were set to 0 because they were not statistically significant in the earlier models.

As can be seen from Table 61, all of the level-1 variables had statistically significant fixed effects. Of the five variables in the model, home resources appeared to have the strongest positive relationship with math achievement (γ = 28.83, SE = 3.74, p < .001). Following was student self-confidence in learning math, with a fixed effect of 26.94 (SE = 3.30, p < .001). Next were student valuing of math (γ = 15.94, SE = 8.00, p
= .046) and time students spent on homework (γ = 7.04, SE = 2.88, p = .015). As for extra math lessons, an inverse relationship was noted between this predictor and math achievement (γ = -5.64, SE = 1.75, p = .002). These results suggest that the more resources students had at home, the more self-confidence students expressed in learning math, the more highly students valued math, and the more time students spent on their homework, the higher the math scores they tended to achieve. In contrast, it appears that the more frequently students took extra math lessons, the poorer the math scores they tended to earn.

In terms of random effects, the between-school variance of 1,382.87 (SE = 37.19, p < .001) was statistically significant, suggesting that a considerable amount of variance between schools remained to be explained.

As compared to Models 7 and 8, Model 9 appeared more efficient because it accounted for a significantly higher amount of the explained variance between schools, up to 52% when compared to Model 7. In terms of the explained variance within schools, an increase of up to 8% was observed. As a result of these comparisons, Model 9 was selected as the foundational level-1 model for further examination of the relationships between level-2 predictors and math achievement.

Table 61. Parameter Estimates for Combined Level-1 Predictors Model for Egypt

  Model  Type    Parameter        Estimate     SE      p
  9      Fixed   INT               292.77     17.45   <.001
                 Extra lessons      -5.64      1.75    .002
                 Self-confidence    26.94      3.30   <.001
                 Valuing math       15.94      8.00    .046
                 Homework time       7.04      2.88    .015
                 Home resources     28.83      3.74   <.001
         Random  τ00              1382.87     37.19   <.001
                 σ²               4442.74     66.65
Table 61 (continued). Parameter Estimates for Combined Level-1 Predictors Model for Egypt

  Compared Model   Pseudo R² (τ00)   Pseudo R² (σ²)
  7                     0.52              0.01
  8                     0.31              0.08

Note: Pseudo R² refers to the difference in the proportion of variance between Model 9 and Models 7-8.

Research Question 3

To what extent are instructional variables (i.e., opportunity to learn, activities in math lessons, amount of homework assignment, and instructional time) associated with TIMSS 2003 eighth-grade math scores in each country?

In addressing this research question, a model-building strategy similar to that used for Research Question 1 was applied. That is, each of the level-2 instructional practice variables was first added to the foundational level-1 model (Model 9) to create Models 10-13. Then, as a group, the variables with significant fixed effects in Models 10-13 were to be included in the combined instructional practices model, Model 14. It is important to note that in these models there were no cross-level interactions between level-1 and level-2 predictors because, as described earlier for Model 9, all of the random slopes were set to 0. The results of Models 10-14 are presented in Tables 62-65.

As can be seen in Table 62, none of the level-2 predictors that were separately added in Models 10-13 was statistically significant. This means that, statistically, there was no evidence that these instructional practices-related variables contributed significantly to predicting math achievement across schools in Egypt. Interestingly, however, in the presence of these level-2 variables in the models, all the level-1 predictors were found statistically significant, and the sizes of their fixed effects were similar to those observed in Model 9.
Table 62. Parameter Estimates for Level-2 Instructional Practices Models for Egypt

  Model  Type    Parameter                  Estimate     SE      p
  10     Fixed   INT                         359.15    131.28    .009
                 Opportunity_algebra           0.28      0.34    .411
                 Opportunity_data              0.13      0.27    .623
                 Opportunity_geometry          0.43      0.64    .507
                 Opportunity_measurement       0.78      0.39    .051
                 Opportunity_number            0.78      1.38    .573
                 Extra lessons                -5.65      1.74    .002
                 Self-confidence              26.94      3.31   <.001
                 Valuing math                 15.82      7.88    .044
                 Homework time                 7.09      2.88    .014
                 Home resources               28.76      3.69   <.001
         Random  τ00                        1301.23     36.07   <.001
                 σ²                         4442.80     66.65
  11     Fixed   INT                         283.86     18.14   <.001
                 Homework assignment          11.20      6.27    .078
                 Extra lessons                -5.66      1.75    .002
                 Self-confidence              26.94      3.31   <.001
                 Valuing math                 15.86      7.96    .046
                 Homework time                 7.10      2.88    .014
                 Home resources               28.69      3.73   <.001
         Random  τ00                        1355.07     36.81   <.001
                 σ²                         4442.49     66.65
  12     Fixed   INT                         445.37     98.84   <.001
                 Content_activities           72.97     39.91    .072
                 Instruction_activities        2.72     29.49    .927
                 Extra lessons                -5.58      1.74    .002
                 Self-confidence              26.87      3.29   <.001
                 Valuing math                 16.01      7.99    .045
                 Homework time                 6.94      2.87    .016
                 Home resources               28.94      3.76   <.001
         Random  τ00                        1346.18     36.69   <.001
                 σ²                         4443.11     66.66
  13     Fixed   INT                         292.65     17.37   <.001
                 Instructional hours           0.03      0.14    .840
                 Extra lessons                -5.63      1.75    .002
                 Self-confidence              26.95      3.31   <.001
                 Valuing math                 15.95      7.99    .046
                 Homework time                 7.03      2.87    .015
                 Home resources               28.83      3.73   <.001
         Random  τ00                        1403.31     37.46   <.001
                 σ²                         4442.88     66.65
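The ICC and pseudo R² figures that drive the model comparisons throughout this chapter are simple functions of the reported variance components. A minimal sketch, checked against the estimates already reported in Tables 56 and 57:

```python
def icc(tau00, sigma2):
    """Intraclass correlation: share of total variance lying between schools."""
    return tau00 / (tau00 + sigma2)

def pseudo_r2(var_reference, var_current):
    """Proportional reduction in a variance component relative to a
    reference model; a negative value means the component grew."""
    return (var_reference - var_current) / var_reference

# Reported estimates for Egypt (Tables 56 and 57):
icc_egypt = icc(2001.88, 5116.85)               # ~0.28, the reported ICC
r2_between_m3 = pseudo_r2(2001.88, 1481.05)     # ~0.26: Model 3 vs. 1, between schools
r2_within_m3 = pseudo_r2(5116.85, 4706.30)      # ~0.08: Model 3 vs. 1, within schools
```

These reproduce the reported values (ICC = .28; 26% between-school and 8% within-school reduction for Model 3), confirming how the comparison tables were derived.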
In terms of model fit, in comparison with the foundational level-1 model (Model 9), Model 10 appeared to be the most efficient model because the amount of explained variance between schools increased by 6% (see Table 63). As for the within-school variance, no meaningful difference was observed between Models 10-13 and Model 9 (pseudo R² = 0.00).

Table 63. Comparison of R² between Level-2 Instructional Practice Models and Foundational Level-1 Model for Egypt

  Compared Model   τ00     σ²
  10 vs. 9         0.06   0.00
  11 vs. 9         0.02   0.00
  12 vs. 9         0.03   0.00
  13 vs. 9        -0.01   0.00

Similar to what was observed in Models 10-13, when all the level-2 instructional practice variables were used to predict math achievement, Model 14 did not produce any statistically significant level-2 fixed effects (see Table 64). These results suggest that, in Egypt, instructional practices-related predictors did not appear to contribute significantly to the statistical prediction of student math achievement. However, with statistically significant level-1 fixed effects and random effects, it could be inferred that extra math lessons, student self-confidence in learning math, student valuing of math, time students spent on homework, and home resources tended to be good predictors of math achievement, and that schools in Egypt varied significantly in their mean math performance.

Table 64. Parameter Estimates for the Combined Level-2 Instructional Practices Model for Egypt

  Model  Type    Parameter                  Estimate     SE      p
  14     Fixed   INT                         436.27    165.51    .011
                 Homework assignment           3.99      7.53    .598
                 Opportunity_algebra           0.24      0.38    .523
                 Opportunity_data              0.15      0.27    .571
Table 64 (continued). Parameter Estimates for the Combined Level-2 Instructional Practices Model for Egypt

  Model  Type    Parameter                  Estimate     SE      p
  14     Fixed   Opportunity_geometry          0.50      0.64    .439
                 Opportunity_measurement       0.78      0.43    .074
                 Opportunity_number            0.61      1.39    .664
                 Content_activities           68.38     43.76    .123
                 Instruction_activities       21.03     30.94    .499
                 Instructional hours           0.16      0.13    .219
                 Extra lessons                -5.61      1.75    .002
                 Self-confidence              26.89      3.31   <.001
                 Valuing math                 15.83      7.92    .045
                 Homework time                 7.02      2.86    .014
                 Home resources               28.79      3.75   <.001
         Random  τ00                        1294.75     35.98   <.001
                 σ²                         4443.16     66.66

As evident in Table 65, compared to the previously constructed models (Models 9-13), Model 14 appears to be more efficient because it accounted for more variance between schools. For example, an increase of 8% in the between-school variance accounted for was observed when using Model 14 instead of Model 13. However, there was one exception: changing from Model 10 to Model 14 resulted in no difference in the amount of explained variance within or between schools. Therefore, in consideration of the amount of explained variance between schools, Model 10 served as the most efficient and parsimonious instructional practices-related model for predicting math achievement in Egypt.

Table 65. Comparison of R² between Model 14 and Previously Constructed Models 9-13 for Egypt

  Compared Model   τ00    σ²
  14 vs. 9         0.06   0.00
  14 vs. 10        0.00   0.00
  14 vs. 11        0.04   0.00
  14 vs. 12        0.04   0.00
  14 vs. 13        0.08   0.00
Research Question 4

To what extent are teacher-related variables (i.e., preparation to teach, ready to teach, and professional development) associated with TIMSS 2003 eighth-grade math scores in each country?

Similarly, incremental model-building strategies were applied to examine the relationships among teacher-related variables (i.e., preparation to teach, ready to teach, and professional development) and math achievement. It is worth noting, however, that because the predictor preparation to teach math content had no variation in Egypt (i.e., 100% of math teachers in this sample reported being prepared to teach), Model 15 could not be estimated. Thus, only results of Models 16-18 are presented in Tables 66-68.

As shown in Table 66, of all the level-2 predictors, only math-related professional development, in Model 17, had a statistically significant fixed effect (γ = 8.32, SE = 3.12, p = .010). This means that each unit increase in math-related professional development was associated with an increase of 8.32 points in student math scores, after adjusting for other variables in the model. In terms of level-1 fixed effects and random effects, all of those in Models 16 and 17 were statistically significant. These results suggested that a significant amount of variance in math achievement remained unexplained between schools in Egypt.

When comparing the proportion of variance accounted for by Models 16 and 17 with that of the foundational level-1 model (Model 9), it appears that Model 17 was the most efficient, with the amount of between-school variance accounted for increased by 10% (see Table 66). No improvement in the within-school variance was noted with these models.
Table 66. Parameter Estimates for Teacher Background Models for Egypt

  Model  Type    Parameter                  Estimate     SE      p
  16     Fixed   INT                         356.35     53.41   <.001
                 Ready_number                 32.27     36.29    .378
                 Ready_algebra                21.98     23.94    .363
                 Ready_measurement            14.16     15.58    .367
                 Ready_geometry                1.38     11.71    .907
                 Ready_data                   15.11     12.62    .236
                 Extra lessons                -5.66      1.76    .002
                 Self-confidence              26.87      3.31   <.001
                 Valuing math                 15.99      7.94    .044
                 Homework time                 6.86      2.87    .017
                 Home resources               28.62      3.68   <.001
         Random  τ00                        1372.70     37.05   <.001
                 σ²                         4442.89     66.65
         Pseudo R²: τ00 = 0.01, σ² = 0.00
  17     Fixed   INT                         276.93     18.27   <.001
                 Professional development      8.32      3.12    .010
                 Extra lessons                -5.60      1.73    .002
                 Self-confidence              27.07      3.29   <.001
                 Valuing math                 16.02      7.95    .044
                 Homework time                 6.97      2.86    .015
                 Home resources               28.72      3.70   <.001
         Random  τ00                        1238.67     35.19   <.001
                 σ²                         4442.64     66.65
         Pseudo R²: τ00 = 0.10, σ² = 0.00

Note: Model 15 could not be computed because there was no variation in the level-2 predictor preparation to teach math content. Pseudo R² refers to the difference in the proportion of variance between Model 9 and Models 16-17.

In Model 18, where ready to teach math topics and math-related professional development were both included as level-2 predictors, only math-related professional development was found statistically significant (γ = 8.34, SE = 3.37, p = .016). This suggests that the more math-related professional development programs Egyptian teachers took, the higher the math scores their students tended to achieve. Specifically, with every unit increase in math-related professional development, student math scores were expected to increase by 8.34 points, after adjusting for other variables
in the model. Also, all the level-1 fixed effects and all the random effects were statistically significant, meaning that, in this model, extra math lessons, student self-confidence in learning math, student valuing of math, time students spent on homework, and home resources were good predictors of math achievement, and that there was a significant amount of variability in average math achievement across schools in Egypt.

Table 67. Parameter Estimates for the Combined Teacher Background Model for Egypt

  Model  Type    Parameter                  Estimate     SE      p
  18     Fixed   INT                         349.77     55.37   <.001
                 Professional development      8.34      3.37    .016
                 Ready_number                 29.70     36.92    .424
                 Ready_algebra                27.53     25.18    .279
                 Ready_measurement             9.98     16.64    .551
                 Ready_geometry                0.64     12.41    .960
                 Ready_data                   14.93     11.58    .203
                 Extra lessons                -5.64      1.75    .002
                 Self-confidence              26.99      3.31   <.001
                 Valuing math                 16.10      7.92    .042
                 Homework time                 6.76      2.86    .018
                 Home resources               28.52      3.66   <.001
         Random  τ00                        1224.62     34.99   <.001
                 σ²                         4442.82     66.65

As evident in Table 68, Model 18 appeared to be more efficient than Models 9, 16, and 17 in terms of the amount of variance accounted for between schools. Specifically, an increase of 1% to 11% in the between-school variance accounted for was likely to result when using Model 18 as opposed to Models 9, 16, or 17.

Table 68. Comparison of R² between Model 18 and Previously Constructed Models 9 and 16-17 for Egypt

  Compared Model   τ00    σ²
  18 vs. 9         0.11   0.00
  18 vs. 16        0.11   0.00
  18 vs. 17        0.01   0.00
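The unit-change interpretation used above can be made concrete. The sketch below combines the significant fixed effects from Table 67 into an expected-change calculation; it is an illustration of the interpretation, not the dissertation's software, and because the nonsignificant ready-to-teach terms and the intercept are omitted, it is a sketch of the slope interpretation rather than a full reproduction of Model 18.

```python
# Significant fixed effects from Model 18 (Table 67), in score points
# per one-unit change in the predictor, holding other predictors constant.
COEFS = {
    "professional_development": 8.34,   # points per extra PD program
    "extra_lessons": -5.64,             # more extra lessons -> lower scores
    "self_confidence": 26.99,
    "valuing_math": 16.10,
    "homework_time": 6.76,
    "home_resources": 28.52,
}

def expected_change(**deltas):
    """Expected change in math score for the given unit changes in
    predictors, all else held constant (a linear fixed-effects sketch)."""
    return sum(COEFS[name] * d for name, d in deltas.items())

# A teacher completing two more PD programs: 2 * 8.34 = 16.68 points.
change = expected_change(professional_development=2)
```

This mirrors the text's reading of Table 67: each additional professional development program corresponds to an expected 8.34-point gain, and the contributions of several predictors simply add under the model's linearity assumption.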
Research Question 5

To what extent are school-related variables (i.e., class size, school resources for math instruction, and teacher perception of math instructional limitations due to student factors) associated with TIMSS 2003 eighth-grade math scores in each country?

Table 69 provides a summary of the results for Models 19-21, where school-related variables (i.e., class size, school resources for math instruction, and teacher perception of math instructional limitations due to student factors) were separately included in the models to predict math achievement. In these models, no statistically significant level-2 main effects were detected, meaning these predictors did not appear to predict math achievement well in Egypt. Again, similar to what was observed in earlier models, all of the level-1 fixed and random effects were statistically significant.

Table 69. Parameter Estimates for School Background Models for Egypt

  Model  Type    Parameter                  Estimate     SE      p
  19     Fixed   INT                         309.31     22.68   <.001
                 Class size                    7.63      7.90    .338
                 Extra lessons                -5.62      1.75    .002
                 Self-confidence              27.03      3.32   <.001
                 Valuing math                 16.03      8.01    .045
                 Homework time                 7.00      2.89    .016
                 Home resources               28.80      3.73   <.001
         Random  τ00                        1355.58     36.82   <.001
                 σ²                         4443.74     66.66
  20     Fixed   INT                         291.41     17.84   <.001
                 Instructional limitation      5.84      7.40    .433
                 Extra lessons                -5.64      1.76    .002
                 Self-confidence              26.92      3.30   <.001
                 Valuing math                 15.90      7.99    .046
                 Homework time                 7.05      2.89    .015
                 Home resources               28.81      3.74   <.001
         Random  τ00                        1398.20     37.39   <.001
                 σ²                         4442.66     66.65
  21     Fixed   INT                         289.66     20.39   <.001
                 School resources              2.92      7.24    .687
Table 69 (continued). Parameter Estimates for School Background Models for Egypt

  Model  Type    Parameter                  Estimate     SE      p
  21     Fixed   Extra lessons                -5.64      1.75    .002
                 Self-confidence              26.93      3.30   <.001
                 Valuing math                 15.90      7.97    .046
                 Homework time                 7.07      2.87    .014
                 Home resources               28.81      3.73   <.001
         Random  τ00                        1402.07     37.44   <.001
                 σ²                         4442.63     66.65

In comparing Models 19-21 with Model 9 in terms of the proportion of variance accounted for, it appears that Model 19 was slightly more efficient (see Table 70). Specifically, the use of Model 19 as opposed to Model 9 increased the amount of between-school variance accounted for by 2%, whereas the use of Models 20 and 21 as opposed to Model 9 resulted in an increase of 1% in the unexplained between-school variance.

Table 70. Comparison of R² between Level-2 School Background Models and Foundational Level-1 Model for Egypt

  Compared Model   τ00     σ²
  19 vs. 9         0.02   0.00
  20 vs. 9        -0.01   0.00
  21 vs. 9        -0.01   0.00

As shown in Table 71, Model 22, with all of the school background-related predictors included to predict math achievement, did not produce any statistically significant level-2 main effects. These results suggest that, after controlling for all other level-1 and level-2 variables in the model, students in schools with larger class sizes did not perform statistically better in math than their peers in schools with smaller class sizes. Similarly, it seems that there was no statistically significant difference in math achievement between students in schools where teachers perceived more instructional limitations due to student factors and those in schools where teachers perceived none or few such limitations. Likewise, school resources did not show a
significant relationship with math achievement across schools in Egypt, after adjusting for other predictors in the model.

As expected, all the level-1 fixed effects and random effects in Model 22 were found statistically significant. This indicated that the amounts of within- and between-school variance that remained to be explained were still significant.

Table 71. Parameter Estimates for the Combined School Background Model for Egypt

  Model  Type    Parameter                  Estimate     SE      p
  22     Fixed   INT                         306.86     24.54   <.001
                 Instructional limitation      6.78      8.39    .422
                 Class size                    8.19      7.89    .304
                 School resources              1.97      7.46    .793
                 Extra lessons                -5.62      1.77    .002
                 Self-confidence              26.99      3.33   <.001
                 Valuing math                 15.97      7.97    .045
                 Homework time                 7.02      2.88    .015
                 Home resources               28.75      3.73   <.001
         Random  τ00                        1387.90     37.25   <.001
                 σ²                         4443.52     66.66

When comparing Model 22 against the earlier constructed models, it appears that this model was not the most efficient (see Table 72). Although the amount of explained variance between schools in Model 22 was better than in Models 20 and 21 and equal to that in Model 9, Model 22 accounted for 2% less variance between schools than Model 19. Thus, Model 19 served as the most efficient school-related model for predicting math achievement in Egypt.

Table 72. Comparison of R² between Model 22 and Previously Constructed Models 9 and 19-21 for Egypt

  Compared Model   τ00     σ²
  22 vs. 9         0.00   0.00
  22 vs. 19       -0.02   0.00
  22 vs. 20        0.01   0.00
  22 vs. 21        0.01   0.00
Final Model

With the intention of identifying the most efficient and parsimonious model for predicting eighth-grade math achievement in Egypt, Model 23 was built by adding all the statistically significant level-2 predictors from the earlier combined models to Model 9; it was then compared with the three combined models, Models 14, 18, and 22.

As can be seen from Table 73, all of the fixed and random effects estimated in Model 23 were statistically significant. Of the fixed effects, five showed positive relationships with math achievement and one showed a negative relationship. As an example, whereas a unit increase in home resources was associated with an improvement of 28.72 points in student math achievement, a unit change in extra math lessons was associated with a reduction in student math scores of 5.60 points, while controlling for other variables. As for math-related professional development, for every extra program Egyptian teachers took, their students' math scores were expected to increase by 8.32 points, while controlling for other variables in the model. Likewise, the more time students spent on homework, and the higher the levels of self-confidence in learning math and valuing of math that students expressed, the higher the math scores they tended to achieve. In addition, it seems that significant variation existed in math achievement across schools in Egypt.

Table 73. Parameter Estimates for Full Model for Egypt

  Model  Type    Parameter                  Estimate     SE      p
  23     Fixed   INT                         276.93     18.27   <.001
                 Professional development      8.32      3.12    .010
                 Extra lessons                -5.60      1.73    .002
                 Self-confidence              27.07      3.29   <.001
                 Valuing math                 16.02      7.95    .044
                 Homework time                 6.97      2.86    .015
                 Home resources               28.72      3.70   <.001
         Random  τ00                        1238.67     35.19   <.001
Table 73 (continued). Parameter Estimates for Full Model for Egypt

  Model  Type    Parameter    Estimate     SE
  23     Random  σ²           4442.64     66.65

Finally, as shown in Table 74, the amount of between-school variance accounted for by Model 23 was 4% larger than that accounted for by Model 14 and 11% larger than that accounted for by Model 22. However, in comparison with Model 18, Model 23 accounted for 1% less variance between schools. Thus, in consideration of these results, Model 18 served as the best model for predicting math achievement in Egypt.

Table 74. Comparison of R² between Model 23 and Previously Constructed Models 14, 18, and 22 for Egypt

  Compared Model   τ00     σ²
  14               0.04   0.00
  18              -0.01   0.00
  22               0.11   0.00
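The selection criterion applied across these comparisons, preferring the candidate model that leaves the least between-school variance unexplained (equivalently, the largest between-school pseudo R²), can be sketched with the τ00 estimates reported for Egypt. The model labels below are descriptive shorthand, not the dissertation's notation:

```python
# Reported between-school variance components (tau00) for Egypt
# (Tables 61, 64, 67, 71, and 73).
TAU00 = {
    "Model 9 (level-1 only)": 1382.87,
    "Model 14 (instructional practices)": 1294.75,
    "Model 18 (teacher background)": 1224.62,
    "Model 22 (school background)": 1387.90,
    "Model 23 (final)": 1238.67,
}

# The preferred model minimizes the remaining between-school variance.
best = min(TAU00, key=TAU00.get)
```

Running this selects Model 18, matching the text's conclusion that the teacher background model, not the final combined model, best predicts math achievement in Egypt.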
Results for South Africa

Evaluation of Missing Data

As a result of the listwise deletion method, the sample size for South Africa was reduced from 8,927 students and 253 schools to 1,564 students and 52 schools. This means only 17.52% of the original sample had complete data on all variables of interest in this study. In order to evaluate the extent to which the data for South Africa were missing completely at random, the missingness on the 19 level-2 variables was correlated. Results of this analysis suggested a nonrandomness of missing data, with the majority of the missingness indicators (16) having moderate to strong correlations with each other (r ranged from .50 to .99). In addition, when missingness was correlated with values of itself as well as values of other variables, only marginal correlations were observed (r = -.27 to .18). In summary, the missing data mechanism for South Africa was not missing completely at random.

Univariate Analysis

A descriptive examination of level-1 variables (i.e., overall math achievement, gender, self-confidence in learning math, valuing of math, time on math homework, extra math lessons, and home resources for learning math) was conducted using SAS 9.13. Of the complete sample of 1,564 eighth-grade students, 773 (49.42%) were male and 791 (50.58%) were female. On average, the weighted overall math achievement for South African students was relatively low, 272.66 (SD = 108.11), with the lowest score of 70.60 and the highest score of 670.84 (see Table 75).

With regard to level-1 predictor variables, it appeared that, on average, eighth-grade students in South Africa had moderate support at home in terms of resources for
learning (M = 1.66, SD = .89), were above the medium level of self-confidence in learning math (M = 1.23, SD = .67) and valuing of math (M = 1.77, SD = .49), spent a modest amount of time on math homework (M = .99, SD = .64), and took extra math lessons about one to two times a week (M = 1.49, SD = 1.13) (see Table 75).

Table 75. Weighted Descriptive Statistics for Level-1 Variables for South Africa (N = 1,564)

Variable                            M        SD      Min     Max
Overall math achievement            272.66   108.11  70.60   670.84
Self-confidence in learning math      1.23     0.67   0.00     2.00
Valuing of math                       1.77     0.49   0.00     2.00
Time on math homework                 0.99     0.64   0.00     2.00
Extra math lessons                    1.49     1.13   0.00     3.00
Home resources for learning math      1.66     0.89   0.00     3.00

Note: When the weight was used to compute means in SAS, skewness and kurtosis were not produced.

In terms of distributions of level-1 variables, the unweighted descriptive results in Table 76 suggested that all variables except valuing of math approximated normality, with skewness and kurtosis values within the range of -1.00 to 1.00.

Table 76. Unweighted Descriptive Statistics for Level-1 Variables for South Africa (N = 1,564)

Variable                            M        SD     Min    Max     Skewness  Kurtosis
Overall math achievement            268.54   94.31  70.60  670.84  1.16      1.67
Self-confidence in learning math      1.23    0.66   0.00    2.00  0.30      0.77
Valuing of math                       1.78    0.48   0.00    2.00  2.15      3.91
Time on math homework                 1.00    0.64   0.00    2.00  0.00      0.58
Extra math lessons                    1.54    1.11   0.00    3.00  0.08      1.35
Home resources for learning math      1.63    0.88   0.00    3.00  0.13      0.68

Similarly, a descriptive analysis was conducted on the 19 predictor variables at the school level. As shown in Table 77, on average, South African students had a modest to moderate percentage of opportunity to learn the math content domains, from 39.32% (SD = 34.32) for data to 75.30% (SD = 24.16) for number. Although not every math teacher reported being prepared to teach math content (M = .75, SD = .44), on average, they
participated in various types of math-related professional development programs (M = 3.04, SD = 1.80) and reported a relatively high level of readiness to teach (from M = 1.37, SD = .66 for data to M = 1.88, SD = .32 for number).

The data also suggested that in about half of the lessons, students were given activities related to math instructional practice (M = 2.18, SD = .23) and math content (M = 1.94, SD = .23). On average, a moderate amount of homework was assigned to the students (M = 1.10, SD = .72). Finally, South African schools tended to have relatively large class sizes of more than 40 students (M = 2.42, SD = .85), and teachers' perception of instructional limitations due to student factors was close to medium (M = .90, SD = .75). On average, the availability of school resources for math instruction was relatively low (M = .77, SD = .65). Noticeably, across the 52 schools, the average math instructional hours per year varied greatly, from 93 to 360, with a mean of 164.14 (SD = 57.11).

Also, as shown in Table 77, the majority of level-2 predictor variables (16 out of 19) appeared to approximate normal distributions, with skewness and kurtosis values within the normality approximation range of -1.00 to 1.00. The three variables that appeared to depart from normality were content-related activities in math lessons, ready to teach number, and average math instructional hours per year.

Table 77. Unweighted Descriptive Statistics for Level-2 Variables for South Africa (N = 52)

Variable                            M       SD     Min    Max     Skewness  Kurtosis
Opportunity to learn number         75.30   24.16  0.00   100.00  0.86      0.34
Opportunity to learn algebra        57.60   30.08  0.00   100.00  0.09      1.03
Opportunity to learn measurement    50.34   32.29  0.00   100.00  0.22      1.03
Opportunity to learn geometry       47.72   27.26  0.00   100.00  0.61      0.21
Opportunity to learn data           39.32   34.72  0.00   100.00  0.36      1.04
Amount of homework assignment        1.10    0.72  0.00     2.00  0.15      1.02
Table 77 (continued)

Variable                                                  M       SD     Min    Max     Skewness  Kurtosis
Instructional practice-related activities in math lessons   2.18    0.23   1.73    2.65  0.16      0.82
Content-related activities in math lessons                  1.94    0.23   1.21    2.22  1.47      2.21
Preparation to teach                                        0.75    0.44   0.00    1.00  1.19      0.61
Ready to teach number                                       1.88    0.32   1.00    2.00  2.48      4.31
Ready to teach algebra                                      1.65    0.52   0.00    2.00  1.10      0.12
Ready to teach measurement                                  1.44    0.67   0.00    2.00  0.80      0.42
Ready to teach geometry                                     1.62    0.60   0.00    2.00  1.32      0.79
Ready to teach data                                         1.37    0.66   0.00    2.00  0.55      0.63
Math-related professional development                       3.04    1.80   0.00    5.00  0.37      1.33
Class size for math instruction                             2.42    0.85   0.00    3.00  1.35      0.97
School resources for math instruction                       0.77    0.65   0.00    2.00  0.25      0.62
Teacher perception of math instructional limitations
  due to student factors                                    0.90    0.75   0.00    2.00  0.16      1.16
Average math instructional hours per year                 164.14   57.11  93.00  360.00  1.55      2.36

Bivariate Analysis

The results of weighted bivariate correlations among the six level-1 variables (i.e., gender, self-confidence in learning math, valuing of math, time on math homework, extra math lessons, and home resources for learning math) are presented in Appendix L. It appeared from these results that the level-1 predictor variables were largely uncorrelated with each other, with correlation coefficients ranging from -.07 between gender and student self-confidence in learning math to .18 between self-confidence in learning math and valuing of math. It was interesting to note that extra math lessons tended to have a negative, albeit weak, relationship with all level-1 variables except student valuing of math (r = -.02 to -.07).
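The ±1.00 screening rule applied in Tables 76 and 77 can be sketched as follows. This uses the simple moment estimators of skewness and excess kurtosis; SAS reports bias-corrected versions, so values will differ slightly from the tables.

```python
import numpy as np

def normality_screen(x, cutoff=1.0):
    """Return (skewness, excess kurtosis, flag); flag is True when either
    statistic falls outside +/- cutoff, the rule used in Tables 76-77.
    Simple (biased) moment estimators, not SAS's corrected G1/G2."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    skew = float((z ** 3).mean())
    kurt = float((z ** 4).mean()) - 3.0
    return skew, kurt, bool(abs(skew) > cutoff or abs(kurt) > cutoff)

rng = np.random.default_rng(0)
print(normality_screen(rng.normal(size=10_000))[2])       # roughly normal sample
print(normality_screen(rng.exponential(size=10_000))[2])  # heavily right-skewed
```

Run on the actual TIMSS item scores, this flags the same kind of departures noted above for valuing of math, content-related activities, ready to teach number, and instructional hours.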
At level 2, unweighted bivariate relationships were estimated for the 19 predictor variables. The correlation matrix for these variables can be found in Appendix M. Unlike at level 1, the correlation coefficients of the level-2 variables had a wider range, from -.37 between instructional practice-related activities and opportunity to learn number to .70 between ready to teach algebra and ready to teach geometry. As expected, correlation coefficients among variables measuring the same construct tended to be stronger than those measuring different constructs. For example, correlation coefficients ranged from .42 to .70 for the ready to teach variables and from .28 to .63 for the opportunity to learn variables. Interestingly, of the 19 predictors, 10 had negative, albeit weak, correlations with school resources.

Evaluation of HLM Assumptions

In order to ensure the tenability of the results produced by the multilevel models in this study, an evaluation of HLM assumptions was performed through visual analysis of both level-1 and level-2 random effects of Model 23. Model 23 was selected because the results of the HLM analysis suggested that it was the most efficient model for predicting math achievement in South Africa (see HLM Analysis for South Africa). The data from Figure 45 suggested that level-1 residuals approximated a normal distribution.
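The level-2 screening just described, finding the extreme off-diagonal entries of a correlation matrix, can be sketched with simulated data (the variable count and values here are hypothetical, not the Appendix M matrix):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(52, 5))   # 52 "schools", 5 hypothetical level-2 predictors
X[:, 1] += 0.8 * X[:, 0]       # induce one strong positive correlation

R = np.corrcoef(X, rowvar=False)          # 5 x 5 correlation matrix
off = R[~np.eye(len(R), dtype=bool)]      # off-diagonal entries only
print(round(off.min(), 2), round(off.max(), 2))
```

Scanning `off.min()` and `off.max()` is exactly how the reported range of -.37 to .70 would be extracted from the full 19-variable matrix.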
Figure 45. Histogram for Level-1 Residuals for South Africa

The scatter plot of level-1 residuals against predicted math achievement, illustrated in Figure 46, suggested that the level-1 variance was not essentially homogeneous. In fact, there appeared to be two clusters of students: one consisting of the majority of students, whose predicted math scores were about equal to or less than 500, and the other consisting of a small number of students whose predicted math scores were above 500. Interestingly, for the latter group, level-1 residuals were large in magnitude and negative in direction, suggesting that their math scores were overpredicted.

Figure 46. Level-1 Residuals by Predicted Math Achievement for South Africa
In order to gain a better understanding of these students, descriptive statistics on the level-1 predictors were computed for the two groups. The data suggested that there were 55 students whose predicted math scores were larger than 499. In comparison to their peers whose predicted math scores were less than 499, these students had a higher level of self-confidence in learning math (M = 1.65 vs. 1.22), took extra math lessons much less frequently (M = .25 vs. 1.58), and had more resources at home for learning math (M = 2.69 vs. 1.59).

In order to evaluate the influence of this group of students with overpredicted math scores on the results, a decision was made to conduct the HLM analysis with two samples: the original sample, in which all of the students were included, and the reduced sample, in which the 55 students with predicted scores larger than 499 were excluded. The following table compares the HLM results of the original sample (N = 1,564) with those of the reduced sample (N = 1,509) for South Africa.

Overall, the results produced by the two datasets were similar. School resources was the only level-2 variable that had a significant relationship with math achievement in the final model. It is worth noting that Models 10-23 for the reduced sample were simpler because they were intercept-only models (i.e., all the slope variances that were not significant in earlier models were set to 0 in Models 10-23), whereas for the original sample, two slope variances, for extra math lessons and self-confidence in learning math, were allowed to be random in Models 10-23. However, for the original sample, none of these slope variances was significant in the final model, which is the same as the final model for the reduced dataset.
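The sensitivity check described above, profiling the overpredicted group and then reanalyzing without it, can be sketched as follows. The data here are simulated stand-ins; only the 499-point cutoff and sample sizes echo the study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 1_564
# Hypothetical student-level data with model-predicted scores attached.
df = pd.DataFrame({
    "predicted":       rng.normal(280, 95, n),
    "self_confidence": rng.integers(0, 3, n).astype(float),
    "extra_lessons":   rng.integers(0, 4, n).astype(float),
})

CUTOFF = 499  # predicted-score cutoff used in the study
over = df["predicted"] > CUTOFF

# Profile the overpredicted group against everyone else.
profile = df.groupby(over)[["self_confidence", "extra_lessons"]].mean()
print(profile.round(2))

# Reduced sample for the sensitivity reanalysis (study: 1,564 -> 1,509).
reduced = df.loc[~over]
print(len(df), len(reduced))
```

Refitting the full model sequence on both `df` and `reduced`, as the study did, shows whether the influential cluster changes the substantive conclusions.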
In terms of level-1 residuals, it is interesting to see that some students' math scores remained overpredicted, even though the magnitude of the residuals appeared smaller in the reduced sample than in the original sample. Based on this analysis, the HLM results for the original sample were presented in this study.

Table 78. Comparisons of HLM Results Produced by the Original Sample and Reduced Sample for South Africa

Model  Original Sample (N = 1,564, highest math score = 670.84)  |  Reduced Sample (N = 1,509, highest math score = 499)
1      ICC = .76  |  ICC = .69
2      Gender: both fixed and random effects not significant  |  Same
3      Self-confidence: significant fixed effect, not random effect  |  Same
4      Valuing math: significant fixed effect, not random effect  |  Same
5      Time on HW: both fixed and random effects not significant  |  Significant fixed effect, not random
6      Extra math lessons: significant fixed and random effects  |  Significant fixed effect, not random
7      All student background variables: gender fixed and random effects not significant; valuing math and time on HW random effects not significant  |  Same
8      Home resources: significant fixed effect, not random effect  |  Same
9      All student-related variables: all fixed and random effects significant  |  All fixed effects significant; all slope variances not significant
10     Opportunity to learn: random coefficient model; one cross-level interaction effect significant  |  Random-intercept-only model: no interaction effects because all slope variances were set to 0;
level-2 effect not significant
11     Homework assignment: not significant  |  Same
12     Activities in math lessons: not significant  |  Same
13     Instructional hours: not significant  |  Same
14     Instructional practice: level-2 not significant; interactions not significant  |  Same
15     Preparation to teach: level-2 not significant; interactions not significant  |  Same
16     Ready to teach: level-2 not significant; interactions not significant  |  Same
17     Professional development: level-2 not significant; interactions not significant  |  Same
18     Teacher background model: level-2 not  |  Fixed effect for ready to teach number
Table 78 (continued)

significant; interactions not significant  |  significant
19     Class size: level-2 not significant; interactions not significant  |  Same
20     Instructional limitation: level-2 not significant; interactions not significant  |  Same
21     School resources: fixed effect for school resources significant  |  Same
22     School background model: fixed effect for school resources significant  |  Same
23     Final model: school resources as the only level-2 variable; fixed effect for school resources significant  |  School resources and ready to teach as level-2 variables; only fixed effect for school resources significant
23     Level-1 residuals by predicted math scores  |  Level-1 residuals by predicted math scores

For the level-2 random effects, the empirical Bayes residuals for the intercept and slope, as well as the empirical Bayes predicted math scores, were used to construct the graphs in Figures 47-50. It is worth noting that the model used to produce these residual data consisted of only one level-2 predictor, school resources. Because this variable had three possible values, there were three predicted values in Figures 47 and 50. As can be seen from Figures 47-48, the level-2 intercept residuals appeared to have an approximately normal distribution and homogeneous variance.
Figure 47. Histogram for Level-2 Intercept Residuals for South Africa

Figure 48. Level-2 Intercept Residuals by Predicted Intercept for South Africa

Likewise, as can be seen from Figures 49-50, the slope residuals for extra math lessons also seemed to have an approximately normal distribution and homogeneous variance.
Figure 49. Histogram for Level-2 Slope (Extra Lessons) Residuals for South Africa

Figure 50. Level-2 Slope (Extra Lessons) Residuals by Predicted Math Achievement for South Africa

HLM Analysis

Unconditional model (Model 1)

The HLM analysis started with the unconditional model, in which no level-1 or level-2 predictors were included. The results of the unconditional model are presented in Table 79. For South Africa, the fixed effect for the intercept was 267.57 (SE = 17.18, p < .001). The amount of variability in math achievement was significantly different across schools in South Africa (τ00 = 9,252.67, SE = 96.19, p < .001). Within schools, the amount of unexplained variance was much smaller than that between schools
(σ² = 2,902.45, SE = 53.87). The computed intraclass correlation (ICC) of .76 indicated that a relatively strong level of natural clustering of students occurred between schools in South Africa. In other words, approximately 76% of the total variance in math scores occurred between schools.

Table 79. Parameter Estimates for Unconditional Model for South Africa

Model  Type    Parameters  Estimates  SE     p
1              ICC             .76
       Fixed   INT          267.57    17.18  <.001
       Random  τ00         9252.67    96.19  <.001
               σ²          2902.45    53.87

Research Question 1

To what extent are student background variables (i.e., gender, self-confidence in learning math, valuing of math, time on math homework, and tutoring in math) associated with TIMSS 2003 eighth-grade math scores in each country?

In order to answer this research question, each of the student background variables was first entered separately into Model 1 to predict math achievement. Then, those that contributed significantly in Models 2-6 were included as a group in Model 7 to predict math achievement. Finally, in order to evaluate model fit in terms of the proportion of variance accounted for, pseudo R² was computed for the current model against previously constructed models. Results of these models (Models 2-6) are presented in Table 80.

The data in Table 80 suggested that three of the five fixed effects estimated in Models 2-6 were statistically significant: student self-confidence in learning math (γ = 27.53, SE = 3.04, p < .001), student valuing of math (γ = 22.48, SE = 3.10, p < .001), and extra math lessons (γ = -12.81, SE = 2.24, p < .001). These data suggested that whereas
self-confidence in learning math (Model 3) and valuing of math (Model 4) had positive relationships with math achievement, extra math lessons (Model 6) had an inverse relationship with math achievement. Also, only student self-confidence in learning math and extra math lessons had statistically significant slope variances (τ = 117.99 and 103.15, SE = 10.86 and 10.16, p < .001 and .011, respectively). This suggests that schools in South Africa tended to differ significantly in the relationship between math achievement and student self-confidence in learning math and extra math lessons.

An examination of the pseudo R² values across the five models (Models 2-6) suggested that adding individual predictors separately to the unconditional model (Model 1) resulted in a reduction of the between-school variance in three models (i.e., 3%, 9%, and 19% in Models 2, 3, and 4, respectively) and an increase of the between-school variance in two models (12% and 8% in Models 5 and 6, respectively). For the within-school variance, however, some reduction was noted in all the models, between 1% and 13% (see Table 80).

Table 80. Parameter Estimates for Models 2-6 (Level-1 Student Background) for South Africa

Model  Type    Parameters       Estimates  SE     p
2      Fixed   INT              265.72     17.00  <.001
               Gender             3.55      3.44   .307
       Random  τ00             9019.34     94.97  <.001
               Gender            66.14      8.13   .216
               σ²              2885.06     53.71
       Pseudo R²   τ00: 0.03   σ²: 0.01
3      Fixed   INT              232.96     17.39  <.001
               Self-confidence   27.53      3.04  <.001
       Random  τ00             8445.69     91.90  <.001
               Self-confidence  117.99     10.86   .019
               σ²              2530.78     50.31
       Pseudo R²   τ00: 0.09   σ²: 0.13
4      Fixed   INT              227.84     16.36  <.001
               Valuing math      22.48      3.10  <.001
Table 80 (continued)

Model  Type    Parameters       Estimates  SE      p
4      Random  τ00             7486.31     86.52   <.001
               Valuing math      40.64      6.38   >.500
               σ²              2774.86     52.68
       Pseudo R²   τ00: 0.19   σ²: 0.04
5      Fixed   INT              263.65     18.26   <.001
               Homework time      3.75      2.46    .133
       Random  τ00            10375.53    101.86   <.001
               Homework time     39.83      6.31    .355
               σ²              2887.05     53.73
       Pseudo R²   τ00: -0.12   σ²: 0.01
6      Fixed   INT              283.36     17.81   <.001
               Extra lessons    -12.81      2.24   <.001
       Random  τ00             9979.98     99.90   <.001
               Extra lessons    103.15     10.16    .011
               σ²              2703.82     52.00
       Pseudo R²   τ00: -0.08   σ²: 0.07

As a next step of model building, all of the student background variables (i.e., gender, self-confidence in learning math, valuing of math, time on math homework, and tutoring in math) were included in the combined model, Model 7, to predict math achievement. Notably, in the presence of the other level-1 variables, all but gender had statistically significant fixed effects. With a fixed effect of -11.97 (SE = 1.79, p < .001) for extra math lessons, it could be inferred that for each unit increase in extra math lessons (i.e., from 0 for never to 3 for daily), students' math scores were expected to decrease by 11.97 points, controlling for the other predictors in the model. Similarly, with a fixed effect of 24.67 (SE = 2.58, p < .001) for self-confidence in learning math, it could be interpreted that for each unit increase in the level of self-confidence in learning math (i.e., from 0 for low to 2 for high), students' math scores were expected to improve by 24.67 points, controlling for the other predictors in the model. Likewise, each unit change in student valuing of math was associated with an increase of 16.24
points (SE = 2.80, p < .001) in math achievement, and each unit change in the time students spent on homework was associated with an increase of 4.65 points (SE = 2.28, p = .046), after adjusting for the other predictors in the model.

Interestingly, of all the random effects in this model, only those of extra math lessons and student self-confidence in learning math were statistically significant (τ = 66.25 and 76.44, SE = 8.14 and 8.74, p = .010 and .029, respectively). This indicates that the observed relationships between math achievement and extra math lessons and student self-confidence in learning math varied significantly across schools in South Africa.

Table 81. Parameter Estimates for Model 7 (Level-1 Student Background) for South Africa

Model  Type    Parameters       Estimates  SE      p
7      Fixed   INT              217.00     18.96   <.001
               Gender             3.58      2.86    .216
               Extra lessons    -11.97      1.79   <.001
               Self-confidence   24.67      2.58   <.001
               Valuing math      16.24      2.80   <.001
               Homework time      4.65      2.28    .046
       Random  τ00            10418.03    102.07   <.001
               Gender            41.14      6.41    .205
               Extra lessons     66.25      8.14    .010
               Self-confidence   76.44      8.74    .029
               Valuing math      26.42      5.14    .330
               Homework time     26.92      5.19   >.500
               σ²              2286.76     47.82

An evaluation of model fit was also conducted between Model 7 and the previously constructed models, Models 2-6. As can be seen from Table 82, the inclusion of the student background variables in Model 7 resulted in some reduction in the amount of unexplained variance in math achievement within schools (from 10% when compared with Model 3 to 21% when compared with Models 2 and 5). Between schools, however, the amount of variance appeared to increase notably relative to Models 2 to 4, from 16% to 39%. In sum, Model 7
was more efficient than the earlier models in that it accounted for more variance in math achievement within schools in South Africa. However, it appeared to be less efficient than the previously constructed models in that it accounted for less variance in math achievement between schools.

Table 82. Comparison of R² between Model 7 and Previously Constructed Models for South Africa

Compared Model   τ00     σ²
7 vs. 2         -0.16    0.21
7 vs. 3         -0.23    0.10
7 vs. 4         -0.39    0.18
7 vs. 5          0.00    0.21
7 vs. 6         -0.04    0.15

Research Question 2

To what extent are home resources variables (i.e., availability of a calculator, computer, and desk for student use) associated with TIMSS 2003 eighth-grade math scores in each country?

When the level-1 predictor home resources was added to the unconditional model to predict math achievement, a reduction of 1% in the within-school variance and 25% in the between-school variance was noted (see Table 83). As a fixed effect, home resources was statistically significant (γ = 7.55, SE = 2.30, p = .002), meaning that each unit change in home resources (i.e., from 0 to 3) was associated with an increase of 7.55 points in student math achievement, without controlling for other variables in the model. As a random effect, home resources was not statistically significant (τ = 53.80, SE = 7.33, p = .118). This indicates that the relationship between home resources and math achievement did not vary significantly across schools in South Africa.
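Interpreting these fixed effects is simple linear arithmetic: a coefficient times the change in the predictor gives the expected change in the math score, holding the other terms fixed. A sketch using the Model 8 home-resources estimate (γ = 7.55); the school-level random effect is ignored, so this is the model-implied average, not any particular school's prediction.

```python
# Fixed effects from the home-resources model (Model 8, Table 83).
INTERCEPT = 253.63
HOME_RESOURCES = 7.55  # points per unit of home resources (scale 0-3)

def expected_score(home_resources: float) -> float:
    """Model-implied average math score, ignoring the school random effect."""
    return INTERCEPT + HOME_RESOURCES * home_resources

# Moving across the full 0-to-3 range of home resources:
print(round(expected_score(3) - expected_score(0), 2))  # -> 22.65
```

The same arithmetic reads off the Model 7 effects, e.g. a one-unit rise in extra math lessons shifts the expected score by -11.97 points.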
Table 83. Parameter Estimates for Level-1 Home Resources Model for South Africa

Model  Type    Parameters       Estimates  SE     p
8      Fixed   INT              253.63     15.49  <.001
               Home resources     7.55      2.30   .002
       Random  τ00             6957.50     83.41  <.001
               Home resources    53.80      7.33   .118
               σ²              2860.28     53.48
       Pseudo R²   τ00: 0.25   σ²: 0.01

Given the findings obtained from Models 7 and 8, five of the six student-related variables were entered into the unconditional model to make Model 9. Gender was excluded from Model 9 because both its fixed and random effects were not statistically significant in Model 7. Also, in Model 9, the slope variances for student valuing of math and time spent on homework were set to 0 because they were not statistically significant in earlier models.

As can be seen from Table 84, all of the level-1 variables had statistically significant fixed effects. Of the five variables in the model, student self-confidence in learning math appeared to have the strongest positive relationship with math achievement (γ = 24.17, SE = 2.72, p < .001), followed by student valuing of math, with a fixed effect of 16.23 (SE = 2.86, p < .001). Next were home resources (γ = 6.81, SE = 1.92, p < .001) and time spent on homework (γ = 5.33, SE = 2.22, p = .017). As for extra math lessons, an inverse relationship was noted between this predictor and math achievement (γ = -12.54, SE = 1.81, p < .001). These results suggest that greater self-confidence in learning math, higher valuing of math, more time spent on homework, and more resources at home were associated with higher math scores. In
contrast, it appears that the more frequently students took extra math lessons, the poorer the math scores they tended to earn.

In terms of random effects, all were statistically significant, suggesting that a considerable amount of between-school variance remained to be explained. In addition, it could be inferred that the relationships between math achievement and extra math lessons and student self-confidence in learning math varied significantly across schools.

Compared with Models 7 and 8, Model 9 appeared more efficient because it accounted for a higher amount of the explained variance between schools (i.e., 7% when compared to Model 7). In terms of the explained variance within schools, an increase of up to 20% was observed when compared with Model 8. As a result of these comparisons, Model 9 was selected as the foundational level-1 model for further examination of the relationships between level-2 predictors and math achievement.

Table 84. Parameter Estimates for Combined Level-1 Predictors Model for South Africa

Model  Type    Parameters       Estimates  SE     p
9      Fixed   INT              208.39     19.23  <.001
               Extra lessons    -12.54      1.81  <.001
               Self-confidence   24.17      2.72  <.001
               Valuing math      16.23      2.86  <.001
               Homework time      5.33      2.22   .017
               Home resources     6.81      1.92   .001
       Random  τ00             9644.91     98.21  <.001
               Extra lessons     68.25      8.26   .045
               Self-confidence   69.04      8.31   .034
               σ²              2292.92     47.88
       Pseudo R² (vs. Model 7)   τ00: 0.07    σ²: 0.00
       Pseudo R² (vs. Model 8)   τ00: -0.39   σ²: 0.20
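These pseudo R² and ICC figures all derive from the two variance components of the fitted models; for instance, the ICC of .76 reported in Table 79 can be reproduced directly from τ00 and σ²:

```python
def icc(tau00: float, sigma2: float) -> float:
    """Intraclass correlation: the share of total variance lying between
    schools in a two-level model."""
    return tau00 / (tau00 + sigma2)

# Unconditional-model estimates for South Africa (Table 79).
print(round(icc(9252.67, 2902.45), 2))  # -> 0.76
```

A high ICC like this one is the empirical justification for the two-level HLM approach: ignoring the school level would discard three-quarters of the variance structure.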
Research Question 3

To what extent are instructional variables (i.e., opportunity to learn, activities in math lessons, amount of homework assignment, and instructional time) associated with TIMSS 2003 eighth-grade math scores in each country?

In addressing this research question, a model-building strategy similar to that used for Research Question 1 was applied. That is, each of the level-2 instructional practice variables was first added to the foundational level-1 model (Model 9) to make Models 10-13. Then, the variables with significant fixed effects in Models 10-13 were included as a group in the combined instructional practices model, Model 14. It is important to note that in these models, all possible cross-level interactions between level-1 and level-2 predictors were allowed. The results of Models 10-14 are presented in Tables 84-87.

As can be seen in Table 85, Model 10 produced one statistically significant cross-level interaction, between opportunity to learn data and student self-confidence in learning math (γ = -.14, SE = .05, p = .015). Also, in this model, all of the fixed effects except those of the level-2 predictors were statistically significant. This means that, whereas all of the level-1 predictors appear to be good predictors of math achievement in South Africa, there was not enough evidence to make a similar statement about the opportunity to learn variables.

Similar to Model 10, none of the level-2 predictors was statistically significant in Models 11-13. In contrast, all of the level-1 predictors, except extra math lessons and student self-confidence in learning math in Model 12, were statistically significant. In terms of random effects, the slope variance of student self-confidence in learning math in Model 11 was the only one that was not statistically significant (τ =
63.16, SE = 7.95, p = .050). This means that, in South Africa, the relationship between math achievement and self-confidence did not appear to differ from one school to another.

Table 85. Parameter Estimates for Level-2 Instructional Practices Models for South Africa

Model  Type    Parameters                                 Estimates  SE      p
10     Fixed   INT                                         212.72     55.63   .001
               Opportunity_algebra                           0.15      0.57   .798
               Opportunity_data                              0.19      0.65   .775
               Opportunity_geometry                          1.42      0.72   .055
               Opportunity_measurement                       1.12      0.83   .183
               Opportunity_number                            0.08      0.74   .918
               Extra lessons                                18.20      6.28   .006
               Opportunity_algebra × Extra lessons           0.02      0.07   .753
               Opportunity_data × Extra lessons              0.03      0.06   .647
               Opportunity_geometry × Extra lessons          0.11      0.08   .172
               Opportunity_measurement × Extra lessons       0.13      0.08   .102
               Opportunity_number × Extra lessons            0.10      0.09   .267
               Self-confidence                              28.12      8.20   .002
               Opportunity_algebra × Self-confidence         0.09      0.09   .334
               Opportunity_data × Self-confidence           -0.14      0.05   .015
               Opportunity_geometry × Self-confidence        0.10      0.09   .268
               Opportunity_measurement × Self-confidence     0.11      0.09   .221
               Opportunity_number × Self-confidence          0.04      0.12   .766
               Valuing math                                 15.55      3.00  <.001
               Homework time                                 5.45      2.19   .013
               Home resources                                6.78      1.92   .001
       Random  τ00                                        7986.85     89.37  <.001
               Extra lessons                                60.36      7.77   .020
               Self-confidence                              26.38      5.14   .205
               σ²                                         2299.31     47.95
11     Fixed   INT                                         181.83     23.53  <.001
               Homework assignment                          25.05     16.76   .141
               Extra lessons                               -10.14      2.74   .001
               Homework assignment × Extra lessons           2.24      2.24   .323
               Self-confidence                              19.10      3.92  <.001
               Homework assignment × Self-confidence         4.70      2.46   .062
               Valuing math                                 16.01      2.86  <.001
               Homework time                                 5.43      2.21   .014
               Home resources                                6.70      1.93   .001
       Random  τ00                                        9574.97     97.85  <.001
               Extra lessons                                68.81      8.29   .038
               Self-confidence                              63.16      7.95   .050
               σ²                                         2294.04     47.90
12     Fixed   INT                                         434.97    210.43   .044
               Content_activities                           27.10     81.45   .741
Table 85 (continued)

Model  Type    Parameters                                 Estimates  SE      p
12     Fixed   Instruction_activities                      132.11    100.59   .195
               Extra lessons                                25.45     21.99   .253
               Content_activities × Extra lessons            9.79      7.15   .177
               Instruction_activities × Extra lessons        2.57      9.65   .791
               Self-confidence                              15.11     24.65   .542
               Content_activities × Self-confidence          8.73     10.37   .404
               Instruction_activities × Self-confidence     12.17     14.78   .414
               Valuing math                                 15.73      2.99  <.001
               Homework time                                 4.91      2.27   .031
               Home resources                                6.77      1.89   .001
       Random  τ00                                        9238.02     96.11  <.001
               Extra lessons                                82.26      9.07   .032
               Self-confidence                              68.49      8.28   .038
               σ²                                         2286.29     47.82
13     Fixed   INT                                         208.97     19.30  <.001
               Instructional hours                           0.14      0.18   .460
               Extra lessons                               -12.62      1.82  <.001
               Instructional hours × Extra lessons           0.00      0.03   .903
               Self-confidence                              24.18      2.72  <.001
               Instructional hours × Self-confidence         0.01      0.03   .862
               Valuing math                                 16.26      2.85  <.001
               Homework time                                 5.28      2.21   .017
               Home resources                                6.79      1.93   .001
       Random  τ00                                        9738.81     98.69  <.001
               Extra lessons                                68.68      8.29   .038
               Self-confidence                              72.85      8.54   .028
               σ²                                         2293.84     47.89

In terms of model fit, in comparison with the foundational level-1 model (Model 9), Model 10 appeared to be the most efficient model because the amount of explained variance between schools increased by 17% (see Table 86). As for the within-school variance, no significant difference was observed between Models 10-13 and Model 9 (pseudo R² = 0).
Table 86. Comparison of Pseudo-R² between Level-2 Instructional Practice Models and the Foundational Level-1 Model
Compared Models  τ00  σ²
10 vs. 9  0.17  0.00
11 vs. 9  0.01  0.00
12 vs. 9  0.04  0.00
13 vs. 9  0.07  0.00

Figure 51 presents the nature of the interaction between student self-confidence in learning math and opportunity to learn data produced by Model 10. For students who expressed low or medium self-confidence in learning math, increased opportunity to learn data appeared to be associated with increased math scores. In contrast, for students with a high level of self-confidence in learning math, increased opportunity to learn data seemed to be associated with lower math scores. Thus, high opportunity to learn data appeared to work best for students with a low level of self-confidence in learning math. In comparing the three groups of students, however, those who reported a high level of self-confidence in learning math consistently outperformed their peers who reported a low or medium level. The differences in math achievement among the three groups were large when there was little opportunity to learn data and became substantially smaller when the opportunity to learn data was higher.
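The converging-lines pattern just described can be reproduced from the fixed part of a cross-level interaction model, in which the slope for opportunity to learn data depends on the self-confidence group. The coefficients below are hypothetical, chosen only to mimic the shape of Figure 51, not taken from Table 85:

```python
def predicted_math(g00, g01, g10, g11, otl_data, confidence):
    """Fixed part of a model with one cross-level interaction:
    Y-hat = g00 + g01*OTL + g10*CONF + g11*(OTL x CONF)."""
    return g00 + g01 * otl_data + g10 * confidence + g11 * otl_data * confidence

# Hypothetical coefficients: a positive OTL slope, a large confidence
# effect, and a negative interaction that narrows the gap as OTL grows.
G = (400.0, 0.5, 30.0, -0.3)
for conf_level, label in [(-1, "low"), (0, "medium"), (1, "high")]:
    at_0 = predicted_math(*G, 0, conf_level)
    at_100 = predicted_math(*G, 100, conf_level)
    print(f"{label}: {at_0:.0f} -> {at_100:.0f}")
```

With these made-up values, the high-confidence group starts well above the others at OTL = 0, but the three predicted lines converge as opportunity to learn data approaches its maximum, mirroring the narrowing gaps described above.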
[Figure 51. Interaction between Opportunity to Learn Data and Student Self-confidence in Learning Math in South Africa: predicted math achievement plotted against opportunity to learn data (0–100) for the low-, medium-, and high-confidence groups.]

Similar to what was observed in Models 10–13, when all of the level-2 instructional practice variables were used together to predict math achievement, Model 14 produced neither statistically significant cross-level interaction effects nor statistically significant level-2 fixed effects (see Table 86). This means that, in South Africa, instructional practices-related predictors did not appear to contribute significantly to the statistical prediction of student math achievement. As for the level-1 fixed effects, three were statistically significant: valuing of math (γ = 15.03, SE = 3.08, p < .001), time students spent on homework (γ = 5.08, SE = 2.22, p = .022), and home resources (γ = 6.73, SE = 1.91, p = .001). These results suggested that increased home resources, time spent on homework, or valuing of math tended to be associated with increased student math scores. As for the random effects, whereas the variances of the intercept (τ00 = 8146.44, SE = 90.26, p < .001)
and the slope of extra math lessons (τ = 67.99, SE = 8.25, p = .016) were statistically significant, the variance of the slope of student self-confidence in learning math was not (τ = 31.96, SE = 5.65, p = .152). This suggested that schools in South Africa differed significantly in math achievement as well as in the relationship between math achievement and extra math lessons.

Table 87. Parameter Estimates for the Combined Level-2 Instructional Practices Model for South Africa
Parameter  Estimate  SE  p
Model 14
  Fixed:
    INT  350.31  177.78  .055
    Homework assignment  28.07  20.06  .169
    Opportunity_algebra  0.24  0.60  .689
    Opportunity_data  0.23  0.62  .717
    Opportunity_geometry  1.31  0.68  .062
    Opportunity_measurement  1.09  0.74  .151
    Opportunity_number  0.31  0.72  .667
    Content_activities  8.22  85.80  .925
    Instruction_activities  57.18  80.46  .481
    Instructional hours  0.08  0.23  .732
    Extra lessons  21.92  19.05  .257
    Homework assignment × Extra lessons  3.72  2.40  .128
    Opportunity_algebra × Extra lessons  0.02  0.07  .783
    Opportunity_data × Extra lessons  0.02  0.05  .768
    Opportunity_geometry × Extra lessons  0.11  0.08  .172
    Opportunity_measurement × Extra lessons  0.15  0.08  .063
    Opportunity_number × Extra lessons  0.09  0.08  .290
    Content_activities × Extra lessons  13.62  7.71  .084
    Instruction_activities × Extra lessons  8.82  9.34  .351
    Instructional hours × Extra lessons  0.02  0.03  .535
    Self-confidence  24.95  25.50  .334
    Homework assignment × Self-confidence  4.60  2.95  .126
    Opportunity_algebra × Self-confidence  0.07  0.10  .468
    Opportunity_data × Self-confidence  0.12  0.06  .050
    Opportunity_geometry × Self-confidence  0.09  0.09  .315
    Opportunity_measurement × Self-confidence  0.10  0.09  .299
    Opportunity_number × Self-confidence  0.07  0.11  .494
    Content_activities × Self-confidence  2.17  7.12  .762
    Instruction_activities × Self-confidence  2.47  11.06  .824
    Instructional hours × Self-confidence  0.00  0.03  .966
    Valuing math  15.03  3.08  <.001
    Homework time  5.08  2.22  .022
    Home resources  6.73  1.91  .001
  Random:
    τ00  8146.44  90.26  <.001
    Extra lessons  67.99  8.25  .016
Table 87 (continued). Parameter Estimates for the Combined Level-2 Instructional Practices Model for South Africa
Parameter  Estimate  SE  p
Model 14
  Random (continued):
    Self-confidence  31.96  5.65  .152
    σ²  2296.95  47.93

As evident in Table 87, the amount of explained variance between schools in Model 14 was larger than in all of the compared models except Model 10. Specifically, using Model 14 instead of Model 9 or Model 13 was likely to increase the explained variance between schools by 16%. However, changing from Model 10 to Model 14 reduced the amount of explained variance between schools by 2%. In terms of the variance within schools, no change was noted across these models. Thus, in consideration of the amount of explained variance between schools, Model 10 serves as the most efficient instructional practice model for predicting math achievement.

Table 88. Comparison of Pseudo-R² between Model 14 and Previously Constructed Models 9–13 for South Africa
Compared Model  τ00  σ²
9  0.16  0.00
10  0.02  0.00
11  0.15  0.00
12  0.12  0.00
13  0.16  0.00

Research Question 4
To what extent are teacher-related variables (i.e., preparation to teach, ready to teach, and professional development) associated with TIMSS 2003 eighth-grade math scores in each country?
Similarly, incremental model building strategies were applied to examine the relationships among teacher-related variables (i.e., preparation to teach, ready to teach,
and professional development) and math achievement. Results of these models (Models 15–18) are presented in Tables 88–90.
The data in Table 88 show that none of the teacher background-related variables was statistically significant. For the level-1 fixed effects, however, all were statistically significant except for that of student self-confidence in learning math in Model 16 (γ = 18.82, SE = 11.81, p = .118). These data suggest that student-related predictors tended to have stronger relationships with math achievement than teacher-related predictors across South African schools. Similarly, all of the random effects were statistically significant in these models except for that of student self-confidence in learning math in Model 15 (τ = 56.04, SE = 7.49, p = .068). It could be inferred from these results that a significant amount of variance in math achievement remained unexplained, both within and between schools.
When comparing the proportion of variance accounted for by Models 15–17 with that of the foundational level-1 model (Model 9), it appears that Model 15 was the most efficient (see Table 88). Specifically, whereas the inclusion of readiness to teach math content in Model 16 and math-related professional development in Model 17 resulted in a reduction of 1–4% in the amount of explained variance between schools, the addition of preparation to teach math content in Model 15 resulted in an increase of 4% in the explained between-school variance. Further, no improvement in the within-school variance was noted from using these models instead of Model 9.

Table 89. Parameter Estimates for Teacher Background Models for South Africa
Parameter  Estimate  SE  p
Model 15
  Fixed:
    INT  247.45  43.71  <.001
    Preparation  50.91  47.04  .285
    Extra lessons  14.67  2.96  <.001
    Preparation × Extra lessons  2.76  3.63  .450
Table 89 (continued). Parameter Estimates for Teacher Background Models for South Africa
Parameter  Estimate  SE  p
Model 15 (continued)
  Fixed:
    Self-confidence  18.57  6.67  .008
    Preparation × Self-confidence  7.69  6.90  .271
    Valuing math  16.16  2.84  <.001
    Homework time  5.42  2.22  .015
    Home resources  6.82  1.93  .001
  Random:
    τ00  9272.88  96.30  <.001
    Extra lessons  66.09  8.13  .043
    Self-confidence  56.04  7.49  .068
    σ²  2296.36  47.92
Model 16
  Fixed:
    INT  147.83  78.60  .066
    Ready_number  6.77  41.09  .870
    Ready_algebra  16.38  18.91  .391
    Ready_measurement  33.37  27.62  .233
    Ready_geometry  0.43  14.52  .977
    Ready_data  4.59  33.36  .892
    Extra lessons  20.54  8.36  .018
    Ready_number × Extra lessons  5.33  4.40  .232
    Ready_algebra × Extra lessons  1.40  1.77  .434
    Ready_measurement × Extra lessons  1.73  2.87  .549
    Ready_geometry × Extra lessons  2.14  1.83  .249
    Ready_data × Extra lessons  1.49  2.90  .610
    Self-confidence  18.82  11.81  .118
    Ready_number × Self-confidence  0.95  7.09  .895
    Ready_algebra × Self-confidence  1.51  5.60  .789
    Ready_measurement × Self-confidence  3.41  5.01  .499
    Ready_geometry × Self-confidence  0.39  4.18  .927
    Ready_data × Self-confidence  4.63  5.64  .416
    Valuing math  15.60  2.96  <.001
    Homework time  5.14  2.17  .018
    Home resources  6.88  1.94  .001
  Random:
    τ00  10042.14  100.21  <.001
    Extra lessons  80.78  8.99  .024
    Self-confidence  81.36  9.02  .024
    σ²  2288.86  47.84
Model 17
  Fixed:
    INT  195.86  17.71  <.001
    Professional development  3.95  8.55  .646
    Extra lessons  12.42  1.90  <.001
    Professional development × Extra lessons  0.02  0.92  .981
    Self-confidence  27.31  3.84  <.001
    Professional development × Self-confidence  1.01  1.33  .451
    Valuing math  16.26  2.88  <.001
    Homework time  5.32  2.22  .017
    Home resources  6.77  1.91  .001
  Random:
    τ00  9723.46  98.61  <.001
Table 89 (continued). Parameter Estimates for Teacher Background Models for South Africa
Parameter  Estimate  SE  p
Model 17 (continued)
  Random:
    Extra lessons  69.35  8.33  .037
    Self-confidence  67.66  8.23  .038
    σ²  2294.58  47.90

Similar to the results observed in the preceding models, Model 18, with all of the teacher-related variables included (i.e., preparation to teach, readiness to teach math topics, and math-related professional development), yielded neither statistically significant cross-level interaction effects nor statistically significant level-2 main effects (see Table 89). What were found statistically significant in this model were four level-1 fixed effects and all of the random effects except for that of student self-confidence in learning math (τ = 65.08, SE = 8.07, p = .052).
Of the significant level-1 effects, extra math lessons appeared to have the strongest, yet negative, relationship with math achievement (γ = 22.57, SE = 7.41, p = .004). Next was student valuing of math, with a positive fixed effect of 15.58 (SE = 2.97, p < .001), followed by home resources and time students spent on homework (γ = 6.82 and 5.17, SE = 1.93 and 2.18, p = .001 and .018, respectively). These results suggest that for every unit increase in extra math lessons, student math achievement tended to decrease by 22.57 points. In contrast, with every unit increase in student valuing of math, students tended to improve their math scores by 15.58 points. Similarly, the more home resources students had to support their learning, the better they tended to perform in math. Likewise, the more time students engaged in homework, the higher the math scores they tended to achieve. As for the random effects, it could be inferred that a considerable amount of variance remained to be explained within and between schools in South Africa.
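Read as per-unit changes, significant fixed effects like these can be combined into a simple linear predictor for the expected shift in a student's math score. The sketch below is only illustrative: the coefficient for extra math lessons is entered as negative to reflect the inverse relationship described above, and the helper name is hypothetical:

```python
# Significant level-1 fixed effects reported for Model 18 (Table 90); extra
# lessons is entered with a negative sign per the inverse relationship
# described in the text.
MODEL18_EFFECTS = {
    "extra_lessons": -22.57,
    "valuing_math": 15.58,
    "home_resources": 6.82,
    "homework_time": 5.17,
}

def expected_score_change(deltas, effects=MODEL18_EFFECTS):
    """Expected change in math score for the given per-predictor unit
    changes, holding all other predictors constant (fixed part only)."""
    return sum(effects[name] * delta for name, delta in deltas.items())

# One extra unit of valuing math combined with one extra unit of extra lessons:
print(expected_score_change({"valuing_math": 1, "extra_lessons": 1}))
```

Because the fixed part is linear, effects of simultaneous unit changes simply add, which is how the "5.35 to 28 points per unit" style of interpretation used in this chapter works.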
Table 90. Parameter Estimates for the Combined Teacher Background Model for South Africa
Parameter  Estimate  SE  p
Model 18
  Fixed:
    INT  163.45  78.70  .043
    Preparation  83.39  49.15  .096
    Professional development  3.98  7.96  .619
    Ready_number  7.27  41.40  .862
    Ready_algebra  8.19  21.07  .699
    Ready_measurement  33.79  18.10  .068
    Ready_geometry  23.03  19.16  .236
    Ready_data  5.46  23.51  .817
    Extra lessons  22.57  7.41  .004
    Preparation × Extra lessons  3.07  3.63  .403
    Professional development × Extra lessons  0.23  0.92  .804
    Ready_number × Extra lessons  4.97  3.81  .199
    Ready_algebra × Extra lessons  3.25  2.81  .254
    Ready_measurement × Extra lessons  2.04  2.59  .435
    Ready_geometry × Extra lessons  3.37  2.05  .106
    Ready_data × Extra lessons  0.81  2.52  .749
    Self-confidence  20.58  13.48  .134
    Preparation × Self-confidence  8.69  7.17  .232
    Professional development × Self-confidence  1.38  1.29  .289
    Ready_number × Self-confidence  0.94  7.61  .903
    Ready_algebra × Self-confidence  2.30  5.60  .683
    Ready_measurement × Self-confidence  2.00  4.39  .651
    Ready_geometry × Self-confidence  0.97  4.49  .830
    Ready_data × Self-confidence  3.43  4.90  .488
    Valuing math  15.58  2.97  <.001
    Homework time  5.17  2.18  .018
    Home resources  6.82  1.93  .001
  Random:
    τ00  9279.73  96.33  <.001
    Extra lessons  84.52  9.19  .019
    Self-confidence  65.08  8.07  .052
    σ²  2292.84  47.88

In terms of model fit, the results in Table 90 show that Model 18 worked as well as Model 15 but better than Models 9, 16, and 17. Whereas there was no difference in the amount of between-school variance accounted for by Models 15 and 18, there was an increase of 4% to 8% in the amount of between-school variance accounted for when using Model 18 as opposed to Models 9, 16, or 17.
Table 91. Comparison of Pseudo-R² between Model 18 and Previously Constructed Models 9 and 15–17 for South Africa
Compared Model  τ00  σ²
9  0.04  0.00
15  0.00  0.00
16  0.08  0.00
17  0.05  0.00

Research Question 5
To what extent are school-related variables (i.e., class size, school resources for math instruction, and teacher perception of math instructional limitations due to student factors) associated with TIMSS 2003 eighth-grade math scores in each country?
Table 91 provides a summary of the results for Models 19–21, in which the school-related variables (i.e., class size, school resources for math instruction, and teacher perception of math instructional limitations due to student factors) were separately included in the models to predict math achievement. In these models, no statistically significant cross-level interaction effects were detected. However, there was one statistically significant level-2 main effect: school resources for math instruction in Model 21. With a fixed effect of 57.60 (SE = 26.97, p = .037), it could be inferred that for every unit increase in school resources for math instruction, students were expected to improve their math scores by 57.60 points, after adjusting for other predictors in the model. Similarly, with all of the level-1 fixed effects significant, it could be interpreted that student-related variables contributed significantly to the prediction of math achievement in South Africa.

Table 92. Parameter Estimates for School Background Models for South Africa
Parameter  Estimate  SE  p
Model 19
  Fixed:
    INT  237.92  72.48  .002
    Class size  12.77  27.26  .641
    Extra lessons  13.19  6.30  .041
Table 92 (continued). Parameter Estimates for School Background Models for South Africa
Parameter  Estimate  SE  p
Model 19 (continued)
  Fixed:
    Class size × Extra lessons  0.35  2.45  .887
    Self-confidence  28.12  6.53  <.001
    Class size × Self-confidence  1.59  2.92  .589
    Valuing math  16.36  2.91  <.001
    Homework time  5.35  2.23  .017
    Home resources  6.79  1.93  .001
  Random:
    τ00  9590.79  97.93  <.001
    Extra lessons  64.31  8.02  .034
    Self-confidence  71.39  8.45  .034
    σ²  2293.58  47.89
Model 20
  Fixed:
    INT  208.91  27.95  <.001
    Instructional limitation  0.59  20.89  .978
    Extra lessons  13.35  2.64  <.001
    Instructional limitation × Extra lessons  0.85  2.41  .727
    Self-confidence  24.83  3.35  <.001
    Instructional limitation × Self-confidence  0.65  2.24  .772
    Valuing math  16.21  2.87  <.001
    Homework time  5.30  2.23  .018
    Home resources  6.83  1.95  .001
  Random:
    τ00  9828.80  99.14  <.001
    Extra lessons  69.71  8.35  .035
    Self-confidence  71.24  8.44  .029
    σ²  2295.12  47.91
Model 21
  Fixed:
    INT  161.37  17.91  <.001
    School resources  57.60  26.97  .037
    Extra lessons  8.58  2.43  .001
    School resources × Extra lessons  4.83  2.58  .066
    Self-confidence  23.94  2.63  <.001
    School resources × Self-confidence  0.47  2.65  .860
    Valuing math  16.34  2.88  <.001
    Homework time  5.34  2.21  .016
    Home resources  6.74  1.88  .001
  Random:
    τ00  8319.70  91.21  <.001
    Extra lessons  57.07  7.55  .019
    Self-confidence  71.57  8.46  .028
    σ²  2294.88  47.90

In comparing Models 19–21 with Model 9 in terms of the proportion of variance accounted for, Model 21 appeared to work best (see Table 92). Specifically, the use of Model 21 as opposed to Model 9 increased the amount of between-school
variance accounted for by 14%, whereas the use of Model 19 resulted in an increase in the explained between-school variance of 1% compared to Model 9, and the use of Model 20 resulted in a reduction of 2% in the between-school variance compared to Model 9.

Table 93. Comparison of Pseudo-R² between Level-2 School Background Models and the Foundational Level-1 Model for South Africa
Compared Models  τ00  σ²
19 vs. 9  0.01  0.00
20 vs. 9  0.02  0.00
21 vs. 9  0.14  0.00

Similar to Model 21, Model 22, with all of the school background-related predictors, did not produce any statistically significant cross-level interaction effects (see Table 93). As a level-2 predictor, however, school resources was statistically significant. With a fixed effect of 55.75 (SE = 25.34, p = .033), it could be inferred that for each unit increase in school resources for math instruction, student math scores were expected to increase by 55.75 points, after adjusting for other predictors in the model. Interestingly, in the presence of all the school-related predictors, extra math lessons became statistically nonsignificant (γ = 8.77, SE = 5.28, p = .103). All other level-1 predictors in the model appeared to make statistically significant contributions to the prediction of math achievement. With fixed effects ranging from 5.35 (SE = 2.22, p = .016) for time students spent on homework to 28.00 (SE = 6.68, p < .01) for student self-confidence in learning math, it could be interpreted that students could increase their math scores by 5.35 to 28 points for each unit increase in the corresponding predictor, while controlling for other predictors in the model. Again, with all of the random effects statistically significant in Model 22, it could be concluded that a significant amount of variance within and between schools in South Africa remained to be explained.
Table 94. Parameter Estimates for the Combined School Background Model for South Africa
Parameter  Estimate  SE  p
Model 22
  Fixed:
    INT  181.65  53.91  .002
    Instructional limitation  0.43  19.65  .983
    Class size  8.41  21.03  .690
    School resources  55.75  25.34  .033
    Extra lessons  8.77  5.28  .103
    Instructional limitation × Extra lessons  0.87  2.31  .709
    Class size × Extra lessons  0.27  2.06  .896
    School resources × Extra lessons  4.58  2.46  .068
    Self-confidence  28.00  6.68  <.001
    Instructional limitation × Self-confidence  0.54  2.46  .828
    Class size × Self-confidence  1.47  3.11  .637
    School resources × Self-confidence  0.68  2.78  .809
    Valuing math  16.46  2.94  <.001
    Homework time  5.35  2.22  .016
    Home resources  6.75  1.93  .001
  Random:
    τ00  8505.23  92.22  <.001
    Extra lessons  55.68  7.46  .015
    Self-confidence  76.88  8.77  .023
    σ²  2297.42  47.93

As shown in Table 94, in comparing the amount of variance accounted for across the five recent models, Model 22 appeared to work more efficiently than Models 9, 19, and 20, but less efficiently than Model 21. Specifically, whereas the amount of explained variance between schools in Model 22 increased by 11% compared to Model 19, 12% compared to Model 9, and 13% compared to Model 20, it was reduced by 2% when compared to Model 21. Thus, Model 21 serves as the best school background model for predicting math achievement in South Africa.

Table 95. Comparison of Pseudo-R² between Model 22 and Previously Constructed Models 9 and 19–21 for South Africa
Compared Models  τ00  σ²
22 vs. 9  0.12  0.00
22 vs. 19  0.11  0.00
22 vs. 20  0.13  0.00
22 vs. 21  0.02  0.00
Final Model
With the intention of identifying the most efficient and parsimonious model for predicting eighth-grade math achievement in South Africa, Model 23 was built by adding to Model 9 all of the statistically significant level-2 predictors from the earlier combined models; it was then compared with the three combined models, Models 14, 18, and 22. It is worth noting that in Model 23, the slope variance of self-confidence in learning math was set to 0 because it was not statistically significant in the earlier combined models (i.e., Models 14 and 18).
Similar to Model 21, Model 23, with school resources as the only level-2 predictor, did not produce statistically significant cross-level interaction effects. All six of the remaining fixed effects, however, were statistically significant: (1) school resources (γ = 58.23, SE = 26.31, p = .031), (2) extra math lessons (γ = 8.52, SE = 2.40, p = .001), (3) student self-confidence in learning math (γ = 23.65, SE = 3.18, p < .001), (4) student valuing of math (γ = 16.32, SE = 2.81, p < .001), (5) time students spent on homework (γ = 5.13, SE = 2.24, p = .022), and (6) home resources (γ = 6.64, SE = 1.86, p < .001). Interestingly, in this model, the only slope that was allowed to vary (i.e., extra math lessons) was not statistically significant (τ = 53.12, SE = 7.29, p = .07). These results suggest that, except for extra math lessons, which were consistently identified as having an inverse relationship with math achievement, all of the predictors in the model tended to have significantly positive relationships with math achievement in South Africa. School resources appeared to have the strongest positive relationship with math achievement: a unit increase in school resources tended to be associated with an increase of 58.23 points in math achievement, whereas a unit increase in time students spent on homework was associated with an increase of only 5.13 points.
Table 96. Parameter Estimates for the Full Model for South Africa
Parameter  Estimate  SE  p
Model 23
  Fixed:
    INT  162.13  17.95  <.001
    School resources  58.23  26.31  .031
    Extra lessons  8.52  2.40  .001
    School resources × Extra lessons  4.95  2.53  .055
    Self-confidence  23.65  3.18  <.001
    Valuing math  16.32  2.81  <.001
    Homework time  5.13  2.24  .022
    Home resources  6.64  1.86  .001
  Random:
    τ00  7598.68  87.17  <.001
    Extra lessons  53.12  7.29  .066
    σ²  2329.56  48.27

Finally, as evident in Table 96, Model 23 appears to be the most efficient model for South Africa because it accounted for the largest amount of variance between schools. Specifically, Model 23 accounted for 7% more of the between-school variance when compared to Model 14, 11% more when compared to Model 22, and 18% more when compared to Model 18. Therefore, for South Africa, Model 23 serves as the best model for predicting math achievement.

Table 97. Comparison of Pseudo-R² between Model 23 and Previously Constructed Models 14, 18, and 22 for South Africa
Compared Model  τ00  σ²
14  0.07  0.01
18  0.18  0.02
22  0.11  0.01
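A two-level model of the Model 23 type, with a random intercept and a random slope for extra math lessons, can be sketched with statsmodels' MixedLM on simulated data. All variable names and simulated parameter values below are hypothetical stand-ins for the TIMSS variables, and MixedLM is a simplification of the HLM software actually used in the study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate students nested in schools, with a random intercept and a
# random slope for extra lessons (all parameter values hypothetical).
rng = np.random.default_rng(42)
n_schools, n_per = 40, 25
school = np.repeat(np.arange(n_schools), n_per)
u0 = rng.normal(0.0, 30.0, n_schools)    # school intercept deviations
u1 = rng.normal(0.0, 3.0, n_schools)     # school slope deviations
extra = rng.integers(0, 4, n_schools * n_per).astype(float)
conf = rng.normal(0.0, 1.0, n_schools * n_per)
score = (270.0 - 8.0 * extra + 24.0 * conf
         + u0[school] + u1[school] * extra
         + rng.normal(0.0, 48.0, n_schools * n_per))
df = pd.DataFrame({"school": school, "extra": extra,
                   "conf": conf, "score": score})

# Random intercept plus a random slope for extra lessons.
model = smf.mixedlm("score ~ extra + conf", df, groups=df["school"],
                    re_formula="~ extra")
result = model.fit()
print(result.fe_params)
```

The fixed-effect estimates should land near the simulated values (intercept near 270, a negative coefficient for extra lessons, a positive one for confidence); `result.cov_re` holds the estimated τ matrix and `result.scale` the level-1 residual variance σ², the quantities the pseudo-R² comparisons in this chapter are built from.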
Summary of Results
Missing Data
In all four countries included in this study (i.e., Canada, the USA, Egypt, and South Africa), missing data were present at both level 1 (i.e., the student level) and level 2 (i.e., the teacher and school level). With more missing data found at level 2 than at level 1, an evaluation of the extent to which the level-2 missing data were missing completely at random was conducted for each country. The results showed that the missing data mechanism in all countries was not completely at random. In addition, the amount of missing data tended to be larger in the developing countries (i.e., Egypt and South Africa) than in the developed countries (i.e., Canada and the USA). Using listwise deletion as the method of missing data treatment, the percentage of complete cases in Canada, the USA, Egypt, and South Africa was 73.74%, 55.12%, 26.44%, and 17.56%, respectively, and the number of schools or level-2 units for these countries was 271, 153, 69, and 52, respectively.
Univariate Analysis
Descriptively, considerable differences were observed in the weighted average math achievement across the four countries, from 272.66 for South Africa to 529.30 for Canada. Similarly, it was observed that in Canada and the USA, students had most basic resources at home for learning (i.e., calculator, desk, and computer) and rarely took extra math lessons, but in Egypt and South Africa, students reported having fewer resources at home for learning and took extra math lessons more frequently, at least once or twice a week. However, there were also some commonalities across these countries. For example, students tended to spend a modest amount of time on math homework and had an above-medium level of self-confidence in learning math and valuing of math. In terms of
opportunity to learn, the number topic ranked first, with 75.30% of students in South Africa, 95.85% of students in Canada, 99.13% of students in Egypt, and 99.65% of students in the U.S. having been taught the topic before the TIMSS assessment. The areas in which students had the least opportunity to learn appeared to be data (39.32% for South Africa and 61.02% for Egypt), algebra (59.11% for Canada), and geometry (70.10% for the U.S.).
At the teacher and school level, it was observed that schools in Canada and the U.S. tended to have smaller class sizes than schools in Egypt and South Africa. Approximately 87% of the schools in Canada and 90% of the schools in the U.S. had fewer than 33 students in a class, whereas in Egypt and South Africa, the majority of schools (i.e., more than 70% in Egypt and 87% in South Africa) had between 33 and 41+ students in a class. Another interesting difference across the four countries was found in the average math instructional hours per year. Whereas schools in Canada and South Africa varied greatly in average math instructional hours per year (i.e., 30 to 388 hours and 93 to 360 hours, respectively), there was a narrower range of average yearly math instructional hours in the U.S. and Egypt (i.e., 24.27 to 180 hours and 22.5 to 174.6 hours, respectively). It is also worth noting that whereas 100% of Egyptian teachers reported being prepared to teach math content, fewer than half of the teachers in Canada felt the same way. In the U.S. and South Africa, approximately two-thirds of the teachers indicated they were prepared to teach math content. Similarities observed across these countries included teachers' participation in various math-related professional development, their indication of readiness to teach math topics, the frequency of activities in math lessons, and the moderate amount of homework assigned to students.
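The country-level achievement figures above are weighted averages, computed with sampling weights so that the sampled students represent their national populations. A minimal sketch of the computation (the values are hypothetical, not TIMSS data):

```python
def weighted_mean(values, weights):
    """Sampling-weighted average: sum(w_i * x_i) / sum(w_i)."""
    total_weight = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total_weight

# Two students with unequal sampling weights:
print(weighted_mean([500.0, 300.0], [3.0, 1.0]))  # 450.0
```

With equal weights this reduces to the ordinary mean; unequal weights pull the estimate toward the students who represent larger population shares.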
Bivariate Analysis
An examination of weighted bivariate correlations among student-level predictors (i.e., gender, student self-confidence in learning math, valuing of math, time students spent on homework, extra math lessons, and home resources) suggested that these predictors were, at most, weakly correlated with each other. From Egypt and South Africa to Canada and the U.S., weak correlation coefficients among these predictors were observed, less than ±.40. Similarly, the results of unweighted bivariate correlations among level-2 predictors suggested that these variables were weakly related to each other (r less than ±.40), except for those that measure the same construct, such as the ready-to-teach-math-content variables, the opportunity-to-learn variables, and the activities-in-math-lessons variables. In these instances, the correlation coefficients among the variables tended to be moderate in strength (r less than .70) and positive in direction.
Evaluation of HLM Assumptions
For each of the four countries included in this study, visual analysis of both level-1 and level-2 random effects of the most efficient and parsimonious model was conducted. Results of these analyses suggested that in all four countries, the assumptions of normality and homogeneity of level-1 and level-2 random effects were satisfied.
HLM Analysis
Unconditional model
The HLM analysis started with the unconditional model, in which no level-1 or level-2 predictors were included. Across countries, the fixed effect for the intercept ranged from 267.57 (SE = 17.18) for South Africa to 527.33 (SE = 2.55) for
Canada. This suggests considerable differences in overall average school math scores across countries. Likewise, the intraclass correlation (ICC) differed from one country to another, with Canada and Egypt having the lowest ICC of .28 and South Africa having the highest ICC of .76. These data suggest that a modest to strong level of clustering of students between schools occurred across countries. In other words, approximately 28% to 76% of the total variance in math scores occurred between schools.
Research Question 1
To what extent are student background variables (i.e., gender, self-confidence in learning math, valuing of math, time on math homework, and tutoring in math) associated with TIMSS 2003 eighth-grade math scores in each country?
In order to answer this research question, each of the student background variables was first entered separately into the unconditional model to predict math achievement. Then, as a group, all of these variables were included in the combined student background model to predict math achievement. The results obtained from the combined student background models for Canada, the USA, Egypt, and South Africa suggest that this model worked differently across countries. Notably, in Canada, Egypt, and South Africa, all of the student background variables except gender were statistically significant predictors of math achievement, whereas in the U.S., only extra math lessons and student self-confidence in learning math showed statistically significant relationships with math achievement. Similarly, whereas the relationships between math achievement and valuing of math and time students spent on homework appeared to differ significantly across schools in the U.S., these relationships did not seem to vary
statistically across schools in Canada. Noteworthy was Egypt, where none of the slope variances was statistically significant, meaning that the relationships between math achievement and the student background predictors did not differ significantly across schools.
However, there were some commonalities across these countries. For example, gender as a main effect did not seem to contribute significantly to the prediction of math achievement in any of the four countries. Similarly, whereas student self-confidence in learning math was found to have the strongest, and positive, relationship with math achievement, extra math lessons tended to show an inverse relationship with math scores across countries. Put differently, across countries, the higher the level of self-confidence students expressed in learning math, the higher the math scores they tended to achieve. In contrast, the more frequently students took extra math lessons, the more poorly they seemed to perform in math. Finally, the inclusion of the student background variables in the unconditional model resulted in increased explained variance within schools in all of the countries, although the amount of the increase differed across countries.
Research Question 2
To what extent are home resources variables (i.e., availability of a calculator, computer, and desk for student use) associated with TIMSS 2003 eighth-grade math scores in each country?
The results of the home resources model suggested that, as a main effect, home resources was a statistically significant predictor of math achievement in all countries. With estimates for the fixed effect of home resources of 31.92 (SE = 3.65) for Egypt, 9.74 (SE = 1.86) for the USA, 7.55 (SE = 2.30) for South Africa, and 7.48 (SE =
1.76) for Canada, it could be inferred that the more home resources students had for learning math, the higher the math scores they tended to achieve. Interestingly, it was found that only in the U.S. did the relationship between home resources and math achievement vary significantly across schools. Examining the model from the standpoint of the variance accounted for, it appeared that the use of this model instead of the unconditional model yielded only a marginal increase in the explained within-school variance (from .4% in Canada to 5% in Egypt) but a considerable reduction of the between-school variance in Canada (up to 56%).
As the next step of model building, all of the student variables (i.e., gender, self-confidence in learning math, valuing of math, time on math homework, tutoring in math, and home resources) that were statistically significant in the earlier models were included in the overall combined model, or the foundational level-1 model, to predict math achievement. As a result of this model building strategy, the foundational level-1 model differed across countries. For example, for Egypt, the foundational level-1 model was a random-intercept-only model, but for the remaining countries, this model was a random coefficient model.
Different as the countries could be, the results of the foundational level-1 model suggested that all of the level-1 predictors (i.e., self-confidence in learning math, valuing of math, time on math homework, tutoring in math, and home resources) were statistically significant predictors of math achievement in Egypt and South Africa. For Canada, however, in the presence of the other variables in the model, only extra math lessons, student self-confidence in learning math, and valuing of math showed statistically significant relationships with math achievement. In the U.S., in addition to extra math lessons and
PAGE 303
self-confidence in learning math, home resources also appeared statistically significant in the prediction of math achievement.

One thing that was similar across countries was that using this model instead of the previous models produced considerable increases in either the within-school or the between-school variance explained. Specifically, the foundational level-1 model accounted for 37% more of the within-school variance than the home resources model in Canada, 26% more of the within-school variance than the home resources model in the U.S., 52% more of the between-school variance than the student background model in Egypt, and 20% more of the within-school variance than the home resources model in South Africa.

Research Question 3

To what extent are instructional variables (i.e., opportunity to learn, activities in math lessons, amount of homework assignment, and instructional time) associated with TIMSS 2003 eighth-grade math scores in each country?

When opportunity to learn math topics, activities in math lessons, amount of homework assignment, and average math instructional hours per year were added as level-2 predictors to the foundational level-1 model, the combined instructional practices model produced interesting results across countries. In Canada, this model yielded five statistically significant cross-level interaction effects: (1) average math instructional hours per year by gender, (2) opportunity to learn algebra by extra math lessons, (3) opportunity to learn geometry by extra math lessons, (4) opportunity to learn data by self-confidence in learning math, and finally, (5) opportunity to learn
measurement by self-confidence in learning math. The nature of these interactions was detailed in the HLM analysis for Canada.

In the U.S., the combined instructional practices model produced three statistically significant cross-level interaction effects: (1) opportunity to learn data by self-confidence in learning math, (2) opportunity to learn measurement by student valuing of math, and (3) opportunity to learn geometry by time students spent on homework. Again, the nature of these interactions can be found in the HLM analysis for the USA. For Egypt and South Africa, however, no statistically significant cross-level interaction effect was detected. In fact, in both countries, none of the level-2 main effects was statistically significant, either. In sum, using this model instead of the foundational level-1 model increased the explained variance between schools by 25% for the U.S., 30% for Canada, 6% for Egypt, and 16% for South Africa.

Research Question 4

To what extent are teacher-related variables (i.e., preparation to teach, ready to teach, and professional development) associated with TIMSS 2003 eighth-grade math scores in each country?

The combined teacher background model, with preparation to teach math content, ready to teach math topics, and math-related professional development as level-2 predictors, also yielded interesting results across countries. Specifically, in Canada, two statistically significant cross-level interaction effects were detected: student self-confidence in learning math by preparation to teach math content, and student self-confidence in learning math by math-related professional development. The nature of these interactions can be found in the HLM analysis for Canada. As for the U.S., one
statistically significant cross-level interaction effect was observed between ready to teach number and time students spent on homework. Details of this interaction were illustrated in the HLM analysis for the USA. In Egypt, math-related professional development was found to be the only level-2 predictor statistically significantly related to math achievement. Finally, for South Africa, neither cross-level interaction effects nor level-2 main effects for teacher background variables were found statistically significant. In terms of model fit, using this model instead of the foundational level-1 model increased the explained variance between schools by 2% for the U.S., 21% for Canada, 11% for Egypt, and 4% for South Africa.

Research Question 5

To what extent are school-related variables (i.e., class size, school resources for math instruction, and teacher perception of math instructional limitations due to student factors) associated with TIMSS 2003 eighth-grade math scores in each country?

The combined school background model, with class size, school resources for math instruction, and teacher perception of math instructional limitations due to student factors as level-2 predictors, produced different results across countries. Specifically, in Canada, no statistically significant cross-level interaction effect was detected, but there were two statistically significant level-2 main effects: class size for math instruction and teacher perception of math instructional limitations due to student factors. In the U.S., however, two statistically significant cross-level interaction effects were observed. One was between class size for math instruction and student self-confidence in learning math, and the other was between class size for math instruction and student valuing of math. Interestingly, for Egypt, no statistically significant level-2 main effect was
found. As for South Africa, school resources appeared to contribute statistically significantly to the prediction of math achievement.

An evaluation of the proportion of variance accounted for between schools suggested that, compared to the foundational level-1 model, the combined school background model was less efficient for the USA, equally efficient for Egypt, and more efficient for Canada and South Africa. Specifically, a reduction of 7% in the explained between-school variance was noted for the USA, whereas an increase of 16% was noted for Canada and 12% for South Africa.

Final Model

With the intention of identifying the most efficient and parsimonious model to predict eighth-grade math achievement in each country, the final model was built and compared with the three combined models (i.e., the instructional practices model, teacher background model, and school background model). It is also worth noting that this model included only fixed and random effects that were statistically significant in the earlier combined models. In Canada, this final model produced six statistically significant cross-level interaction effects: (1) average math instructional hours per year by gender, (2) opportunity to learn data by gender, (3) opportunity to learn algebra by extra math lessons, (4) opportunity to learn geometry by extra math lessons, (5) opportunity to learn data by self-confidence in learning math, and (6) preparation to teach math content by self-confidence in learning math. Details of these interactions can be found in the HLM analysis for Canada. In the U.S., two statistically significant cross-level interaction effects were observed: opportunity to learn geometry by time students spent on homework
and ready to teach number by time students spent on homework. Again, the nature of these interactions was discussed in detail in the HLM analysis for the USA. In Egypt, the only level-2 predictor included in the final model, math-related professional development, showed a statistically significant relationship with math achievement. Similarly, for South Africa, the only level-2 predictor included in the final model, school resources, contributed statistically significantly to the prediction of student math scores.

In comparison with the previously constructed combined models (i.e., the instructional practices model, teacher background model, and school background model), it appeared that the final model served as the best model for predicting math achievement in Canada and South Africa, whereas for the USA, the combined instructional practices model worked best, and for Egypt, the combined teacher background model served as the most efficient and parsimonious model for predicting math achievement.
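The "variance explained" comparisons reported throughout this chapter follow the usual proportional-reduction-in-variance (pseudo-R-squared) logic for two-level models: the change in a variance component when moving from a baseline model to a conditional model, divided by the baseline component. A minimal sketch of this arithmetic, using hypothetical variance components rather than the study's actual estimates:

```python
def pseudo_r2(var_baseline, var_conditional):
    """Proportional reduction in a variance component when moving
    from a baseline model to a conditional model."""
    return (var_baseline - var_conditional) / var_baseline

# Hypothetical within-school variance components (sigma squared)
# from an unconditional and a conditional two-level model:
sigma2_unconditional = 5000.0
sigma2_conditional = 3600.0

# Here the conditional model accounts for 28% of the within-school
# variance left unexplained by the unconditional model.
print(round(pseudo_r2(sigma2_unconditional, sigma2_conditional), 2))  # 0.28
```

The same computation applies to the between-school component; a negative value, as noted for the school background model in the USA, indicates that the added predictors reduced rather than improved the variance accounted for.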
CHAPTER FIVE

DISCUSSION

Purpose

The purpose of this study was to investigate correlates of math achievement in both developed and developing countries. Specifically, two developed countries, the United States and Canada, and two developing countries, Egypt and South Africa, that participated in the TIMSS 2003 eighth-grade math assessment were selected for this study. For each country, a series of two-level models was constructed to examine the extent to which student background, home resources, instructional practices, teacher background, and school background-related variables were associated with TIMSS 2003 eighth-grade math scores in the respective country. Ultimately, the overall goal of this study was to provide empirical evidence supporting the rationale that developing countries should not simply implement educational models that appear to work in developed countries; rather, they should develop and implement their own educational models based upon their countries' specific research findings. This is because countries differ in characteristics, and a model that works in a developed country might not work in a developing country (Bryan et al., 2007; Delaney, 2000; Watkins & Biggs, 2001).

Review of Method

This study used secondary data from the TIMSS 2003 to investigate the relationships between eighth-grade student math achievement and contextual and background factors in four countries: the United States, Canada, Egypt, and South Africa.
The dependent variable of the study was the overall math score, an IRT-based score, which was calculated by averaging five plausible subtopic scores: algebra, number, geometry, measurement, and data. The independent variables were five groups of factors: (1) student background, (2) home resources, (3) instructional practices, (4) teacher background, and (5) school background. Each of these groups of factors was precisely defined using existing variables in the TIMSS 2003 database. The study's theoretical framework, Carroll's (1963) model of school learning, as well as a review of related literature, guided the selection of these variables. In addition, subject matter experts were consulted and factor analysis was conducted to provide reliability evidence for the variables included in this study.

Because of the naturally occurring clusters in the TIMSS 2003 data, multilevel models were used to capture the relationships among the student-level and teacher/school-level variables and eighth-grade student math achievement. Specifically, for each country, 23 models were constructed to represent the student level (level 1) and the teacher/school level (level 2) in the TIMSS 2003 data. The HLM analysis started with an unconditional or baseline model in which none of the level-1 or level-2 variables was included. Next, each student background variable was entered separately into the baseline model to make Models 2-6. Model 7 was constructed with all the student background variables included in the baseline model. Next, home resources was added to the baseline model to make Model 8. Then, all of the variables that were statistically significant in Models 7 and 8 were added to the baseline model to make the foundational student model, or Model 9.
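The construction of the dependent variable described above can be illustrated with a short sketch that averages the five subtopic scores into one overall score. The dictionary keys below are hypothetical labels, not the actual TIMSS 2003 field names:

```python
# Minimal sketch: the overall math score as the mean of the five
# IRT-based subtopic scores. Key names are hypothetical, not the
# actual TIMSS 2003 variable labels.
SUBTOPICS = ("algebra", "number", "geometry", "measurement", "data")

def overall_math_score(record):
    """Average the five subtopic scores for one student."""
    return sum(record[topic] for topic in SUBTOPICS) / len(SUBTOPICS)

student = {"algebra": 512.0, "number": 498.0, "geometry": 505.0,
           "measurement": 490.0, "data": 520.0}
print(overall_math_score(student))  # 505.0
```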
At level 2, Models 10-13 were built by first entering each instructional practice variable individually into the foundational level-1 model. Then, as a group, all instructional practices variables were added to the foundational level-1 model to make the combined instructional practices model (Model 14). Likewise, teacher background models (Models 15-18) and school background models (Models 19-22) were constructed. Finally, Model 23 was built by adding all level-2 variables that were statistically significant in the earlier combined models (Models 14, 18, and 22) to the foundational level-1 model.

Prior to the analysis of the data, listwise deletion was employed as the missing data treatment, eliminating all cases with missing data at both the student level and the classroom/school level. This step was conducted because, in two-level HLM analyses, parameter estimates are computed from complete cases. In addition, because TIMSS 2003 utilized a complex sampling design, a normalized student sampling weight was used in all analyses of the data in order to obtain more accurate population estimates.

Results

Unconditional Model

The HLM analysis started with the unconditional model, in which no level-1 or level-2 predictors were included. Results from this model suggest that considerable differences in overall average school math scores existed across countries, from 267.57 (SE = 17.18) for South Africa to 527.33 (SE = 2.55) for Canada. Likewise, the intraclass correlation (ICC) differed from one country to another. Whereas the ICCs were relatively small (.28) for Canada and Egypt, they were relatively large for the United States (.51) and South Africa (.76). In other words, approximately 28% to 76% of the variance in math achievement occurred between schools in these countries.
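The ICC reported above is the share of total variance that lies between schools: tau-zero-zero divided by the sum of tau-zero-zero (the between-school intercept variance) and sigma squared (the within-school residual variance) from the unconditional model. A minimal sketch with hypothetical variance components, chosen only to reproduce two of the reported ICCs (the study reports the resulting ICCs, not the underlying components):

```python
def icc(tau00, sigma2):
    """Intraclass correlation from an unconditional two-level model:
    the proportion of total variance lying between clusters (schools)."""
    return tau00 / (tau00 + sigma2)

# Hypothetical variance components, not the study's actual estimates:
print(round(icc(2800.0, 7200.0), 2))  # 0.28 (e.g., Canada or Egypt)
print(round(icc(7600.0, 2400.0), 2))  # 0.76 (e.g., South Africa)
```

A high ICC, as in South Africa, signals that school membership alone accounts for most of the score variation, which is why level-2 predictors were more likely to show significant effects there.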
One possible explanation for the high ICCs observed in the United States and South Africa could be the sampling procedures implemented by the TIMSS 2003 (i.e., in each school, only one math class was sampled). Because students from the same school had similar opportunities to learn math and were taught by the same math teacher, their math scores were more homogeneous within schools than between schools. As a result, statistically significant effects for level-2 contextual and background factors were more likely to be found in subsequent models (i.e., the instructional practices, teacher background, and school background models).

Student Background Model

The student background model was developed to address the first research question regarding the extent to which eighth-grade math achievement was associated with gender, self-confidence in learning math, valuing of math, time students spent on homework, and tutoring in math in Canada, the USA, Egypt, and South Africa. The results suggested that this model worked differently across countries. Specifically, in Canada, Egypt, and South Africa, all of the student background variables except gender were statistically significant predictors of math achievement, whereas in the U.S., only extra math lessons and student self-confidence in learning math showed statistically significant relationships with math achievement. Similarly, whereas the relationships between math achievement and valuing of math and time students spent on homework appeared to differ significantly across schools in the U.S., these relationships did not seem to vary statistically across schools in Canada. Interestingly, in Egypt, none of the relationships between math achievement and student background predictors appeared to differ significantly across schools.
In connection with the existing literature (Beaton et al., 1996; Mullis et al., 2000; Peterson & Fennema, 1985; Rodriguez, 2004), these results do not support the view that a gender gap exists in math achievement. However, this study concurs with prior research findings (House, 2006; Koller, Baumert, & Schnabel, 2001; Pajares & Graham, 1999) in that student self-confidence in learning math contributes positively and significantly to the prediction of student math achievement. In fact, in this study, evidence was found in all four countries that, of all the student background variables, student self-confidence in learning math showed the strongest positive relationship with math achievement. As for tutoring/extra math lessons, this study's results were consistent with those of Papanastasiou (2002): the more frequently students took extra math lessons, the poorer they seemed to perform in math. It is worth noting that Papanastasiou (2002) looked at fourth-grade students in Cyprus, Hong Kong, and the United States, whereas this study focused on eighth-grade students in Canada, the United States, Egypt, and South Africa.

Home Resources Model

The home resources model aimed at answering the second research question regarding the association between eighth-grade math achievement and the availability of a calculator, desk, and computer for student use at home. Without controlling for other variables, home resources, as a sum composite of calculator, desk, and computer, was found to be a statistically significant predictor of math achievement in all four countries of the study: Canada, the USA, Egypt, and South Africa. Specifically, in these countries, the more home resources students had for learning math, the higher the math scores they tended to achieve.
Interestingly, however, after controlling for the other student background-related variables in the foundational level-1 model, home resources was no longer a statistically significant predictor of math achievement in Canada. In other words, in the presence of other student background variables, home resources remained a potential predictor of eighth-grade math achievement in three of the four countries: the United States, Egypt, and South Africa. These results were congruent with the findings from the recent study by Mullis, Martin, Gonzalez, and Chrostowski (2004). According to these researchers (Mullis et al., 2004), students from homes with a range of study aids, such as a computer, calculator, desk, and dictionary, tended to have higher math scores than their peers who did not have access to such resources at home.

Instructional Practices Model

The instructional practices model focused on the third research question concerning the relationships between eighth-grade math achievement and level-2 instructional practices-related predictors such as opportunity to learn math topics, activities in math lessons, amount of homework assignment, and average math instructional hours per year. This model produced interesting results across countries. Specifically, whereas there were five statistically significant cross-level interaction effects in Canada and three in the United States, none was detected in Egypt and South Africa. In fact, in these two developing countries, none of the level-2 main effects was statistically significant, either.

Unexpectedly, in Canada, it was found that the more math instructional hours students had, the poorer the math scores they tended to achieve. Although this pattern was observed for both gender groups, the negative effect seemed stronger for male than
female students. Similarly, increases in opportunity to learn geometry were associated with lower math scores for students who reported taking extra math lessons sometimes to almost every day. Likewise, increases in opportunity to learn data were related to poorer math achievement for all students, especially those with low self-confidence in learning math. However, the results also showed that with more opportunity to learn measurement, students with a higher level of self-confidence in learning math tended to perform better in math than their peers with a lower level of self-confidence in learning math. Similarly, increases in opportunity to learn algebra were associated with better math performance for those students who tended to take extra math lessons sometimes to almost every day.

Interestingly, in the United States, opportunity to learn measurement was found to have a positive relationship with math achievement, and this relationship was observed for all students regardless of their level of valuing math, whereas opportunity to learn data was found to be inversely associated with math achievement, a relationship observed for all students regardless of their level of self-confidence in learning math. Also, it was surprising to find that, for students who reported spending little time on homework, increases in opportunity to learn geometry were associated with higher math scores. However, for students who reported spending a medium to high amount of time on homework, higher opportunity to learn geometry was related to poorer math scores.

Clearly, these results agree with the existing literature (Baker, 1993; Westbury, 1992; Wiley & Yoon, 1995) that opportunity to learn math is associated with math achievement. Although it appears that different math topics (i.e., number, algebra,
measurement, geometry, and data) were related to math achievement in different ways, the findings of this study were supported by Wiley and Yoon (1995). Specifically, Wiley and Yoon (1995) concluded that students' exposure to different math topics, and the way in which these topics were covered, affected students' performance on tests.

Teacher Background Model

The fourth research question was addressed by the teacher background model, which centered on the relationship between math achievement and three level-2 teacher background-related variables: preparation to teach math content, ready to teach math topics, and math-related professional development. This model yielded interesting findings across countries. Whereas two statistically significant cross-level interactions were detected for Canada, only one was observed for the United States. Similarly, whereas math-related professional development was found to be significantly and positively related to math achievement in Egypt, neither cross-level interaction effects nor level-2 main effects for teacher background variables were found statistically significant in South Africa.

As expected, in Canada, students whose teachers reported being prepared to teach math content consistently performed better in math than their peers whose teachers reported not being prepared to teach math content. The achievement gap among these students, however, was large when the students expressed low self-confidence in learning math, and became narrower as their self-confidence in learning math increased. Interestingly, the data in Canada also showed an inverse relationship between math achievement and math-related professional development. That is, teachers' participation in more math-related professional development programs did not result in
higher math performance for their students, regardless of how self-confident the students were in learning math. However, among the students, those with a higher level of self-confidence in learning math consistently outperformed their peers with a lower level of self-confidence in learning math. The differences in their math achievement appeared to be largest when their teachers had five professional development programs and smallest when their teachers had none of these programs.

Unexpectedly, the data for the United States suggested that students whose teachers were very ready to teach number tended to achieve lower math scores than their peers whose teachers were ready to teach number. The achievement gap between these students, however, was small when their time on homework was low and became more substantial when the amount of time they spent on homework was high. These results should be interpreted with caution because they were based on 149 teachers who reported being very ready to teach and only 4 teachers who reported being ready to teach.

In linking with the existing literature, the observed positive relationship between math achievement and preparation to teach math content in Canada appeared to be congruent with evidence from recent research (Greenberg, Rhodes, Ye, & Stancavage, 2004; Grouws, Smith, & Sztajn, 2004). Using the data from NAEP 2000 for eighth-grade math, Greenberg, Rhodes, Ye, and Stancavage (2004) showed that students across all math ability levels (i.e., low, medium, and high) who had teachers with a major in math scored higher than their peers whose teachers had a major outside of their field of teaching. Likewise, the results observed for Canada and South Africa with regard to the relationship between math achievement and math-related professional development were
found to be consistent with the findings from prior research. Specifically, Wiley and Yoon (1995) examined the data from the California Learning Assessment System (CLAS) 1993 and found that Grades 4 and 8 students whose teachers were familiar with mathematics instruction assessment guides and participated in mathematics curriculum activities tended to perform significantly better than their peers whose teachers were not involved in those activities. For Grade 10 math achievement, however, little impact was noted despite the highest level of teachers' familiarity with math goals and standards and frequent participation in various instructional activities.

School Background Model

The school background model focused on the last research question, which examined the relationship between math achievement and level-2 school-related factors: class size, school resources for math instruction, and teacher perception of math instructional limitations due to student factors. The results suggested that whereas class size for math instruction and teacher perception of math instructional limitations due to student factors were statistically significant predictors of math achievement in Canada, and school resources was a significant predictor of math achievement in South Africa, none of the level-2 main effects was statistically significant for Egypt.

In the United States, however, two statistically significant cross-level interaction effects were observed. The nature of the interaction between class size for math instruction and student self-confidence in learning math indicated that for students who reported having high self-confidence in learning math, changes from a small class size (i.e., 1-24 students) to a large class size (i.e., 41+ students) tended to lower their math scores significantly. Conversely, for students who reported having low self-confidence in
learning math, increases in class size appeared to improve their math scores. Thus, the math achievement gap among eighth-grade students was most substantial when they learned math in small classes and became smaller when they learned math in large classes.

As for the interaction between class size for math instruction and student valuing of math, the data suggested that in schools with small classes (i.e., 1-24 students), students who reported a low level of valuing math tended to achieve higher math scores than their peers who reported medium or high levels of valuing math. This pattern, however, appears to reverse in schools with larger classes. That is, students with medium and high levels of valuing math tended to perform better than their peers who reported a low level of valuing math. Nevertheless, a similar trend was noted for all of the students, regardless of their level of valuing math: changes from smaller classes to bigger classes were associated with increased math scores.

There is some agreement between these results and those of prior research. For example, Baker et al. (2002) concluded that the effect of school resources on achievement within nations was not associated with national income levels. In this study, although school resources was positively related to math achievement in South Africa, it was not found statistically significant in Egypt. Similarly, the positive association found between class size and math achievement in Canada, and the interesting interaction patterns between class size and math achievement in relation to student self-confidence in learning math and student valuing of math in the United States, support the mixed results suggested by the existing literature about the relationship between class size and student learning (Cooper, 1989b; ERS, 1978; Nye, Hedges, & Konstantopoulos, 2001; Pong & Pallas, 2001; Wößmann & West, 2006). Specifically, Pong and Pallas (2001) and
Wößmann and West (2006) found that in Canada and in some European countries, such as Greece and Iceland, schools with larger classes tended to be associated with better performance in math than schools with smaller classes. In contrast, the results from a longitudinal study conducted by Nye, Hedges, and Konstantopoulos (2001) in the United States suggested that the benefits of small class size in terms of student achievement persisted significantly throughout the six years of the study.

Final Model

With the intention of identifying the most efficient and parsimonious model to predict eighth-grade math achievement in the four countries examined in this study, the final model was built and compared with the three combined models: the instructional practices model, teacher background model, and school background model. It is worth noting that the final model included only fixed and random effects that were statistically significant in earlier models. The results of these comparisons suggested that the final model served as the best model for predicting math achievement in Canada and South Africa, whereas for the USA, the instructional practices model worked best, and for Egypt, the combined teacher background model served as the most efficient and parsimonious model for predicting math achievement.

Limitations

Findings of this study must be interpreted in light of its limitations. First, the massive amount of missing data (i.e., from 26.26% for Canada to 82.44% for South Africa) due to the sampling procedures (i.e., multistage, stratified, and unequal probability), the assessment design (each student took only one test booklet, a subset of the entire set of test items), and nonresponses from participants could negatively affect the accuracy of this study's
results, especially since the missing data mechanism in each country was found to be not completely at random. Across the four countries, it was interesting to observe that the variable average math instructional hours per year presented the majority of the missing data. The amount of missingness for this variable alone was 19% for Canada, 31% for the United States, 61% for Egypt, and 70% for South Africa. It is important to note that this variable was created by the TIMSS 2003 using items from both the teacher survey and the school survey (see Table 6 for further details). It is possible that the way these items were designed and administered was associated with the amount of their missingness. Thus, it is worthwhile for future TIMSS studies to revisit the design of these items, as well as the administration of these surveys, in order to maximize participants' responses.

Second, because this is an analysis of secondary data, the study was limited to the data collected in the TIMSS 2003. As an example, the construct of home resources in this study was originally conceptualized to include four indicators: the availability of a calculator, dictionary, desk, and computer for student use at home. Interestingly, it was found that whereas the data for the variable dictionary were available for Canada, Egypt, and South Africa, they were not available for the USA. Specifically, all of the students in the USA had missing data on this variable. In an attempt to understand why such data were not available for the USA, the researcher contacted the TIMSS 2003 Office. Because further investigation of the problem was needed at the TIMSS 2003 Office, a decision was made to recalculate the composite variable home resources to include only three indicators: calculator, desk, and computer.
307 Also, although the TIMSS 2003 provide s rich background and contextual information, there were variables that the re searcher wished to have but the TIMSS 2003 database did not have. For example, there wa s no measure of student selfconfidence in learning individual math topics (i.e., number, algebra, meas urement, geometry and data) or studentsÂ’ past achievement or aptitude scores. For this reason, it was not possible to establish a direct connection between the va riables proposed by CarollÂ’s (1963) model of school learning and the variable s selected for this study. Third, the results of this study were ba sed on the relationships between student math achievement and contextual and bac kground factors which were selfreported by students, teachers, and school principals. Selfreported data, according to Rosenberg, Greenfield, and Dimick (2006), have several pot ential sources of bias such as selective memory (remembering or not remembering expe riences or events that occurred sometime in the past), telescoping (recalling events that occurred at one time as if they had occurred at another time), and social desirability (re porting behaviors that tend to be widely accepted by certain social groups rather than the behaviors actually exhibited by the respondents). Thus, it is important to interpret findings of this study with this limitation in mind. Last but not least, analysis of secondary data with a complex survey design often requires the use of sampling weights. Howe ver, at the time this dissertation was conducted, common statistical software such as SAS and SPSS did not have the ability to incorporate sampling weights into multilevel analysis. Although the latest HLM version 6.06 was able to apply sampling weights, some parts of the HLM analysis output (i.e., table of fixed effects with regular standard errors) were not produced. Thus, in this study,
tables of fixed effects with robust standard errors were reported. Additionally, as part of the HLM analysis, reliability estimates of the level-1 random coefficients were computed. However, reliability estimates of the random slopes tended to be lower than those of the intercepts. According to Raudenbush, Bryk, Cheong, and Congdon (2004), "Low reliabilities do not invalidate the HLM analysis. Very low reliabilities (e.g., < .10), often indicate that a random coefficient might be considered fixed in subsequent analyses" (p. 82).

Implications

Despite its limitations, this study contributes significantly to the field of educational research. First, this study made an attempt to reduce bias in international educational research by examining correlates of math achievement in both developed and developing countries. Second, by investigating correlates of math achievement within each country rather than between countries, this study produced country-specific research findings related to eighth-grade students' math achievement. For the national leaders, policy makers, and educators from these countries, especially developing countries, the results of this study could be insightful because they were carefully examined while controlling for the uniqueness of each country (i.e., student background, home resources, instructional practices, teacher background, and school resources). Thus, in each of the four countries, these results could be used directly to help evaluate and improve the effectiveness of their current mathematics educational systems.

Third, findings of this study provide strong evidence to support the view that countries differ and that an educational model that worked for a developed country might not work for a developing country. Fourth, with detailed descriptions of the research design,
method, and data analysis steps, this study serves as an example for other researchers to replicate, either with other countries that participated in the TIMSS 2003 assessment or with other international achievement databases. Finally, by outlining limitations of the TIMSS 2003 data, this study aimed to provide TIMSS researchers with suggestions for the refinement and improvement of future TIMSS studies.

Future Research

As a result of this work, a number of future research studies can be conducted. First of all, in an effort to reduce bias in international achievement research, this study can be replicated with other developed and developing countries that participated in the TIMSS 2003 study. Ideally, the new studies should include countries from different continents in order to maximize variance across countries. Second, future work can be conducted using different existing large-scale international achievement data such as PIRLS and PISA. Because different databases tend to provide different contextual and background variables, it will be interesting to find out whether similar country-specific findings result from the use of similar models with similar composite variables but different indicators. Third, it may also be worthwhile in future research to consider building country-specific achievement models for different subjects, such as science and reading, and for different grades, such as fourth, eighth, and twelfth. Finally, because the current study did not explain why certain relationships between math achievement and contextual and background factors were present or absent, further studies can be conducted within each country to gain a deeper understanding of the reasons underlying these relationships.
REFERENCES

Abu-Hilal, M. M. (2000). A structural model of attitudes toward school subjects, academic aspirations, and achievement. Educational Psychology, 20, 75–84.

Adler, C. (1996). Report of the public committee on long school day. Jerusalem: Center for the Study of Social Policy in Israel.

Akiba, M., LeTendre, G. K., & Scribner, J. P. (2007). Teacher quality, opportunity gap, and national achievement in 46 countries. Educational Researcher, 36(7), 369–387.

Allinder, R. M. (1994). The relationship between efficacy and the instructional practices of special education teachers and consultants. Teacher Education and Special Education, 17, 86–95.

Allison, P. D. (2001). Missing data. Thousand Oaks, CA: Sage.

Angrist, J. D., & Lavy, V. (2001). Does teacher training affect pupil learning? Evidence from matched comparisons in Jerusalem public schools. Journal of Labor Economics, 19(2), 343–369.

Ashton, P. T., & Webb, R. B. (1986). Making a difference: Teachers' sense of efficacy and student achievement. New York: Longman.

Baker, D. P. (1993). A rejoinder. Educational Researcher, 22(3), 25–26.

Baker, D. P., Fabrega, R., Galindo, C., & Mishook, J. (2004). Instructional time and national achievement: Cross-national evidence. Prospects, 34(3), 311–334.

Baker, D. P., Goesling, B., & LeTendre, G. K. (2002). Socioeconomic status, school quality, and national economic development: A cross-national analysis of the "Heyneman–Loxley effect" on mathematics and science achievement. Comparative Education Review, 46(3), 291–312.

Baker, D. P., & LeTendre, G. K. (2005). National differences, global similarities: World culture and the future of schooling. Stanford, CA: Stanford University Press.

Bankov, K., Mikova, D., & Smith, T. M. (2006). School quality and equity in central and eastern Europe: Assessing between-school variation in educational resources and mathematics and science achievement in Bulgaria. Prospects, 36(4), 448–473.

Beaton, A. E. (1998). Comparing cross-national student performance on TIMSS using different test items. International Journal of Educational Research, 29, 529–542.

Beaton, A. E., Mullis, I. V. S., Martin, M. O., Gonzalez, E. J., Kelly, D. L., & Smith, T. A. (1996). Mathematics achievement in the middle school years: IEA's Third International Mathematics and Science Study (TIMSS). Chestnut Hill, MA: Boston College.

Beck, E. L. (1999). Prevention and intervention programming: Lessons from an after-school program. Urban Review, 31(1), 107–124.

Bielinski, J., & Davison, M. L. (2001). A sex difference by item difficulty interaction in multiple-choice mathematics items administered to national probability samples. Journal of Educational Measurement, 38, 51–77.

Bolger, N., & Kellaghan, T. (1990). Method of measurement and gender differences in scholastic achievement. Journal of Educational Measurement, 27(2), 165–174.

Borko, H. (2004). Professional development and teacher learning: Mapping the terrain. Educational Researcher, 33(8), 3–15.

Borko, H., & Putnam, R. (1996). Learning to teach. In D. Berliner & R. Calfee (Eds.), Handbook of educational psychology (pp. 673–708). New York: Macmillan.

Bryan, C. A., Wang, T., Perry, B., Wong, N., & Cai, J. (2007). Comparison and contrast: Similarities and differences of teachers' views of effective mathematics teaching and learning from four regions. ZDM Mathematics Education, 39, 329–340.

Bush, G. W. (2001). President honors nation's leading math and science teachers. Retrieved September 20, 2007, from http://www.whitehouse.gov/news/releases/2001/03/2001030512.html

Carpenter, T. P., & Fennema, E. (1992). Cognitively guided instruction: Building on the knowledge of students and teachers. In W. Secada (Ed.), Researching educational reform: The case of school mathematics in the United States (pp. 457–470). Special issue of International Journal of Educational Research.

Carpenter, T. P., Blanton, M. L., Cobb, P., Franke, M. L., Kaput, J., & McClain, K. (2004). Scaling up innovative practices in mathematics and science. Madison: University of Wisconsin–Madison, National Center for Improving Student Learning and Achievement in Mathematics and Science.

Carroll, J. B. (1963). A model of school learning. Teachers College Record, 64, 722–733.

Carter, D. S. G., & O'Neill, M. H. (1995). International perspectives on educational reform and policy implementation. Washington, D.C.: Falmer Press.
Cogan, L. S., & Schmidt, W. H. (1999). An examination of instructional practices in six countries. In G. Kaiser, E. Luna, & I. Huntley (Eds.), International comparisons in mathematics education (pp. 68–85). Philadelphia, PA: Falmer Press.

Cohen, D. K., & Hill, H. C. (1998). Instructional policy and classroom performance: The mathematics reform in California (RR-39). Philadelphia: Consortium for Policy Research in Education.

Cohen, D. K., & Hill, H. C. (2000). Instructional policy and classroom performance: The mathematics reform in California. Teachers College Record, 102(2), 294–343.

Coladarci, T. (1992). Teachers' sense of efficacy and commitment to teaching. Journal of Experimental Education, 60, 323–337.

Coleman, J. S. (1975). Methods and results in the IEA studies of effects of school on learning. Review of Educational Research, 45(3), 355–386.

Coleman, J. S., Campbell, E. Q., Hobson, C. J., McPartland, J., Mood, A. M., Weinfeld, F. D., & York, R. L. (1966). Equality of educational opportunity. Washington, D.C.: U.S. Government Printing Office.

Comber, L. C., & Keeves, J. P. (1973). Science education in nineteen countries. International studies in evaluation, Vol. 1. Stockholm: Almqvist & Wiksell.

Cooper, H. M. (1989a). Homework. White Plains, NY: Longman.

Cooper, H. M. (1989b). Does reducing student-to-instructor ratios affect achievement? Educational Psychologist, 24(1), 78–98.

Cooper, H., Lindsay, J. J., Nye, B., & Greathouse, S. (1998). Relationships among attitudes about homework, amount of homework assigned and completed, and student achievement. Journal of Educational Psychology, 90(1), 70–83.

Cooper, H., & Valentine, J. C. (2001). Using research to answer practical questions about homework. Educational Psychologist, 36(3), 143–153.

Corcoran, T. B. (1995). Transforming professional development for teachers: A guide for state policymakers. Washington, DC: National Governors' Association.

Cosden, M., Morrison, G., Albanese, A. L., & Macias, S. (2001). When homework is not home work: After-school programs for homework assistance. Educational Psychologist, 36(3), 211–221.

Darling-Hammond, L. (2000). How teacher education matters. Journal of Teacher Education, 51(3), 166–173.
Davis, H., & Carr, M. (2001). Gender differences in mathematics strategy use: The influence of temperament. Learning and Individual Differences, 13, 83–95.

Delaney, P. (2000). Study finds Asian countries are best in math, science: Newest TIMSS data indicates little progress for US 8th graders. The Boston College Chronicle, 9(8). Retrieved June 15, 2007, from http://www.bc.edu/bc_org/rvp/pubaf/chronicle/v9/d14/timss.html

De Jong, R., Westerhof, K. J., & Creemers, B. P. M. (2000). Homework and student math achievement in junior high schools. Educational Research and Evaluation, 6, 130–157.

Educational Research Service. (1978). Class size: A summary of research. Arlington, VA: Educational Research Service.

Educational Research Service. (1980). Class size research: A critique of recent meta-analyses. Arlington, VA: Educational Research Service.

Eklof, H. (2007). Test-taking motivation and mathematics performance in TIMSS 2003. International Journal of Testing, 7(3), 311–326.

Encyclopedia of Educational Research. (1960). London: Collier-Macmillan Limited.

Ercikan, K., McCreith, T., & Lapointe, T. (2005). How are non-school related factors associated with participation and achievement in science? An examination of gender differences in Canada, the USA and Norway. In S. J. Howie & T. Plomp (Eds.), Contexts of learning mathematics and science: Lessons learned from TIMSS (pp. 211–225). Netherlands: Swets & Zeitlinger International Publishers.

Evertson, C. (1980). Differences in instructional activities in high and low achieving junior high classes. Paper presented at the annual meeting of the American Educational Research Association, Boston, MA.

Fennema, E., Carpenter, T. P., Jacobs, V. R., Franke, M. L., & Levi, L. W. (1998). A longitudinal study of gender differences in young children's mathematical thinking. Educational Researcher, 27(5), 6–11.

Ferguson, R. F. (1991). Paying for public education: New evidence on how and why money matters. Harvard Journal on Legislation, 28(2), 465–498.

Forshay, A. W., Thorndike, R. L., Hoytat, F., Pidgeon, D. A., & Walker, D. A. (1962). Educational achievement of thirteen-year-olds. Hamburg: UNESCO Institute for Education.
Frederick, W. C., & Walberg, H. J. (1980). Learning as a function of time. Journal of Educational Research, 73, 183–194.

Fuller, B. (1987). What school factors raise achievement in the third world? Review of Educational Research, 57(3), 255–292.

Gibson, S., & Dembo, M. (1984). Teacher efficacy: A construct validation. Journal of Educational Psychology, 76(4), 569–582.

Glass, G. V., & Smith, M. L. (1978). Meta-analysis of research on the relationship of class-size and achievement. San Francisco: Far West Laboratory for Educational Research and Development.

Glazerman, S., Mayer, D., & Decker, P. (2006). Alternative routes to teaching: The impacts of Teach for America on student achievement and other outcomes. Journal of Policy Analysis and Management, 25(1), 75–96.

Goals 2000. (1994). Goals 2000: Educate America Act of 1994. Pub. L. No. 103-227, Stat. 125, 108.

Gonzales, P. (2001). Using TIMSS to analyze correlates of performance variation in mathematics. Washington, D.C.: U.S. Department of Education, National Center for Education Statistics.

Greenberg, E., Rhodes, D., Ye, X., & Stancavage, F. (2004). Prepared to teach: Teacher preparation and student achievement in eighth-grade mathematics. American Institutes for Research. Paper presented at AERA 2004, San Diego, CA. Downloaded October 10, 2007, from http://www.air.org/news_events/documents/AERA2004PreparedtoTeach.pdf

Grouws, D., Smith, M., & Sztajn, P. (2004). The preparation and teaching practices of United States mathematics teachers: Grades 4 and 8. In P. Kloosterman & F. Lester, Jr. (Eds.), Results and interpretations of the 1990 through 2000 mathematics assessments of the National Assessment of Educational Progress (pp. 221–267). Reston, VA: National Council of Teachers of Mathematics.

Guskey, T. R. (1987). Context variables that affect measures of teacher efficacy. Journal of Educational Research, 81(1), 41–47.

Hahs-Vaughn, D. L. (2005). A primer for using and understanding weights with national datasets. Journal of Experimental Education, 73, 221–228.

Hambleton, R. K., Merenda, P. F., & Spielberger, C. D. (2005). Adapting educational and psychological tests for cross-cultural assessment. Mahwah, NJ: L. Erlbaum Associates.
Hamilton, L. S., McCaffrey, D. F., Stecher, B. M., Klein, S. P., Robyn, A., & Bugliari, D. (2003). Studying large-scale reforms of instructional practice: An example from mathematics and science. Educational Evaluation and Policy Analysis, 25(1), 1–29.

Harris, A. M., & Carlton, S. T. (1993). Patterns of gender differences on mathematics items on the Scholastic Aptitude Test. Applied Measurement in Education, 6, 137–151.

Heyneman, S. P., & Loxley, W. A. (1982). Influences on academic achievement across high and low income countries: A re-analysis of IEA data. Sociology of Education, 55(1), 13–21.

Heyneman, S. P., & Loxley, W. A. (1983). The effect of primary-school quality on academic achievement across twenty-nine high- and low-income countries. American Journal of Sociology, 88(6), 1162–1194.

Hiebert, J., & Stigler, J. W. (2000). A proposal for improving classroom teaching: Lessons from the TIMSS video study. Elementary School Journal, 101(1), 3–20.

Hilton, T. L. (1992). Using national data bases in educational research. Hillsdale, NJ: Lawrence Erlbaum.

Hofferth, S. L. (2005). Secondary data analysis in family research. Journal of Marriage and Family, 67, 891–907.

Hollins, E. R. (1995). Revealing the deep meaning of culture in school learning: Framing a new paradigm for teacher preparation. Action in Teacher Education, 17(1), 70–79.

House, J. D. (2006). Mathematics beliefs and achievement of elementary school students in Japan and the United States: Results from the Third International Mathematics and Science Study. The Journal of Genetic Psychology, 167(1), 31–45.

Hox, J. J. (1995). Applied multilevel analysis (2nd ed.). Amsterdam: TT-Publikaties.

Huffman, D., Thomas, K., & Lawrenz, F. (2003). Relationship between professional development, teachers' instructional practices, and the achievement of students in science and mathematics. School Science and Mathematics, 103(8), 378–387.

International Association for the Evaluation of Educational Achievement. (2007). Completed studies. Retrieved September 22, 2007, from http://www.iea.nl/completed_studies.html
Jacob, B. A., & Lefgren, L. (2004). The impact of teacher training on student achievement: Quasi-experimental evidence from school reform efforts in Chicago. The Journal of Human Resources, 39(1), 50–79.

Johnson, C. C., Kahle, J. B., & Fargo, J. D. (2007). A study of the effect of sustained, whole-school professional development on student achievement in science. Journal of Research in Science Teaching, 44(6), 775–786.

Keith, T. Z., & Cool, V. A. (1992). Testing models of school learning: Effects of quality of instruction, motivation, academic coursework, and homework on academic achievement. School Psychology Quarterly, 7, 207–226.

[Could not retrieve full text online] Educational outcomes of tutoring: A meta-analysis of findings.

Kennedy, M. M. (1998). Form and substance in inservice teacher education (Research Monograph No. 13). Arlington, VA: National Science Foundation.

Kiecolt, K. J., & Nathan, L. E. (1985). Secondary analysis of survey data. Beverly Hills: Sage.

Koller, O., Baumert, J., & Schnabel, K. (2001). Does interest matter? The relationship between academic interest and achievement in mathematics. Journal for Research in Mathematics Education, 32(5), 448–470.

Kreft, I. G. G. (1996). Are multilevel techniques necessary? An overview, including simulation studies. Unpublished manuscript, California State University, Los Angeles, CA.

Kreft, I., & de Leeuw, J. (2004). Introducing multilevel modeling. London: Sage Publications.

Kromrey, J. D., & Hines, C. V. (1994). Nonrandomly missing data in multiple regression: An empirical comparison of common missing-data treatments. Educational and Psychological Measurement, 54, 573–593.

Lee, J. (2007). Two worlds of private tutoring: The prevalence and causes of after-school mathematics tutoring in Korea and the United States. Teachers College Record, 109, 1207–1234.

Leung, F. K. S. (2002). Behind the high achievement of East Asian students. Educational Research and Evaluation, 8, 87–108.

Luke, D. A. (2004). Multilevel modeling. Series: Quantitative Applications in the Social Sciences. Thousand Oaks, CA: Sage Publications.
Little, J. W. (1993). Teachers' professional development in a climate of educational reform. Educational Evaluation and Policy Analysis, 15(2), 129–151.

Loucks-Horsley, S., Hewson, P. W., Love, N., & Stiles, K. E. (1998). Designing professional development for teachers of science and mathematics. Thousand Oaks, CA: Corwin Press.

Luyten, H., Visscher, A., & Witziers, B. (2005). School effectiveness research: From a review of the criticism to recommendations for further development. School Effectiveness and School Improvement, 16(3), 249–279.

Ma, X., & Kishor, N. (1997). Assessing the relationship between attitude toward mathematics and achievement in mathematics: A meta-analysis. Journal for Research in Mathematics Education, 28(1), 26–47.

Marsh, H. W., Hau, K., & Kong, C. (2002). Multilevel causal ordering of academic self-concept and achievement: Influence of language of instruction (English compared with Chinese) for Hong Kong students. American Educational Research Journal, 39, 727–763.

Martin, M. O. (2005). TIMSS 2003 user guide for the international database. Chestnut Hill, MA: TIMSS & PIRLS International Study Center.

Moriarty, H. J., Deatrick, J. A., Mahon, M. M., Feetham, S. L., Carroll, R. M., Shepard, M. P., & Orsi, A. J. (1999). Issues to consider when choosing and using large national databases for research of families. Western Journal of Nursing Research, 21, 143–153.

Mosteller, F., & Moynihan, D. P. (Eds.). (1972). On equality of educational opportunity. New York: Vintage Books.

Muijs, D., & Reynolds, D. (2003). The effectiveness of the use of learning support assistants in improving the mathematics achievement of low-achieving pupils in primary school. Educational Research, 45(3), 219–230.

Mullis, S. R. (2001). Dealing with missing data. Outcome Oriented: The Online Newsletter of the Center for Outcome Measurement in Brain Injury (COMBI). Retrieved July 14, 2006, from www.tbims.org/combi

Mullis, I. V. S., Martin, M. O., Gonzalez, E. J., & Chrostowski, S. J. (2004). TIMSS 2003 international mathematics report: Findings from IEA's Trends in International Mathematics and Science Study at the fourth and eighth grades. Chestnut Hill, MA: TIMSS & PIRLS International Study Center.

Muthen, B., Huang, L., Jo, B., Khoo, S., Goff, G. N., Novak, J. R., & Shih, J. C. (1995). Opportunity-to-learn effects on achievement: Analytical aspects. Educational Evaluation and Policy Analysis, 17(3), 371–403.
National Center for Education Statistics. (2007). Surveys and programs. Retrieved June 15, 2007, from http://nces.ed.gov/surveys/international/IntlIndicators/index.asp

National Commission on Excellence in Education. (1983). A nation at risk: The imperative for educational reform. Washington, D.C.: United States Department of Education.

No Child Left Behind. (2001). No Child Left Behind Act of 2001. Pub. L. No. 107-110. Washington, D.C.: U.S. Department of Education.

Nye, B., Hedges, L. V., & Konstantopoulos, S. (2001). The long-term effects of small classes in early grades: Lasting benefits in mathematics achievement at grade 9. The Journal of Experimental Education, 69(3), 245–257.

Organization for Economic Cooperation and Development. (2001). Knowledge and skills for life: First results from the OECD Programme for International Student Assessment. Paris, France: OECD.

Organization for Economic Cooperation and Development. (2007). The Programme for International Student Assessment. Retrieved September 20, 2007, from http://www.pisa.oecd.org

O'Leary, M. (2002). Stability of country rankings across item formats in the Third International Mathematics and Science Study. Educational Measurement: Issues and Practice, 21(4), 27–38.

Pajares, F., & Graham, L. (1999). Self-efficacy, motivation constructs, and mathematics performance of entering middle school students. Contemporary Educational Psychology, 24, 124–139.

Papanastasiou, C. (2000). Effects of attitudes and beliefs on mathematics achievement. Studies in Educational Evaluation, 26, 27–42.

Papanastasiou, C. (2002). School, teaching and family influence on student attitudes toward science: Based on TIMSS data for Cyprus. Studies in Educational Evaluation, 28, 71–86.

Parsad, B., Lewis, L., Farris, E., & Greene, B. (2000). Teacher preparation and professional development: 2000. U.S. Department of Education, Office of Educational Research and Improvement.

Peterson, P. L., & Fennema, E. (1985). Effective teaching, student engagement in classroom activities, and sex-related differences in learning mathematics. American Educational Research Journal, 22(3), 309–335.
Phan, H. T., & Kromrey, J. D. (2007). Missing data in large-scale assessments: A confirmatory factor analysis of the TIMSS 2003 eighth-grade mathematics scores. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.

Pianta, R. C., Belsky, J., Houts, R., & Morrison, F. (2007). Opportunities to learn in America's elementary classrooms. Science, 315(5820), 1795–1796.

Pong, S., & Pallas, A. (2001). Class size and eighth-grade math achievement in the United States and abroad. Educational Evaluation and Policy Analysis, 23(3), 251–273.

Rao, N., Moely, B. E., & Sachs, J. (2000). Motivational beliefs, study strategies, and mathematics attainment in high- and low-achieving Chinese secondary school students. Contemporary Educational Psychology, 25, 287–316.

Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Thousand Oaks: Sage.

Raudenbush, S., Bryk, A., Cheong, Y. F., & Congdon, R. (2004). HLM6: Hierarchical linear and nonlinear modeling. Lincolnwood, IL: Scientific Software International, Inc.

Reynolds, A. J. (1991). The middle schooling process: Influences on science and mathematics achievement from the Longitudinal Study of American Youth. Adolescence, 26, 132–157.

Reys, B. J., Reys, R. E., Barnes, D., Beem, J., & Papick, I. (1997). Collaborative curriculum investigation as a vehicle for teacher enhancement and mathematics curriculum reform. School Science and Mathematics, 87(5), 253–259.

Riddell, A. R. (1997). Assessing designs for school effectiveness research and school improvement in developing countries. Comparative Education Review, 41(2), 178–204.

Rodriguez, M. C. (2004). The role of classroom assessment in student performance on TIMSS. Applied Measurement in Education, 17(1), 1–24.

Rosenberg, A. L., Greenfield, M. V. H., & Dimick, J. B. (2006). Secondary data analysis: Using existing data to answer clinical questions. In J. T. Wei (Ed.), Clinical research methods for the surgeons. Totowa, NJ: Humana Press.

Ross, J. A., Bruce, C., & Hogaboam-Gray, A. (2006). The impact of a professional development program on student achievement in grade 6 mathematics. Journal of Mathematics Teacher Education, 9, 551–577.
Ross, J. G., Saavedra, P. J., Shur, G. H., Winters, F., & Felner, R. D. (1992). The effectiveness of an after-school program for primary grade latchkey students on precursors of substance abuse. Journal of Community Psychology, 22–38.

Roth, P. (1994). Missing data: A conceptual overview for applied psychologists. Personnel Psychology, 47, 537–560.

SAS Institute Inc. (2005). SAS/STAT user's guide, version 9.13. Cary, NC: SAS Institute Inc.

SAS/STAT Software Enhancement. (2006). SAS/STAT software enhancement. Retrieved July 14, 2006, from http://support.sas.com/rnd/app/da/new/dami.html

Schmidt, W. H., & McKnight, C. C. (1995). Surveying educational opportunity in mathematics and science: An international perspective. Educational Evaluation and Policy Analysis, 17(3), 337–353.

Schmidt, W., McKnight, C., Valverde, G., Houang, R., & Wiley, D. (1997). Many visions, many aims: A cross-national investigation of curricular intentions in school mathematics. Norwell, MA: Kluwer Academic.

Shen, C. (2002). Revisiting the relationship between students' achievement and their self-perceptions: A cross-national analysis based on TIMSS 1999 data. Assessment in Education, 9(2), 161–184.

Shen, C., & Pedulla, J. J. (2000). The relationship between students' achievement and their self-perceptions of competence and rigour of mathematics and science: A cross-national analysis. Assessment in Education, 7, 237–253.

Shulman, L. S., & Shulman, J. H. (2004). How and what teachers learn: A shifting perspective. Journal of Curriculum Studies, 36(2), 257–271.

Singh, K., Granville, M., & Dika, S. (2002). Mathematics and science achievement: Effects of motivation, interest, and academic engagement. The Journal of Educational Research, 95(6), 323–332.

Slavin, R. E. (1989). Class size and student achievement: Small effects of small classes. Educational Psychologist, 24(1), 99–110.

Smith, D. C., & Neale, D. C. (1991). The construction of subject-matter knowledge in primary science teaching. In J. Brophy (Ed.), Advances in research on teaching: Vol. 2. Teachers' knowledge of subject matter as it relates to their teaching practice (pp. 18–243). Greenwich, CT: JAI Press.
Staub, F. C., & Stern, E. (2002). The nature of teachers' pedagogical content beliefs matters for students' achievement gains: Quasi-experimental evidence from elementary mathematics. Journal of Educational Psychology, 94, 344–355.

Stigler, J. W., Gonzales, P., Kawanaka, T., Knoll, S., & Serrano, A. (1999). The TIMSS videotape classroom study: Methods and findings from an exploratory research project on eighth-grade mathematics instruction in Germany, Japan, and the United States (NCES 1999-074). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Stigler, J. W., Gallimore, R., & Hiebert, J. (2000). Using video surveys to compare classrooms and teaching across cultures: Examples and lessons from the TIMSS video studies. Educational Psychologist, 35, 87–100.

Stigler, J. W., Lee, S. Y., & Stevenson, H. W. (1987). Mathematics classrooms in Japan, Taiwan, and the United States. Child Development, 58, 1272–1285.

Stipek, D. J., Givvin, K. B., Salmon, J. M., & MacGyvers, V. L. (2001). Teachers' beliefs and practices related to mathematics instruction. Teaching and Teacher Education, 17, 213–226.

Suter, L. E. (2000). Is student achievement immutable? Evidence from international studies on schooling and student achievement. Review of Educational Research, 70(4), 529–545.

The World Bank. (2003). 2003 world development indicators. Washington, DC: The World Bank.

The World Bank. (2007). World Bank list of economies (July 2007). Retrieved November 30, 2007, from http://web.worldbank.org/wbsite/external/datastatistics/0,,contentmdk:20420458~menupk:64133156~pagepk:64133150~pipk:64133175~thesitepk:239419,00.html

TIMSS. (1995). The Trends in International Mathematics and Science Study (TIMSS 1995). Retrieved June 15, 2007, from http://isc.bc.edu/timss1995.html

TIMSS. (1999). Mathematics benchmarking report: TIMSS 1999. Retrieved June 15, 2007, from http://isc.bc.edu/timss1999b/pdf/TB99_Math_Intro.pdf

TIMSS. (2003). TIMSS 2003 technical report. Retrieved June 15, 2007, from http://timss.bc.edu/timss2003i/technicalD.html

TIMSS. (2007). The Trends in International Mathematics and Science Study (TIMSS 2007). Retrieved June 15, 2007, from http://timss.bc.edu/TIMSS2007/index.html
Trautwein, U., Koller, O., Schmitz, B., & Baumert, J. (2002). Do homework assignments enhance achievement? A multilevel analysis in 7th-grade mathematics. Contemporary Educational Psychology, 27, 26–50.

Trautwein, U., & Koller, O. (2003). The relationship between homework and achievement: Still much of a mystery. Educational Psychology Review, 15(2), 115–145.

Trautwein, U. (2007). The homework–achievement relation reconsidered: Differentiating homework time, homework frequency, and homework effort. Learning and Instruction, 17, 372–388.

Tschannen-Moran, M., & Hoy, A. W. (2001). Teacher efficacy: Capturing an elusive construct. Teaching and Teacher Education, 17, 783–805.

Walberg, H. J., & Paschal, R. A. (1995). Homework. In L. W. Anderson (Ed.), International encyclopedia of teaching and teacher education (pp. 268–271). Oxford: Elsevier.

Wang, J. (2001). TIMSS primary and middle school data: Some technical concerns. Educational Researcher, 30(6), 17–21.

Watkins, D. A., & Biggs, J. B. (2001). Teaching the Chinese learner: Psychological and pedagogical perspectives. Hong Kong: Comparative Education Research Centre, The University of Hong Kong; Melbourne, Australia: The Australian Council for Educational Research.

Werf, G. V. D., Creemers, B., Jong, R. D., & Klaver, E. (2000). Evaluation of school improvement through an educational effectiveness model: The case of Indonesia's PEQIP Project. Comparative Education Review, 44(3), 329–355.

Westbury, I. (1992). Comparing American and Japanese achievement: Is the United States really a low achiever? Educational Researcher, 21(4), 18–24.

Wester, A., & Henriksson, W. (2000). The interaction between item format and gender differences in mathematics performance based on TIMSS data. Studies in Educational Evaluation, 26, 79–90.

Wiggins, R. A., & Follo, E. J. (1999). Development of knowledge, attitudes, and commitment to teach diverse student populations. Journal of Teacher Education, 50(2), 94–105.

Wiley, D. E., & Yoon, B. (1995). Teacher reports on opportunity to learn: Analyses of the 1993 California Learning Assessment System (CLAS). Educational Evaluation and Policy Analysis, 17(3), 355–370.
Wobmann, L. (2003). School resources, educational institutions and student performance: The international evidence. Oxford Bulletin of Economics and Statistics, 65(2).

Wobmann, L., & West, M. (2006). Class-size effects in school systems around the world: Evidence from between-grade variation in TIMSS. European Economic Review, 50, 695-736.

Wright, S. P., Horn, S. P., & Sanders, W. L. (1997). Teacher and classroom context effects on student achievement: Implications for teacher evaluation. Journal of Personnel Evaluation in Education, 11, 57-67.

Yair, G. (2000). Not just about time: Instructional practices and productive time in school. Educational Administration Quarterly, 36(4), 485-512.
APPENDICES
Appendix A: List of Countries

Table A1. List of countries participating in TIMSS 2003 eighth-grade assessment by country status

Developing Countries                          Developed Countries
Name                  No. schools  No. students    Name             No. schools  No. students
Armenia                   149       5,699      Australia               207       4,791
Bulgaria                  164       4,117      Bahrain                  67       4,199
Botswana                  146       5,150      Belgium                 144       4,970
Chile                     195       6,377      Canada                  361       8,628
Egypt                     217       7,094      Chinese Taipei          150       5,379
Estonia                   151       4,040      Cyprus                   59       4,009
Hungary                   155       3,302      United Kingdom          215       6,346
Indonesia                 150       5,762      Hong Kong SAR           125       4,972
Iran                      181       4,942      Israel                  146       4,318
Jordan                    140       4,489      Italy                   171       4,278
Latvia                    140       3,629      Japan                   146       4,856
Lebanon                   152       3,814      Korea                   149       5,309
Lithuania                 143       4,572      Netherlands             130       3,036
Malaysia                  150       5,314      New Zealand             169       3,800
Macedonia                 147       3,893      Norway                  138       4,133
Morocco                   131       2,873      Singapore               164       6,018
Moldova                   149       4,033      Slovenia                174       3,578
Philippines               137       6,917      Sweden                  159       4,255
Romania                   148       4,103      United States           286      11,100
Russian Federation        214       4,667      Spain                   115       2,514
Saudi Arabia              155       4,295
Serbia                    149       4,295
South Africa              255       8,840
Syrian Arab Republic      Complete data not available
Slovak Republic           179       4,215
Tunisia                   150       4,931
Palestine                 145       5,357

Note: The classification of country status was based on the World Bank's (2003) World Development Indicators (The World Bank, 2003). According to the World Bank's (2007) list of economies, developing countries refer to countries with low-income and middle-income economies, whereas developed countries refer to countries with high-income economies. The use of these terms is for convenience; it is not intended to imply that all economies in a group are experiencing similar development or that developed economies have reached a preferred or final stage of development (The World Bank, 2007).
Appendix B: Items Used to Create Composite Variable Opportunity to Learn

Items used to create composite variable opportunity to learn

The following list includes the main topics addressed by the TIMSS mathematics test. Choose the response that best describes when students in the TIMSS class have been taught each topic. If a topic was taught half this year and half before this year, please choose "Mostly taught this year."

1 = mostly taught before this year
2 = mostly taught this year
3 = not yet taught or just introduced

A. Number
a) Whole numbers including place value, factorization, and the four operations
b) Computations, estimations, or approximations involving whole numbers
c) Common fractions including equivalent fractions, and ordering of fractions
d) Decimal fractions including place value, ordering, rounding, and converting to common fractions (and vice versa)
e) Representing decimals and fractions using words, numbers, or models (including number lines)
f) Computations with fractions
g) Computations with decimals
h) Integers including words, numbers, or models (including number lines), ordering integers, addition, subtraction, multiplication, and division with integers
i) Ratios (equivalence, division of a quantity by a given ratio)
j) Conversion of percents to fractions or decimals, and vice versa
Appendix B: (Continued)

B. Algebra
a) Numeric, algebraic, and geometric patterns or sequences (extension, missing terms, generalization of patterns)
b) Sums, products, and powers of expressions containing variables
c) Simple linear equations and inequalities, and simultaneous (two variables) equations
d) Equivalent representations of functions as ordered pairs, tables, graphs, words, or equations
e) Proportional, linear, and nonlinear relationships (travel graphs and simple piecewise functions included)
f) Attributes of a graph such as intercepts on axes, and intervals where the function increases, decreases, or is constant

C. Measurement
a) Standard units for measures of length, area, volume, perimeter, circumference, time, speed, density, angle, mass/weight
b) Relationships among units for conversions within systems of units, and for rates
c) Use standard tools to measure length, weight, time, speed, angle, and temperature
d) Estimations of length, circumference, area, volume, weight, time, angle, and speed in problem situations (e.g., circumference of a wheel, speed of a runner)
e) Computations with measurements in problem situations (e.g., add measures, find average speed on a trip, find population density)
f) Measurement formulas for perimeter of a rectangle, circumference of a circle, areas of plane figures (including circles), surface area and volume of rectangular solids, and rates
Appendix B: (Continued)

g) Measures of irregular or compound areas (e.g., by using grids or dissecting and rearranging pieces)
h) Precision of measurements (e.g., upper and lower bounds of a length reported as 8 centimeters to the nearest centimeter)

D. Geometry
a) Angles: acute, right, straight, obtuse, reflex, complementary, and supplementary
b) Relationships for angles at a point, angles on a line, vertically opposite angles, angles associated with a transversal cutting parallel lines, and perpendicularity
c) Properties of angle bisectors and perpendicular bisectors of lines
d) Properties of geometric shapes: triangles and quadrilaterals
e) Properties of other polygons (regular pentagon, hexagon, octagon, decagon)
f) Construct or draw triangles and rectangles of given dimensions
g) Pythagorean theorem (not proof) to find length of a side
h) Congruent figures (triangles, quadrilaterals) and their corresponding measures
i) Similar triangles and recall their properties
j) Cartesian plane: ordered pairs, equations, intercepts, intersections, and gradient
k) Relationships between two-dimensional and three-dimensional shapes
l) Line and rotational symmetry for two-dimensional shapes
m) Translation, reflection, rotation, and enlargement

E. Data
a) Organizing a set of data by one or more characteristics using a tally chart, table, or graph
Appendix B: (Continued)

b) Sources of error in collecting and organizing data (e.g., bias, inappropriate grouping)
c) Data collection methods (e.g., survey, experiment, questionnaire)
d) Drawing and interpreting graphs, tables, pictographs, bar graphs, pie charts, and line graphs
e) Characteristics of data sets including mean, median, range, and shape of distribution (in general terms)
f) Interpreting data sets (e.g., draw conclusions, make predictions, and estimate values between and beyond given data points)
g) Evaluating interpretations of data with respect to correctness and completeness of interpretation
h) Simple probability including using data from experiments to estimate probabilities for favorable outcomes
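A composite such as opportunity to learn aggregates a teacher's responses across the topic items above, one composite per content area. As a minimal sketch only: the averaging and the reverse-scoring shown here (so that higher values mean more coverage) are illustrative assumptions, not the scoring rule documented in the dissertation, and the sample responses are hypothetical.

```python
# Sketch of forming an opportunity-to-learn composite from topic items.
# Assumptions (not taken from the dissertation): items are averaged, and
# the 1-3 responses are reverse-scored so higher values = more coverage.

def otl_composite(responses):
    """Average of reverse-scored topic responses (1 <-> 3, 2 stays 2)."""
    reversed_scores = [4 - r for r in responses]
    return sum(reversed_scores) / len(reversed_scores)

# Hypothetical teacher responses for the ten Number topics (items a-j)
number_items = [1, 1, 2, 2, 1, 2, 2, 3, 2, 3]
print(otl_composite(number_items))  # prints 2.1
```

Each teacher would then contribute one such score per content area (number, algebra, measurement, geometry, data), matching the five opportunity-to-learn composites in Table A2.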
Appendix C: Items Used to Create Composite Variable Ready to Teach Math Topics

Items used to create composite variable ready to teach math topics

Considering your training and experience in both mathematics content and instruction, how ready do you feel you are to teach each topic at eighth grade? (3-point scale: 1 = very ready, 2 = ready, 3 = not ready)

A. Number
1) Representing decimals and fractions using words, numbers, or models (including number lines)
2) Integers including words, numbers, or models (including number lines); ordering integers; and addition, subtraction, multiplication, and division with integers

B. Algebra
1) Numeric, algebraic, and geometric patterns or sequences (extension, missing terms, generalization of patterns)
2) Simple linear equations and inequalities, and simultaneous (two variables) equations
3) Equivalent representations of functions as ordered pairs, tables, graphs, words, or equations
4) Attributes of a graph such as intercepts on axes, and intervals where the function increases, decreases, or is constant

C. Measurement
1) Estimations of length, circumference, area, volume, weight, time, angle, and speed in problem situations (e.g., circumference of a wheel, speed of a runner)
2) Computations with measurements in problem situations (e.g., add measures, find average speed on a trip, find population density)
Appendix C: (Continued)

3) Measures of irregular or compound areas (e.g., by using grids or dissecting and rearranging pieces)
4) Precision of measurements (e.g., upper and lower bounds of a length reported as 8 centimeters to the nearest centimeter)

D. Geometry
1) Pythagorean theorem (not proof) to find length of a side
2) Congruent figures (triangles, quadrilaterals) and their corresponding measures
3) Cartesian plane: ordered pairs, equations, intercepts, intersections, and gradient
4) Translation, reflection, rotation, and enlargement

E. Data
1) Sources of error in collecting and organizing data (e.g., bias, inappropriate grouping)
2) Data collection methods (e.g., survey, experiment, questionnaire)
3) Characteristics of data sets including mean, median, range, and shape of distribution (in general terms)
4) Simple probability including using data from experiments to estimate probabilities for favorable outcomes
Appendix D: Reliabilities of Composite Variables

Table A2. Factor pattern coefficients of items used to create composite variables (factor pattern coefficient in parentheses after each item; Cronbach's alpha follows each composite variable name)

Student self-confidence (TIMSS derived variable), Cronbach's alpha = 0.73
    I learn things quickly in math (0.65)
    I usually do well in math (0.65)
    Math is more difficult for me than for many of my classmates (0.56)
    Math is not one of my strengths (0.64)

Student valuing math (TIMSS derived variable), Cronbach's alpha = 0.79
    I need math to learn other school subjects (0.69)
    I need to do well in math to get the job I want (0.65)
    I would like a job that involved using math (0.64)
    I need to do well in math to get into the university of my choice (0.62)
    I would like to take more math in school (0.59)
    I think learning math will help me in my daily life (0.46)
    I enjoy learning math (0.50)

Home resources for learning, Cronbach's alpha = 0.44
    Desk (0.44)
    Calculator (0.39)
    Computer (0.42)

Opportunity to learn number (TIMSS derived variable), Cronbach's alpha = 0.91
    Whole numbers including place value, factorization, and the four operations (0.66)
    Computations, estimations, or approximations involving whole numbers (0.72)
    Common fractions including equivalent fractions, and ordering of fractions (0.74)
    Decimal fractions including place value, ordering, rounding, and converting to common fractions (and vice versa) (0.74)
    Representing decimals and fractions using words, numbers, or models (including number lines) (0.74)
    Computations with fractions (0.81)
    Computations with decimals (0.80)
    Integers including words, numbers, or models (including number lines), ordering integers, addition, subtraction, multiplication, and division with integers (0.60)
    Ratios (equivalence, division of a quantity by a given ratio) (0.64)
    Conversion of percents to fractions or decimals, and vice versa (0.72)
Appendix D: (Continued)

Table A2. Factor pattern coefficients of items used to create composite variables (Continued)

Opportunity to learn algebra (TIMSS derived variable), Cronbach's alpha = 0.74
    Numeric, algebraic, and geometric patterns or sequences (extension, missing terms, generalization of patterns) (0.36)
    Sums, products, and powers of expressions containing variables (0.46)
    Simple linear equations and inequalities, and simultaneous (two variables) equations (0.61)
    Equivalent representations of functions as ordered pairs, tables, graphs, words, or equations (0.67)
    Proportional, linear, and nonlinear relationships (travel graphs and simple piecewise functions included) (0.63)
    Attributes of a graph such as intercepts on axes, and intervals where the function increases, decreases, or is constant (0.68)

Opportunity to learn measurement (TIMSS derived variable), Cronbach's alpha = 0.85
    Standard units for measures of length, area, volume, perimeter, circumference, time, speed, density, angle, mass/weight (0.68)
    Relationships among units for conversions within systems of units, and for rates (0.66)
    Use standard tools to measure length, weight, time, speed, angle, and temperature (0.67)
    Estimations of length, circumference, area, volume, weight, time, angle, and speed in problem situations (e.g., circumference of a wheel, speed of a runner) (0.67)
    Computations with measurements in problem situations (e.g., add measures, find average speed on a trip, find population density) (0.66)
    Measurement formulas for perimeter of a rectangle, circumference of a circle, areas of plane figures (including circles), surface area and volume of rectangular solids, and rates (0.65)
    Measures of irregular or compound areas (e.g., by using grids or dissecting and rearranging pieces) (0.62)
    Precision of measurements (e.g., upper and lower bounds of a length reported as 8 centimeters to the nearest centimeter) (0.57)
Appendix D: (Continued)

Table A2. Factor pattern coefficients of items used to create composite variables (Continued)

Opportunity to learn geometry (TIMSS derived variable), Cronbach's alpha = 0.88
    Angles: acute, right, straight, obtuse, reflex, complementary, and supplementary (0.67)
    Relationships for angles at a point, angles on a line, vertically opposite angles, angles associated with a transversal cutting parallel lines, and perpendicularity (0.64)
    Properties of angle bisectors and perpendicular bisectors of lines (0.67)
    Properties of geometric shapes: triangles and quadrilaterals (0.74)
    Properties of other polygons (regular pentagon, hexagon, octagon, decagon) (0.69)
    Construct or draw triangles and rectangles of given dimensions (0.65)
    Pythagorean theorem (not proof) to find length of a side (0.48)
    Congruent figures (triangles, quadrilaterals) and their corresponding measures (0.75)
    Similar triangles and recall their properties (0.59)
    Cartesian plane: ordered pairs, equations, intercepts, intersections, and gradient (0.51)
    Relationships between two-dimensional and three-dimensional shapes (0.48)
    Line and rotational symmetry for two-dimensional shapes (0.57)
    Translation, reflection, rotation, and enlargement (0.63)
Appendix D: (Continued)

Table A2. Factor pattern coefficients of items used to create composite variables (Continued)

Opportunity to learn data (TIMSS derived variable), Cronbach's alpha = 0.85
    Organizing a set of data by one or more characteristics using a tally chart, table, or graph (0.64)
    Sources of error in collecting and organizing data (e.g., bias, inappropriate grouping) (0.64)
    Data collection methods (e.g., survey, experiment, questionnaire) (0.67)
    Drawing and interpreting graphs, tables, pictographs, bar graphs, pie charts, and line graphs (0.63)
    Characteristics of data sets including mean, median, range, and shape of distribution (in general terms) (0.63)
    Interpreting data sets (e.g., draw conclusions, make predictions, and estimate values between and beyond given data points) (0.73)
    Evaluating interpretations of data with respect to correctness and completeness of interpretation (0.70)
    Simple probability including using data from experiments to estimate probabilities for favorable outcomes (0.59)

Instructional practice-related activities in math lessons, Cronbach's alpha = 0.55
    We relate what we are learning in mathematics to our daily life (0.51)
    We decide on our own procedures for solving complex problems (0.47)
    We work together in small groups (0.39)
    We explain our answers (0.40)
    We listen to the teacher give a lecture-style presentation (0.36)

Content-related activities in math lessons, Cronbach's alpha = 0.60
    We practice adding, subtracting, multiplying, and dividing without using a calculator (0.42)
    We work on fractions and decimals (0.55)
    We interpret data in tables, charts, or graphs (0.53)
    We write equations and functions to represent relationships (0.50)

Ready to teach number, Cronbach's alpha = 0.71
    Representing decimals and fractions using words, numbers, or models (0.66)
    Integers including words, numbers, or models (including number lines); ordering integers; and addition, subtraction, multiplication, and division with integers (0.66)
Appendix D: (Continued)

Table A2. Factor pattern coefficients of items used to create composite variables (Continued)

Ready to teach algebra, Cronbach's alpha = 0.81
    Numeric, algebraic, and geometric patterns or sequences (extension, missing terms, generalization of patterns) (0.57)
    Simple linear equations and inequalities, and simultaneous (two variables) equations (0.73)
    Equivalent representations of functions as ordered pairs, tables, graphs, words, or equations (0.79)
    Attributes of a graph such as intercepts on axes, and intervals where the function increases, decreases, or is constant (0.73)

Ready to teach measurement, Cronbach's alpha = 0.86
    Estimations of length, circumference, area, volume, weight, time, angle, and speed in problem situations (e.g., circumference of a wheel, speed of a runner) (0.79)
    Computations with measurements in problem situations (e.g., add measures, find average speed on a trip, find population density) (0.79)
    Measures of irregular or compound areas (e.g., by using grids or dissecting and rearranging pieces) (0.77)
    Precision of measurements (e.g., upper and lower bounds of a length reported as 8 centimeters to the nearest centimeter) (0.73)

Ready to teach geometry, Cronbach's alpha = 0.83
    Pythagorean theorem (not proof) to find length of a side (0.82)
    Congruent figures (triangles, quadrilaterals) and their corresponding measures (0.83)
    Cartesian plane: ordered pairs, equations, intercepts, intersections, and gradient (0.72)
    Translation, reflection, rotation, and enlargement (0.66)

Ready to teach data, Cronbach's alpha = 0.83
    Sources of error in collecting and organizing data (e.g., bias, inappropriate grouping) (0.81)
    Data collection methods (e.g., survey, experiment, questionnaire) (0.80)
    Characteristics of data sets including mean, median, range, and shape of distribution (in general terms) (0.69)
    Simple probability including using data from experiments to estimate probabilities for favorable outcomes (0.66)
Appendix D: (Continued)

Table A2. Factor pattern coefficients of items used to create composite variables (Continued)

Math-related professional development, Cronbach's alpha = 0.78
    Math content (0.74)
    Math pedagogy/instruction (0.69)
    Math curriculum (0.69)
    Math assessment (0.54)
    Problem solving/critical thinking (0.50)

School resources for mathematics instruction (TIMSS derived variable), Cronbach's alpha = 0.92
    Computers for mathematics instruction (0.82)
    Computer software for mathematics instruction (0.82)
    Audio-visual resources for mathematics instruction (0.84)
    Library materials relevant to mathematics instruction (0.82)
    Calculators for mathematics instruction (0.80)
    Budget for supplies (e.g., paper, pencils) (0.68)
    School buildings and grounds (0.67)
    Heating/cooling and lighting systems (0.68)
    Instructional materials (e.g., textbooks) (0.66)
    Instructional space (e.g., classrooms) (0.61)

Teacher's perception of math instructional limitations due to student factors (TIMSS derived variable), Cronbach's alpha = 0.81
    Low morale among students (0.80)
    Uninterested students (0.76)
    Disruptive students (0.69)
    Students with special needs (e.g., hearing, vision, speech impairment, physical disabilities, mental or emotional/psychological impairment) (0.54)
    Students who come from a wide range of backgrounds (e.g., economic, language) (0.56)
    Students with different academic abilities (0.52)
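The alpha values reported in Table A2 are Cronbach's coefficient alpha, which can be computed from the item variances and the variance of the summed scale. A minimal sketch of that computation (the respondent data below are hypothetical, not TIMSS values):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses: 4 respondents, 3 items on a 1-4 scale
scores = np.array([
    [1, 2, 1],
    [2, 1, 2],
    [3, 4, 3],
    [4, 3, 4],
])
print(round(cronbach_alpha(scores), 2))  # prints 0.89
```

Note that the ratio is unaffected by the choice of variance denominator (ddof), since it cancels between numerator and denominator.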
Appendix E: Weighted Correlation of Level-1 Variables for USA

Table A3. Weighted correlation of Level-1 variables for USA (N = 4,414)

Variable                                 1     2     3     4     5     6
1. Gender                             1.00
2. Self-confidence in learning math    .15  1.00
3. Valuing of math                     .06   .39  1.00
4. Time on math homework               .06   .06   .01  1.00
5. Extra math lessons                  .04   .13   .01   .01  1.00
6. Home resources for learning math    .05   .10   .12   .04   .04  1.00
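The Level-1 correlations in Tables A3, A5, A7, and A9 are weighted, reflecting the unequal student sampling probabilities in TIMSS. A sketch of a weighted Pearson correlation under that general idea; the variable names, data, and weights below are hypothetical, and this is not necessarily the exact estimator used in the study:

```python
import numpy as np

def weighted_corr(x, y, w):
    """Pearson correlation of x and y, weighting each case by w."""
    x, y, w = (np.asarray(a, dtype=float) for a in (x, y, w))
    w = w / w.sum()                        # normalize weights to sum to 1
    mx, my = np.sum(w * x), np.sum(w * y)  # weighted means
    cov = np.sum(w * (x - mx) * (y - my))  # weighted covariance
    sx = np.sqrt(np.sum(w * (x - mx) ** 2))
    sy = np.sqrt(np.sum(w * (y - my) ** 2))
    return cov / (sx * sy)

# Hypothetical student scores with unequal sampling weights
confidence = [1.0, 2.0, 3.0, 4.0]   # e.g., self-confidence composite
homework   = [2.0, 1.0, 3.0, 4.0]   # e.g., time on math homework
weights    = [1.5, 1.0, 1.0, 0.5]   # sampling weights
print(round(weighted_corr(confidence, homework, weights), 2))
```

With equal weights the function reduces to the ordinary (unweighted) Pearson correlation, which is why the Level-2 school tables, where no weights are applied, can use the standard formula.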
Appendix F: Unweighted Correlation of Level-2 Variables for USA

Table A4. Unweighted correlation of Level-2 variables for USA (N = 153)

 1. Opportunity_number             1.00
 2. Opportunity_algebra             .19  1.00
 3. Opportunity_measurement         .20   .45  1.00
 4. Opportunity_geometry            .09   .25   .41  1.00
 5. Opportunity_data                .00   .35   .28   .37  1.00
 6. Homework assignment             .01   .05   .13   .02   .03  1.00
 7. Instruction_activities          .05   .08   .08   .04   .06   .07  1.00
 8. Content_activities              .09   .11   .16   .10   .07   .18   .31  1.00
 9. Preparation to teach            .18   .03   .17   .17   .05   .03   .02   .00  1.00
10. Ready_number                    .37   .15   .15   .03   .02   .13   .09   .14   .19  1.00
11. Ready_algebra                   .18   .27   .24   .06   .03   .15   .04   .21   .26   .54  1.00
12. Ready_measurement               .17   .23   .38   .19   .16   .16   .01   .23   .21   .59   .70  1.00
13. Ready_geometry                  .19   .07   .30   .10   .04   .23   .01   .05   .34   .60   .47   .61  1.00
14. Ready_data                      .25   .08   .32   .02   .10   .23   .09   .15   .23   .55   .52   .51   .53  1.00
15. Professional development        .06   .15   .13   .14   .23   .00   .10   .07   .10   .02   .14   .11   .03   .03  1.00
16. Class size                      .03   .13   .11   .06   .14   .05   .01   .07   .13   .02   .01   .08   .12   .08   .11  1.00
17. School resources                .06   .04   .11   .06   .07   .08   .08   .09   .18   .00   .18   .18   .11   .06   .09   .04  1.00
18. Teacher perception_limitation   .02   .19   .18   .06   .15   .31   .05   .03   .12   .15   .07   .10   .04   .00   .02   .07   .15  1.00
19. Math hours per year             .11   .11   .15   .03   .04   .02   .00   .07   .03   .06   .04   .02   .05   .09   .11   .10   .02   .07  1.00
Appendix G: Weighted Correlation of Level-1 Variables for Canada

Table A5. Weighted correlation of Level-1 variables for Canada (N = 6,248)

Variable                                 1     2     3     4     5     6
1. Gender                             1.00
2. Self-confidence in learning math    .11  1.00
3. Valuing of math                     .04   .37  1.00
4. Time on math homework               .02   .04   .03  1.00
5. Extra math lessons                  .02   .24   .01   .06  1.00
6. Home resources for learning math    .03   .09   .14   .04   .01  1.00
Appendix H: Unweighted Correlation of Level-2 Variables for Canada

Table A6. Unweighted correlation of Level-2 variables for Canada (N = 271)

 1. Opportunity_number             1.00
 2. Opportunity_algebra             .14  1.00
 3. Opportunity_measurement         .08   .20  1.00
 4. Opportunity_geometry            .03   .19   .37  1.00
 5. Opportunity_data                .06   .18   .47   .41  1.00
 6. Homework assignment             .02   .02   .15   .06   .08  1.00
 7. Instruction_activities          .06   .01   .09   .11   .08   .02  1.00
 8. Content_activities              .08   .11   .14   .10   .19   .11   .40  1.00
 9. Preparation to teach            .21   .07   .27   .20   .30   .07   .06   .09  1.00
10. Ready_number                    .01   .07   .00   .00   .01   .02   .03   .06   .06  1.00
11. Ready_algebra                   .03   .15   .09   .04   .04   .09   .05   .07   .17   .49  1.00
12. Ready_measurement               .02   .02   .12   .09   .03   .01   .09   .12   .12   .45   .53  1.00
13. Ready_geometry                  .04   .03   .06   .09   .07   .13   .08   .05   .16   .50   .49   .50  1.00
14. Ready_data                      .15   .01   .11   .09   .23   .01   .02   .04   .02   .39   .46   .41   .42  1.00
15. Professional development        .12   .05   .23   .09   .24   .15   .01   .10   .13   .07   .16   .15   .09   .10  1.00
16. Class size                      .10   .06   .07   .06   .06   .06   .04   .01   .16   .11   .06   .02   .00   .07   .15  1.00
17. School resources                .08   .02   .13   .01   .09   .09   .13   .03   .15   .02   .01   .09   .16   .01   .06   .00  1.00
18. Teacher perception_limitation   .09   .00   .04   .06   .09   .04   .08   .01   .02   .11   .02   .05   .09   .06   .05   .04   .04  1.00
19. Math hours per year             .09   .10   .09   .07   .14   .00   .04   .04   .07   .03   .05   .07   .00   .04   .06   .12   .06   .02  1.00
Appendix I: Weighted Correlation of Level-1 Variables for Egypt

Table A7. Weighted correlation of Level-1 variables for Egypt (N = 1,876)

Variable                                 1     2     3     4     5     6
1. Gender                             1.00
2. Self-confidence in learning math    .06  1.00
3. Valuing of math                     .00   .33  1.00
4. Time on math homework               .02   .05   .04  1.00
5. Extra math lessons                  .02   .04   .03   .01  1.00
6. Home resources for learning math    .01   .13   .07   .02   .03  1.00
Appendix K: Unweighted Correlation of Level-2 Variables for Egypt

Table A8. Unweighted correlation of Level-2 variables for Egypt (N = 69)

 1. Opportunity_number             1.00
 2. Opportunity_algebra             .32  1.00
 3. Opportunity_measurement         .20   .48  1.00
 4. Opportunity_geometry            .31   .17   .39  1.00
 5. Opportunity_data                .07   .25   .32   .27  1.00
 6. Homework assignment             .06   .01   .12   .02   .22  1.00
 7. Instruction_activities          .06   .09   .11   .05   .11   .04  1.00
 8. Content_activities              .15   .08   .07   .19   .12   .00   .27  1.00
 9. Preparation to teach
10. Ready_number                    .07   .05   .06   .13   .07   .10   .10   .06  1.00
11. Ready_algebra                   .08   .23   .28   .02   .16   .27   .05   .02   .56  1.00
12. Ready_measurement               .12   .07   .05   .04   .24   .20   .12   .13   .62   .52  1.00
13. Ready_geometry                  .09   .10   .06   .04   .19   .24   .18   .03   .21   .17   .44  1.00
14. Ready_data                      .06   .08   .30   .00   .19   .14   .22   .10   .17   .31   .38   .22  1.00
15. Professional development        .16   .08   .02   .02   .07   .06   .03   .11   .16   .23   .22   .05   .12  1.00
16. Class size                      .02   .15   .01   .20   .14   .04   .04   .16   .09   .05   .30   .20   .01   .20  1.00
17. School resources                .23   .05   .05   .11   .02   .07   .09   .04   .07   .25   .23   .05   .01   .00   .06  1.00
18. Teacher perception_limitation   .04   .07   .14   .05   .00   .03   .12   .16   .05   .01   .11   .10   .03   .10   .14   .16  1.00
19. Math hours per year             .07   .04   .04   .23   .06   .17   .18   .00   .01   .15   .09   .07   .01   .06   .23   .23   .14  1.00
Appendix L: Weighted Correlation of Level-1 Variables for South Africa

Table A9. Weighted correlation of Level-1 variables for South Africa (N = 1,564)

Variable                                 1     2     3     4     5     6
1. Gender                             1.00
2. Self-confidence in learning math    .07  1.00
3. Valuing of math                     .05   .18  1.00
4. Time on math homework               .02   .02   .00  1.00
5. Extra math lessons                  .03   .06   .03   .02  1.00
6. Home resources for learning math    .02   .09   .03   .07   .05  1.00
Appendix M: Unweighted Correlation of Level-2 Variables for South Africa

Table A10. Unweighted correlation of Level-2 variables for South Africa (N = 52)

 1. Opportunity_number             1.00
 2. Opportunity_algebra             .29  1.00
 3. Opportunity_measurement         .49   .28  1.00
 4. Opportunity_geometry            .40   .63   .42  1.00
 5. Opportunity_data                .31   .28   .36   .45  1.00
 6. Homework assignment             .12   .09   .02   .04   .05  1.00
 7. Instruction_activities          .37   .01   .15   .06   .05   .24  1.00
 8. Content_activities              .01   .10   .02   .06   .02   .12   .40  1.00
 9. Preparation to teach            .19   .01   .15   .33   .05   .14   .15   .06  1.00
10. Ready_number                    .13   .09   .12   .28   .13   .13   .02   .09   .21  1.00
11. Ready_algebra                   .32   .11   .11   .27   .01   .09   .03   .13   .13   .46  1.00
12. Ready_measurement               .43   .14   .22   .25   .17   .07   .16   .02   .32   .42   .62  1.00
13. Ready_geometry                  .34   .00   .18   .24   .05   .09   .02   .08   .37   .48   .70   .68  1.00
14. Ready_data                      .39   .09   .15   .17   .10   .01   .18   .17   .26   .48   .55   .65   .61  1.00
15. Professional development        .30   .30   .31   .13   .29   .18   .16   .15   .41   .08   .06   .38   .29   .20  1.00
16. Class size                      .09   .10   .05   .07   .09   .16   .20   .37   .19   .11   .16   .01   .02   .11   .07  1.00
17. School resources                .10   .20   .02   .33   .16   .09   .04   .16   .07   .15   .01   .06   .12   .11   .19   .03  1.00
18. Teacher perception_limitation   .06   .22   .00   .04   .24   .05   .04   .12   .19   .03   .01   .11   .08   .05   .02   .19   .16  1.00
19. Math hours per year             .19   .15   .06   .14   .16   .02   .08   .11   .10   .14   .01   .09   .02   .16   .15   .09   .04   .14  1.00
ABOUT THE AUTHOR

Ha Phan received a Bachelor's degree in English Education from Hanoi University for Teachers of Foreign Languages and a Bachelor's degree in Finance from National Economics University in Vietnam. At the University of South Florida in the United States, she earned a Master of Education in Instructional Technology. She became interested in measurement and research while pursuing her master's degree and continued with the Ph.D. program in Educational Measurement and Research. During her graduate studies, she worked as a researcher, an instructor, a research consultant, and an editorial assistant for the journal Educational Researcher. Her primary research interest was in the area of assessment and psychometrics. Receiving a fellowship from Harcourt Assessment, Inc., in 2007 further increased her passion for working with test scores. Her research has been presented at regional, national, and international conferences. She is currently a research scientist at Pearson Educational Measurement.
