USF Libraries
USF Digital Collections

Educational policy analysis archives


Material Information

Title:
Educational policy analysis archives
Physical Description:
Serial
Language:
English
Creator:
Arizona State University
University of South Florida
Publisher:
Arizona State University
University of South Florida.
Place of Publication:
Tempe, Ariz
Tampa, Fla
Publication Date:

Subjects

Subjects / Keywords:
Education -- Research -- Periodicals (lcsh)
Genre:
non-fiction (marcgt)
serial (sobekcm)

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
usfldc doi - E11-00510
usfldc handle - e11.510
System ID:
SFS0024511:00509




Full Text


Readers are free to copy, display, and distribute this article, as long as the work is attributed to the author(s) and Education Policy Analysis Archives, it is distributed for noncommercial purposes only, and no alteration or transformation is made in the work. More details of this Creative Commons license are available at http://creativecommons.org/licenses/by-nc-nd/2.5/. All other uses must be approved by the author(s) or EPAA. EPAA is published jointly by the Mary Lou Fulton College of Education at Arizona State University and the College of Education at the University of South Florida. Articles are indexed by H.W. Wilson & Co. Send commentary to Casey Cobb (casey.cobb@uconn.edu) and errata notes to Sherman Dorn (epaa-editor@shermandorn.com).

EDUCATION POLICY ANALYSIS ARCHIVES
A peer-reviewed scholarly journal
Editor: Sherman Dorn, College of Education, University of South Florida
Volume 15, Number 3, February 13, 2006
ISSN 1068-2341

Digital Equity in Education: A Multilevel Examination of Differences in and Relationships between Computer Access, Computer Use and State-level Technology Policies

Jonathan D. Becker, J.D., Ph.D.
Hofstra University

Citation: Becker, J. D. (2006). Digital equity in education: A multilevel examination of differences in and relationships between computer access, computer use and state-level technology policies. Education Policy Analysis Archives, 15(3). Retrieved [date] from http://epaa.asu.edu/epaa/v15n3/.

Abstract

Using data from the National Assessment of Educational Progress (NAEP) state assessment and a survey of state-level technology policies, this study examined digital equity in education as a multilevel organizational phenomenon with data from 70,382 students in 3,479 schools and 40 states. Students in rural schools or schools with higher percentages of African American students were likely to have less access to computers. With respect to computer use, girls and students eligible for free or reduced-price lunch were more likely to use computers more frequently when computers are available in the classroom. With respect to relationships between computer access and computer use, having computers available in a lab increases the likelihood of higher levels of computer use. The results suggested that no more than 5% of the variance in computer access can be attributed to state factors, and less than 1% of the variance in computer use was between states. The findings suggested that where student technology standards are integrated into subject-area standards, computer use was likely lower than in other states. In states where pre-service teachers must meet technology-related requirements to receive their teaching credential and states where funds earmarked for technology are distributed as competitive grants, computer use was likely to be higher.


Equidad Digital en la Educación: Una examinación de niveles múltiples de las diferencias y las relaciones entre el acceso al ordenador, el uso del ordenador y políticas estatales sobre tecnología

Resumen

Con datos obtenidos en una muestra de 70,382 estudiantes, atendiendo 3,479 escuelas, en 40 estados, a partir de evaluaciones de la NAEP y una encuesta sobre las políticas estatales sobre tecnología, este estudio examinó la equidad digital en la educación como fenómeno organizacional de niveles múltiples. Los estudiantes de escuelas rurales o de escuelas con porcentajes más altos de estudiantes africano-americanos tenían más probabilidades de tener menos acceso a ordenadores. Con respecto al uso de los ordenadores, cuando los ordenadores estaban disponibles en el aula, tanto las niñas como los estudiantes que recibían apoyo alimenticio (el almuerzo libre o a precio reducido) tenían más probabilidades de utilizar los ordenadores con mayor frecuencia. Con respecto a las relaciones entre acceso a ordenadores y utilización del ordenador, tener ordenadores disponibles en un laboratorio aumenta la probabilidad de niveles más altos de uso de ordenadores. Los resultados sugirieron que no más del 5% de la variación en el acceso al ordenador se puede atribuir a factores relacionados con los estados, y se observó una variación en el uso del ordenador entre estados menor al 1%. Los resultados sugieren que en los estados que integraban los estándares sobre tecnología dentro de los estándares de las áreas temáticas, el uso del ordenador era menos frecuente. En los estados donde la formación docente de pre-servicio requiere que los estudiantes demuestren conocimientos de tecnología para recibir sus credenciales, y en los estados donde los fondos destinados para el área de tecnología se distribuyen como subvenciones de manera competitiva, el uso del ordenador tendía a ser más alto.

Keywords: educational technology; equity; state-level policy; multilevel modeling.

Introduction

The NAACP's Call for Action in Education (2001) asked state education agencies to outline a five-year education equity plan for improving education and reducing disparities. Section VI of the plan focused on closing the digital divide:

While students attending the country's poorest schools are not far from the national average for computer access, a closer look at the statistics on more sophisticated technology, such as Internet access and actual computer usage, reveals that the digital divide is deep and wide. ... [T]eachers in majority-minority schools were more likely to cite the following as barriers to the use of computers for instruction time: not enough computers (45 percent), outdated, incompatible or unreliable computers (32 percent) and Internet access not being readily available (36 percent). (NAACP, 2001, p. 12)

The Call for Action made recommendations to educators and state-level policy-makers, including that state and local educational agencies should identify the size of the current gap, including differences in level of sophistication, real computing power, and resources for training and implementation. The methods used in this study may be a means by which education officials can analyze and report those data. Furthermore, the findings from this study about digital equity suggest that policy makers need to pay attention to some important differences that exist in computer access and computer use in schools.


This study is an examination of digital equity in education as a multilevel organizational phenomenon, combining several datasets and using advanced, multilevel modeling techniques to answer several questions. First, are advanced learning technologies distributed and used differentially across different student and school demographic categories? In other words, is there digital equity in education across the country? Second, does the distribution of computers, equitable or not, have a relationship to the use of computers? Third, how are state-level learning technology policies related to the patterns of digital equity in education, given that state policies might contribute to digital equity or inequity?

Conceptual Framework

This study considers educational technology[1] to be a set of learning resources and examines the extent to which those resources are distributed differentially across student and school characteristics. Solomon, Allen, and Resta (2003) suggest that this notion of differential access to technology in education might be described as digital equity: "Digital equity in education means ensuring that every student, regardless of socioeconomic status, language, race, geography, physical restrictions, cultural background, gender, or other attribute historically associated with inequities, has equitable access to advanced technologies, communication and information resources, and the learning experiences they provide" (p. xiii).

[1] Education technology is admittedly a rather broad term. The focus of this study is computer-related teaching and learning technologies, or, as Cuban (2001) suggests, "new technologies" (p. 12). These new technologies include the "hard infrastructure" of wiring, computers, software applications, and other equipment, including laserdisc players, overhead-mounted presentation machines operated from a keyboard, digital cameras, and so on (Cuban, 2001, p. 12). Cuban adds that new technologies also include the "soft infrastructure" of technical support and professional development.

Generally, digital equity is often discussed using the term digital divide. Attewell (2001) goes further to suggest that there are two digital divides: computer access and computer use. In schools, the computer access divide is often defined in terms of availability and is often expressed with statistics such as student-to-computer ratios or percentages of instructional rooms with Internet access. Attewell points to general conclusions that "schools that serve the poor ... have less computing equipment and slower web connections" (p. 253) than schools that serve wealthier populations as evidence of the computer access divide in schools.

Attewell's second divide, the computer use divide, might be further separated into quantity and quality. That is, when examining digital equity in education, one might consider differences in the amount of use and the ways computers are used in schools. Some students might use computers more or less frequently than others, and those differences are empirically and practically interesting. However, as Attewell (2001) points out, not all uses of computers have equivalent educational benefits (p. 2). Therefore, the second half of the computer use divide might be described as social differences in the ways computers are used at school.

For the purposes of quantitative study, computer access and computer use require a more operational definition of digital equity in education. To that end, Green (1982) states that equity implies justice, and justice requires that people be treated alike or differently according to their likenesses and differences (p. 318).


By noting that only certain likenesses and differences are relevant, educational equity is then defined by Green (1982) as

[A] statistically desirable social condition within which there is [a] random distribution of resources, attainment, and educational achievement in respect to variables irrelevant to educational justice together with a predictable distribution in respect to variables relevant to educational justice. (p. 324)

Filling in this definition requires a decision about which variables are relevant or irrelevant to educational justice. Citing the work of Walzer (1983), Stone suggests that "[o]ne approach is to look at society as it is and say that those characteristics people consider relevant are by definition relevant" (p. 55). Green (1982) takes a similar approach by suggesting that "[w]e may consult our conscience for an answer" (p. 324). Thus, educationally relevant variables might include choice, ability, or effort, because educational inequalities that arise from differences in choice, ability, or effort do not normally disturb our conscience. "[T]hese are variables we appeal to whenever we argue that educational inequities are nevertheless fair" (Green, 1982, p. 324). Similarly, variables irrelevant to educational justice might include sex, social class, race, and geography. There should be no differences in the distribution of educational resources, attainment, and achievement across these variables.

Thus, the beginnings of an operational definition of digital equity used in these analyses emerge from combining Green's definition of educational equity with Attewell's notion of two digital divides. We can thus say that digital equity in education is a statistical condition where access to technology is randomly distributed between schools according to educationally irrelevant school variables (e.g., racial composition and urbanicity), and use of educational technology is randomly distributed within schools according to educationally irrelevant student variables (e.g., sex, race, SES, and geography). In other words, if there are differences or inequalities in computer access or computer use, those differences should not arise from variables that are irrelevant to educational justice.

For several reasons, this study focuses on the policy goal of equitable distribution of educational technology resources. First, educational technology policy, especially policies developed and promulgated in response to the federal government's Technology Literacy Challenge at the end of the twentieth century, is reasonably characterized by distributive conflict. In other words, there is a finite pool of technology resources available for distribution to schools, teachers, and students, and the attendant issues are about who gets what resources and how. Furthermore, as Stone (1997) writes, "[e]quity is the goal for all sides in a distributive conflict; the conflict comes over how the sides envision the distribution of whatever is at issue" (p. 39). In addition to examining the distribution of educational technology resources and the relationships between computer access and computer use, this study looks at the role of the states and state education agencies in that distribution, and where states are involved in education policy, equity is often a major policy goal (Dayton, 2000). Finally, education technology policy is an area of education policy where the states have assumed a powerful role and a policy domain that might also be characterized by distributive conflict. Therefore, it is most reasonable to analyze education technology against the goal or value of equity.


Review of the Literature

EQUITY AS A STATE-LEVEL VALUE AND EDUCATION TECHNOLOGY POLICY AS A STATE-LEVEL DISTRIBUTIVE CONFLICT

Equity and liberty are values that are constantly debated and often balanced in state legislatures. While those debates are contentious, to say the least, Dayton (2000) suggests that the debate is essential in a democratic society:

If either ideals of liberty or equity were pursued to the extremes, significant harm would result to individuals and the community. While the protection of liberty is essential in a democracy, it is not desirable to allow the powerful to overwhelm the weak in their pursuit of self-interests. And while equity is essential to justice, it is not desirable to reduce all individuals to the lowest common level. A just balance is essential, in which all persons are granted equity in educational opportunities, but retain the liberty to excel and make independent choices concerning their lives. (p. 107)

Thus, the conflicting ideals of liberty and equity are at the core of many educational policy disputes at the state level. However, Dayton (2000) suggests that ideals of liberty favor more local control, while the promotion of statewide equity indicates greater state involvement and control. That is to say, although state constitutions recognize state-level duties to provide for public education, liberty may better be promoted through local control. Further, there is a strong historical tendency for states to pursue only one goal at a time, neglecting or suppressing the others (Mitchell & Encarnacion, 1984, p. 9).

Much of the focus of early education technology policy, particularly at the federal and state levels, was on equity, or specifically the equitable distribution of education technology resources. For example, two of the four pillars of the Technology Literacy Challenge were that "All teachers and students will have modern multimedia computers in the classroom" and that "[e]very classroom will be connected to the information superhighway" (U.S. Department of Education, 1999). Further, as states developed their master technology plans in response to federally initiated goals, local education agencies stepped into line behind the state plans (Zhao & Conway, 2001). It was only after the hardware, connectivity and other infrastructure developments were in place that policy makers turned their attention to issues of content, use, and integration. But up to that very recent point of time and change in focus, the thinking was that without a critical mass of resources, technology was not going to have an impact on the educational process. Or, to borrow a classic line from the baseball movie Field of Dreams, "if you build it, they will come."

DIGITAL EQUITY IN EDUCATION: DIFFERENCES IN COMPUTER ACCESS AND COMPUTER USE IN SCHOOLS

In general, the picture that has emerged from the research and the data on digital equity in education is that there exist significant disparities in access to high-quality technologies and serious inequities in how technology is distributed to and used for different groups of students.

The overall picture. In his analysis of the 1996 National Assessment of Educational Progress (NAEP) math data, Wenglinsky (1998) discovered patterns of educational technology use among different types of students. Contrary to much of the data reported elsewhere, Wenglinsky found that there were no disparities in access to computers at school, or in their frequency of use at home for those families that own a computer at home.


However, there were significant differences across ethnic groups in access to home computers. For instance, 64% of White or Asian students had access to a home computer compared to 44% of Black or Hispanic students. Further, while access to computers within school did not vary across groups, disadvantaged groups of students "do seem to be less likely to have teachers well-prepared to use computers, and disadvantaged eighth-graders seem to be less likely to be exposed to higher order learning through computers" (p. 24). These findings led Wenglinsky to conclude that policies to promote computer access in school have succeeded in eliminating some inequities, but there remained inequities in teacher preparedness and in what is taught using computers.

More recent data from the federal government confirm Wenglinsky's findings and document additional discrepancies. First, there are clear inequities with respect to the preparedness of teachers to use computers or the Internet at school. According to a 2000 National Center for Education Statistics report, 37% of teachers in schools with the fewest low-income students report feeling well prepared or very well prepared to use computers and the Internet, compared to 32% of teachers in schools serving the highest percentage of low-income students. Second, that same report documented differences by student demographics in the way computers are used, particularly student socioeconomic status. Generally speaking, a much lower percentage of teachers of low-income students assign more advanced computer activities such as word processing, spreadsheets or multimedia activities when compared to teachers of higher income students (31% vs. 55%). However, these same teachers are more likely than teachers of higher income students to have students use computers for skill-and-drill activities (35% vs. 26%).

In contrast to Wenglinsky's findings about access, however, there are data to suggest that there are, in fact, disparities in access to educational technology, particularly the Internet. Nationally, by the year 2000, 98% of schools were connected to the Internet in some fashion, and 77% of all instructional rooms had Internet access (National Center for Education Statistics, 2002). However, schools in urban areas (or even on the urban fringe) had a lower percentage of instructional rooms with Internet access (66%) than schools in rural (85%) and other areas. Also, by 2000, low-minority schools (less than 6% minority enrollment) had a higher percentage of instructional rooms with Internet access (85%) than schools where more than half of the students are minorities (64%).

The racial divide. Krueger (2000) studied the magnitude of the racial divide in the use of computer technology among school children using data from the October Current Population Survey (CPS) School Enrollment Supplement in 1984, 1989, 1993 and 1997. Looking at all grades combined, Black students were 16% less likely than White students to use a computer in school in 1984, but only 6% less likely in 1997. Hispanic students were more likely to use computers in school than Black students prior to 1997, but as of that year, Hispanic students were 5% less likely than Black students (and 11% less likely than White students) to use computers in school. Using that same dataset, Krueger also reported that in 1997, 20.5% of White students reported using the Internet in school compared to 14.8% of Black students and 11.8% of Hispanic students. Those gaps were slightly larger in high schools. From these data, Krueger (2000) concluded that Black and Hispanic students "seem to lag behind in using the latest technology in school" (p. 2).

The gender divide. The data on computer use in schools by gender suggest that there is very little difference in computer usage in schools and at home between boys and girls. The differences that exist have stayed consistently small, with boys using computers at school a little bit more than girls. Volman and van Eck (2001) conducted a review of the literature on gender equity and information technology in education. With respect to access to information technology in schools, the research in the 1980s generally demonstrated that boys had greater access to computers in homes, during extracurricular activities, and during school, when they spent more time in front of computers. "Often the differences were not large, but they were certainly consistent" (Volman & van Eck, 2001, p. 616).


The research in the 1990s is not as consistent, though Volman and van Eck (2001) state that several authors suggest that "it is mostly boys who use the computers at school outside lessons" (p. 616). Otherwise, the gaps that existed between boys and girls in computer use in schools seem to be disappearing.

The relationships between computer access and computer use in schools. This study is not just about differences in computer access and computer use; it is also concerned with the relationships between access and use. A number of critics (Cuban, 2001; Oppenheimer, 1997) have pointed out that increased access to computing technology in schools and classrooms has not had any discernible impact on teaching or learning. Cuban (2001), most notably, suggests that educational technology has been oversold and underused; computers are being deployed into schools and classrooms, but not used.

However, converging streams of evidence suggest that there are, in fact, relationships between computer access and computer use. Norris, Sullivan, Poirot and Soloway (2003) conducted a large-scale survey of teacher technology use and found "a significant and substantive correlation between technology access and use; almost without exception, the strongest predictors of teachers' technology use were measures of technology access" (p. 16). Through a similarly large-scale survey of teachers, O'Dwyer, Russell and Bebell (2004) conclude that, among other factors, increased availability of technology is likely to result in increased teacher-directed use of computers by students, and increased use of technology by teachers to deliver instruction and prepare for class.

Thus, the literature suggests significant disparities in access to high-quality technologies and some inequities in how technology is distributed to and used for different groups of students. The significance and seriousness of those gaps might depend on which data are used and who is doing the analysis. Former FCC chairman Michael Powell argued in 2001 that "digital divide" is a dangerous phrase. In reference to the notion of a digital divide, he said, "I think there is a Mercedes divide. I'd like to have one; I can't afford one. ... I think it's an important social issue, but it shouldn't be used to justify the notion of essentially the socialization of the deployment of the infrastructure" (quoted in Labaton, 2001, p. B1).

On the other hand, in arguing that access to cyberspace is the new issue in educational justice, First and Hart (2002) assert that the impact of the digital divide on the education of racial and ethnic minority students in some way violates the Americans with Disabilities Act, the Fourteenth Amendment's equal protection clause, Title VI of the 1964 Civil Rights Act, and state constitutional law (p. 410). While this may be an extreme argument, it is the case that many see schools as institutions for leveling the playing field in a digital world characterized by the haves and have-nots. That is to say, where significant differences have been documented in access to advanced technologies in homes, particularly between the national averages and African Americans and Latina/os (United States Department of Commerce, 2000), schools are expected to be different. And at a time when the federal government is backing off its commitment to major digital equity initiatives (e.g., the U.S. Department of Commerce's Technology Opportunities Program (TOP) and the U.S. Department of Education's Community Technology Centers (CTC) Program), it is unclear whether schools have met that challenge or if they will continue to be able to do so. This study attempts to bring some clarity to the issue.


Research Methods

DATA SOURCES

To address the research questions on digital equity in education, it was necessary to have data on access to and use of educational technology resources, and data on state-level educational technology policies across the states. No single database included all of those data, so two databases were combined. First, the NAEP 2000 State Mathematics database contains data on technology access and use between and within schools. In addition to a portion of the cognitive part of NAEP, each student selected to be part of the NAEP sample completes a series of survey questions. The responses to those questions, along with data provided by the students' teachers and school administrators, become part of the comprehensive NAEP dataset used in these analyses. Second, the bulk of the data for the state-level database came from the State-by-State Education Technology Policy Survey conducted by the Milken Exchange on Educational Technology (MEET) in the summer of 1998. By surveying and interviewing the state education technology director (or their designated representative) in each of the 50 states, MEET produced a profile of in-depth information on state legislation and projects, funding, standards, support structures, and other matters related to learning technology.

MULTILEVEL MODELING AND MODEL SPECIFICATION

In further operationalizing the construct of digital equity in education, there are two major analytic issues. One is how to measure gaps or inequalities in access to and use of educational technology as multilevel organizational phenomena, and the other is how to examine those gaps as outcome variables regressed on relevant independent factors. Resolving these matters necessitates not only empirically reliable but also theoretically valid model building (Lee, 1998).

The data sources used to examine digital equity in education are of three types (computer access and use, demographics, and policies), but also cover three levels of the education process: individual student data, school data, and state data. That is, the data are nested; the students are educated within schools that exist within states. Thus, the use of single-level statistical models that assume independence of observations would be invalid. As Snijders and Bosker (1999) assert, "[m]ultilevel statistical models are always needed if a multi-stage sampling design has been employed" (p. 9) or if individual-level outcomes are influenced by organizational factors.

For this study, two sets of hierarchical linear models were estimated: computer use models and computer access models. Specifically, since the dependent measures of computer use and computer access are ordinal (categorical) variables, two separate sets of hierarchical generalized linear models (HGLM) were estimated. The computer use models were three-level models (students within schools within states) and the computer access models were two-level models (schools within states). What follows are brief descriptions of the key variables. A summary table of each of the variables included in the analyses is included as Appendix A.
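
Before turning to the variables, the general form of these models can be sketched. The following is an illustrative two-level cumulative-logit (ordinal HGLM) specification for computer access, not the article's exact HLM output; the symbols (Y_ij, delta_k, eta_ij, u_0j) and the particular predictors shown are chosen here for illustration, and sign conventions vary across software.

```latex
% Illustrative two-level ordinal HGLM (cumulative logit) for computer access.
% Y_ij = access-points category of school i in state j; delta_k are thresholds.
\[
\begin{aligned}
\log\frac{\Pr(Y_{ij} \le k)}{1 - \Pr(Y_{ij} \le k)} &= \delta_k - \eta_{ij}, \\
\eta_{ij} &= \beta_{0j} + \beta_1\,\mathrm{Rural}_{ij} + \beta_2\,\mathrm{Urban}_{ij}
             + \beta_3\,\mathrm{PctAfAm}_{ij} + \beta_4\,\mathrm{PctLatino}_{ij}, \\
\beta_{0j} &= \gamma_{00} + \gamma_{01}\,\mathrm{Funding}_{j} + u_{0j},
\qquad u_{0j} \sim N(0, \tau_{00}).
\end{aligned}
\]
```

The three-level computer use models have the same general form, with students at level one, a school random intercept at level two, and a state random intercept at level three.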


Dependent Variables

Computer use. The 2000 State NAEP 4th Grade mathematics dataset includes more survey items about educational technology than in prior years. There are two questionnaire items that directly address the construct of computer use: one is about types of use, the other is about frequency of use. This study, however, is only concerned with amounts of computer use. The dependent measure of amount of computer use, therefore, is an item that appears on the student questionnaire as part of NAEP data collection. Specifically, the item is "When you do mathematics in school, how often do you use a computer?"[2] The response set includes four options: almost every day, once or twice a week, once or twice a month, and never or hardly ever.

[2] The data from those students who gave multiple responses or who omitted the question were coded as missing values.

It is worth noting that this question is specific to computer use for math and not other subjects. This may be a limitation of this study, though it is asserted here that this item serves as a sufficient proxy measure for the amount of computer use. In other studies, single NAEP questionnaire items were used as measures of computer use (Wenglinsky, 1998), and math was found to be the most common area for teacher-directed student use of computers in elementary schools (Mann, Shakeshaft, Becker, & Kottkamp, 1999). Thus, in modeling equity in computer use, this item serves as the level-one (student-level) outcome variable. Table 1 displays the frequencies for this variable.

Table 1
Frequency of Computer Use for Math Among 4th Graders, 2000

Computer Use             Proportion   SE
Almost Every Day         .146         .002
Once or Twice a Week     .169         .002
Once or Twice a Month    .101         .002
Never or Hardly Ever     .584         .003

As displayed in the table, more than half (58%) of the 4th grade students report never or hardly ever using computers for math. Almost 32% of the students report using computers for math at least once a week. The remaining 10% use computers for math on a monthly basis.

Computer access. Much like for computer use, the focal point for computer access is on the amount of access rather than the type or quality of access. Reports on computer access in schools often include statistics such as student-to-computer ratios, lab vs. classroom deployment, computer hardware specifications, network infrastructure specifications, and/or the type of Internet connectivity. Of those, only the student-to-computer ratio is a measure of the amount of access to computers. That measure, though, is not available in the NAEP dataset.

There are, however, some data on computer access available in the NAEP data. The following three items appear on the school-level questionnaire completed by a school administrator of the students in the NAEP sample: (1) Are computers available all the time in the classrooms?, (2) Are computers available in a computer lab?, and (3) Are computers available to the classrooms when needed?[3] The frequencies for these variables are displayed in the following table.

[3] Though the meaning of this response is not entirely clear, the assumption here is that the reference is to portable computing devices such as laptops or computers on wheels (COWs).


Table 2
Access to Computers among 4th Graders (student-level), 2000

                                                       YES                NO
Access to Computers                              Proportion   SE   Proportion   SE
Are computers available all the time in the
  classrooms?                                      .822      .002     .178     .002
Are computers available in a computer lab?         .767      .002     .233     .002
Are computers available to the classrooms
  when needed?                                     .250      .002     .750     .002

There is not a great deal of variance within each of the three items. The data from the three items, however, can be combined into a single school-level measure of computer access. By counting the number of "yes" responses to the three questions, a measure of "computer access points" is created. Said differently, this composite variable is a measure of the number of different ways each student can or might access computers in his or her school. The frequencies for this computer access variable are displayed in the following table.

Table 3
Frequency of Computer Access Points among 4th Graders (student-level), 2000

Computer Access Points   Proportion   SE
0                        .011         .001
1                        .326         .002
2                        .476         .002
3                        .187         .002

Thus, the modal situation for fourth grade students across the country is that they attend schools wherein they might access computers two different ways. Only 1.1% of students are in schools with no access to computers.

The computer access variable is essentially a school-level variable attached to the data for each student in the NAEP sample. Accordingly, rather than making a statement about schools across the country, it is appropriate to report on the data in Table 3 by stating that forty-eight percent of students attend schools with two different computer access points. This is a statement about students rather than the schools they attend, even though the data are essentially school-level data. From a reporting standpoint, this issue is not problematic. Analytically, however, it is a complex issue, particularly for hierarchical linear modeling. Specifically, using a school-level variable as a student-level outcome in multilevel modeling creates a misspecified model. Raudenbush, Fotiu, and Cheong (1998) encountered such a model misspecification problem when modeling the relationship between school climate and student demographic background. School disciplinary climate was a scale created from items on the school questionnaire administered as part of the 1992 Trial State Assessment of the NAEP. To account for the model misspecification, Raudenbush, Fotiu, and Cheong (1998) eliminated the school-level models and instead estimated a two-level hierarchical model with students nested within states. Further, robust standard errors were computed using the generalized estimation equation (GEE). These standard errors "are relatively insensitive to misspecifications of the variances and covariances at the two levels and to the distributional assumptions at each level" (Raudenbush, Fotiu, & Cheong, 1998, p. 258).


Rather than eliminating school-level variables in hierarchical models, computer access was treated as a school-level measure throughout these analyses.[4] That is, the computer access variables from the original NAEP dataset were aggregated to the school level and included in the school-level dataset. This is a conceptually sensible decision and allows for more appropriate hierarchical modeling, including the ability to understand the relationship between student computer use and computer access within the school. As opposed to the previous two tables with data on student-level computer access, the following two tables depict the statistics on computer access in the school-level dataset.

[4] One exception is the tests of difference in the preliminary analyses section, where the NAEP Data Tool is used and computer access is treated as a student-level variable.

Table 4
Access to Computers among 4th Graders (school-level), 2000

                                                       YES                NO
Access to Computers                              Proportion   SE   Proportion   SE
Are computers available all the time in the
  classrooms?                                      .827      .011     .173     .011
Are computers available in a computer lab?         .748      .014     .252     .014
Are computers available to the classrooms
  when needed?                                     .222      .012     .778     .012

Table 5
Frequency of Computer Access Points among 4th Graders (school-level), 2000

Computer Access Points   Proportion   SE
0                        .011         .004
1                        .349         .015
2                        .471         .015
3                        .169         .010

Thus, combining Tables 4 and 5, 33% of 4th grade students are served in the 35% of schools that have a single access point to computers. Also, 48% of 4th grade students are in the 47% of schools that have two access points to computers. One percent of students and schools have no computer access.
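
As an illustration of how the access-points composite and its school-level aggregation can be constructed, the sketch below uses hypothetical column names (school_id, comp_in_class, comp_in_lab, comp_when_needed); the actual NAEP variable names differ.

```python
import pandas as pd

# Hypothetical column names; each item is coded 1 = "yes", 0 = "no" on the
# school questionnaire and is attached to every student record in that school.
students = pd.DataFrame({
    "school_id":        [101, 101, 102, 103],
    "comp_in_class":    [1,   1,   1,   0],
    "comp_in_lab":      [1,   1,   0,   1],
    "comp_when_needed": [0,   0,   1,   0],
})

# Composite measure: number of "yes" responses = computer access points (0-3).
items = ["comp_in_class", "comp_in_lab", "comp_when_needed"]
students["access_points"] = students[items].sum(axis=1)

# Aggregate to the school level: because the items come from the school
# questionnaire, every student in a school shares the same value, so taking
# the first record per school yields the school-level dataset used at level two.
schools = students.groupby("school_id", as_index=False)["access_points"].first()
print(schools)
```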


Independent Variables

The table in Appendix A also includes descriptions of the independent variables. This section briefly describes the key independent variables.

Student-level independent variables. The NAEP database includes a student-level race variable with six categories (White, Black, Hispanic, Asian, American Indian, and Other). However, with hierarchical linear modeling, there is a need to be parsimonious with respect to the number of variables included at lower levels (levels one and two in this case). It is possible, for example, that the regression coefficient for every level-one predictor could be modeled as a dependent variable predicted by each level-two predictor. So to include a dummy-coded variable for all (short of the one reference group) of the racial groups would create a less than parsimonious situation. Furthermore, there is reason to believe that there are differences in computer use in schools between Caucasian, African American, and Latina/o students. Krueger (2000) showed that in elementary schools in 1997, 80.3% of Caucasian students reported using computers at school compared to 67.3% of African American students and 62.8% of Latina/o students. However, there is limited data on students of other racial backgrounds. For this analysis, therefore, the interest is in digital equity in education among only Caucasian, African American and Latina/o students. Also, the proxy variable for student socioeconomic status is whether or not the student is eligible for the free or reduced-price lunch program, a common proxy for family income.

School-level independent variables. Much like at the student level, the interest at the school level is in the demographic composition. To model equity in computer access (a school-level variable), for example, data on the racial composition and urbanicity of the school are included.

State-level independent variables. To provide insight into the education technology policies of the 50 states, the Milken Exchange on Educational Technology (MEET) conducted the state-by-state education technology policy survey described above, surveying and interviewing the state education technology director[5] (or a designated representative) in each of the 50 states in the summer of 1998. The survey consisted of a 75-minute telephone interview and a nine-page questionnaire, and the data collection resulted in an extraordinary amount of data documenting the differing approaches taken by state education agencies with respect to educational technology. The focus of this inquiry is digital equity in education, and the MEET survey provides relevant information on state-level policies. Specifically, there are four categories of education technology-related policies that might be related to digital equity in education: credentialing requirements, student standards, professional development, and funding.

[5] According to Lemke & Shaw (1999), "[t]he state technology director is an individual who has been identified by the state superintendent of education as the chief manager of state education technology programs in the public schools."

SPECIAL METHODOLOGICAL ISSUES

HLM and Non-Linear Analyses

The two outcome variables, computer access and computer use, are categorical variables. The standard hierarchical linear model (HLM) and other basic multilevel models are only appropriate for two- and three-level data where the dependent variable and the residuals at each level are normally distributed. This assumption is not usually a problem when the dependent variable at level 1 (the student level) is continuous. Even where the continuous outcome is highly skewed, the data can be transformed to make the distribution of level-1 random effects approximately normal. The use of the standard level-1 model in cases where the outcome variable is binary or categorical (as in this case), however, is inappropriate for three reasons. First, given the predicted value of the outcome, the level-1 residual can only assume a limited number of values, and therefore cannot be normally distributed. Second, the level-1 residual cannot have homogeneous variance. Instead, the variance of the residual depends on the predicted value. Third, the standard level-1 model does not constrain the predicted value of the level-1 outcome. However, where the possible values of the outcome variable are, in fact, limited, predicted values need to be constrained appropriately. Without this constraint, the effect sizes estimated by the model are generally uninterpretable (Raudenbush, Bryk, Cheong, & Congdon, 2001).


Fortunately, within the HLM software program, it is possible to specify a nonlinear analysis appropriate for binary or categorical (ordinal or multinomial) data. The approach is "a direct extension of the generalized linear model of McCullagh & Nelder (1989) to the case of hierarchical data" (Raudenbush, Bryk, Cheong, & Congdon, 2001, p. 113).[6] Thus, the approach is referred to as hierarchical generalized linear modeling (HGLM). The computer use variable could be considered multinomial or ordinal. Presumably, though, the survey item was developed with an ordered categorical response set. That is, the response set would appear to be ordered from lowest (never or hardly ever) to highest (every day or almost every day). The computer access variable is ordinal as well. Therefore, ordinal HGLM was used.

[6] Until recently, estimation of categorical (multinomial and ordinal) models was only available for two-level data in the HLM software program. Additionally, the HLM software could not incorporate design weight variables for non-linear analyses. However, for these analyses, a beta version of an upgrade to the HLM software was made available by Dr. Yuk Fai Cheong of Emory University, one of the authors of the software. This software allows for non-linear analyses of three-level data and also allows for the incorporation of design weights in such analyses.

Missing Data

Of the 98,204 students in the original dataset, 7,111 students were systematically eliminated because their race was coded as something other than the three racial categories of interest for these analyses. In addition, 12,037 students are missing data on the computer use variable. Since that variable is one of the dependent variables, and given the number of cases missing those data, it was determined that it was most appropriate to eliminate any student from the dataset who was missing data on the computer use variable. Also, 7,728 students are missing data on the LUNCH variable. Those students were eliminated as well. Ultimately, any student with missing data on any variable to be included in the analyses was eliminated. In the end, the analyses included 70,382 students in 3,479 schools from 40 states. The deletion of the cases where there were missing data is not ideal, but the HLM software has proven particularly sensitive to missing data.

Table 6 compares the original sample to the final sample on the student-level variables. The percentage of students reporting the various computer use categories is identical. Additionally, any differences that exist on the student biosocial characteristics are minimal, and mostly result from having to eliminate data on students from California.[7]

Table 6
Comparison of original to final sample (percentage in each category)

Variable                    Original Sample (n=96,118)   Final Sample (n=70,382)
Computer Use
  Never or Hardly Ever              58.9                        58.4
  Once or Twice a Month              9.8                        10.1
  Once or Twice a Week              16.8                        16.9
  Almost Every Day                  14.5                        14.6
Sex
  Male                              51.1                        50.0
  Female                            48.9                        50.0
Race
  Caucasian                         69.1                        71.5
  African American                  16.7                        15.7
  Latina/o                          14.2                        12.8
SES (school lunch)
  Eligible                          43.9                        40.4
  Not Eligible                      56.1                        59.6

[7] The 2000 Mathematics State NAEP involved data collection from 101,764 4th grade students from 4,239 public schools in 47 jurisdictions (states, territories and DoDEA/DoDDS schools). The nine states that did not participate in the 2000 NAEP are: Alaska, Colorado, Delaware, Florida, New Hampshire, New Jersey, Pennsylvania, South Dakota, and Washington. Additionally, representatives from California provided very incomplete data for the Milken Exchange Technology Survey. There was simply too much missing state-level data for California, so the students and schools from that state in the NAEP dataset have been excluded. Thus, this study uses data from 40 states.
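
The case-deletion steps described above can be sketched as a simple filtering routine. Column names such as race, computer_use, lunch, and state are placeholders rather than the actual NAEP field names.

```python
import pandas as pd

# Sketch of the listwise-deletion steps described in the Missing Data section;
# all column names and codes here are illustrative placeholders.
def build_analytic_sample(naep: pd.DataFrame) -> pd.DataFrame:
    keep_races = {"Caucasian", "African American", "Latina/o"}
    sample = naep[naep["race"].isin(keep_races)]       # drop other race codes
    sample = sample.dropna(subset=["computer_use"])     # missing dependent variable
    sample = sample.dropna(subset=["lunch"])            # missing SES proxy
    sample = sample[sample["state"] != "CA"]            # incomplete state policy data
    return sample.dropna()                              # listwise deletion on the rest
```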


Weighting of Data

Part Three of the Data Companion that accompanies the 2000 mathematics assessment secondary-use data files begins as follows: "Standard statistical procedures should not be applied to the NAEP data without modification because the special properties of the data affect the validity of conventional techniques of statistical inference" (p. 29). For this research, to ensure accurate standard errors, sampling weights were used. In particular, for school-level data, a school weight was computed.[8] For the hierarchical linear modeling, the school sampling weights are used, but no special procedures are necessary to estimate the sampling variability.[9] Multilevel modeling, by its very nature, takes the clustering or dependence of the observations into account by partitioning the variance by the different clusters. Said differently, HLM takes a model-based approach to estimating sampling variability.

[8] A full school weight was not provided with the 2000 NAEP data. In e-mail conversations with Al Rogers, Frank Jenkins, and Bruce Kaplan of ETS, and Alex Sedlacek and Keith Rust of NCES, it was determined that an appropriate school weight would consist of the product of five variables in the dataset: the school base weight (SCHBWT), the school trimming factor (SCHTFC), the school-level subject factor (SCHSFC), the collapsed school non-response cell (CSCNRC), and the school non-response adjustment factor (SCHNRF).

[9] Because the clustering or dependence of the observations is taken into account in HLM, the students can be treated as a random sample within the school. Thus, only school weights and not student weights are used in the multilevel modeling.
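
A minimal sketch of the school-weight calculation described in footnote 8, assuming the five components are available as columns with the names given there; this is an illustration, not official NCES or ETS code.

```python
import pandas as pd

# Full school weight as the product of the five components named in footnote 8.
WEIGHT_COMPONENTS = ["SCHBWT", "SCHTFC", "SCHSFC", "CSCNRC", "SCHNRF"]

def add_school_weight(schools: pd.DataFrame) -> pd.DataFrame:
    schools = schools.copy()
    # Row-wise product of the five weighting factors.
    schools["SCHOOL_WT"] = schools[WEIGHT_COMPONENTS].prod(axis=1)
    return schools
```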


Findings

Preliminary analyses (including tests of difference and single-level ordinal logistic regression) suggested that there are likely significant differences in computer access and computer use among level-1 independent variables. The question is, however, whether or not the presence and magnitude of those differences remain once access and use are modeled as multilevel phenomena. Further, the second research question guiding these analyses is about the relationship between computer access and computer use. Finally, the third research question has to do with whether or not state-level policies have an impact on computer access and computer use. All of these questions were addressed simultaneously through multilevel modeling.

HIERARCHICAL LINEAR MODELS: RANDOM INTERCEPTS, FIXED SLOPES

In this first set of models for both computer access and computer use, the independent variables are considered fixed and are centered around their grand mean. As per the first research question and the operational definition of digital equity, the first goal is simply to determine whether there are differences on average in computer access among schools of certain socio-demographic characteristics and differences in computer use among students of different demographic categories. Analysis of student and school demographic variables as fixed effects is, therefore, sufficient and appropriate for the purposes of this research.

Also, in this first set of models, all predictors are centered around their grand mean. As a general rule of thumb in multilevel modeling, predictors that are fixed are grand-mean centered. In this way, the level-one intercept is adjusted for differences between groups. While Kreft, De Leeuw and Aiken (1995) have demonstrated that there is no statistical preference between grand-mean centering and using raw metrics for level-one covariates, since the covariates are treated as having common effects across groups, it makes theoretical sense to adjust the intercepts for group differences. Further, even dummy-coded covariates are grand-mean centered. "Although it may seem strange at first to center a Level-1 dummy variable, this is appropriate and often quite useful" (Bryk & Raudenbush, 1992, p. 28).
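
Grand-mean centering itself is a one-line transformation; the sketch below shows it for continuous and dummy-coded predictors alike. The column names are placeholders, not the variable names used in the analyses.

```python
import pandas as pd

# Grand-mean centering: subtract each predictor's overall (grand) mean so the
# intercept is interpreted at the average of the predictors. A centered dummy
# variable becomes the deviation from the overall proportion coded 1.
def grand_mean_center(df: pd.DataFrame, cols: list[str]) -> pd.DataFrame:
    out = df.copy()
    for col in cols:
        out[col + "_c"] = out[col] - out[col].mean()
    return out

# e.g., grand_mean_center(students, ["female", "lunch", "pct_black"])
```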


Computer Access Multilevel Model: Summary of Results[10]

As per the discussion in the model specification section above, three sets of two-level models were run with computer access as the dependent variable. The models, the results of which are summarized in Tables 7 and 8, are numbered and described as follows: Model 1, the fully unconditional model; Model 2, the basic model with random intercepts and fixed slopes; and Model 3, the fully specified model with random intercepts and fixed slopes.

[10] The complete output, along with the detailed specification of the models, can be provided by the author. Also, in all of the tables that report results from the hierarchical linear models, the sign of all of the coefficients has been reversed from the output generated by the HLM software. For ordinal HGLM, the HLM software generates signs for the coefficients that are technically not incorrect, but they do not indicate the direction of the relationship between the predictors and the dependent variable in the way that those familiar with linear regression, for example, would expect to see. So, in the tables that follow, a coefficient with a negative sign indicates a negative relationship between the predictor and the dependent variable, and a positive sign indicates a positive relationship. This is true for predictors at all levels, though the signs of the intercepts and threshold parameters remain the same.

Table 7
Computer Access, Two-level HGLM results, fixed effects

                                            Model 1             Model 2             Model 3
Variable                                    B        SE         B        SE         B        SE
Level 1: Schools
  Intercept                                -4.787**  0.192     -4.806**  0.192     -4.809**  0.192
  Urban                                                        -0.079    0.091     -0.078    0.091
  Rural                                                        -0.320**  0.081     -0.318**  0.081
  % of students African American                               -0.724**  0.154     -0.719**  0.155
  % of students Latina/o                                       -0.354    0.250     -0.388    0.252
Level 2: States
  Funding (technology appropriations
    per student)                                                                   -0.003    0.003
  Funding (direct distributions)                                                    0.173    0.181
  Funding (competitive grants)                                                      0.142    0.162
Threshold Parameters
  Threshold 1                               4.090**  0.180      4.106**  0.180      4.108**  0.180
  Threshold 2                               6.315**  0.185      6.349**  0.185      6.353**  0.185
* p < .10; ** p < .05; *** p < .01

Table 8
Computer Access, Two-level HGLM results, random effects

                        Model 1                  Model 2                  Model 3
                        Var. Comp  χ² (df=39)    Var. Comp  χ² (df=39)    Var. Comp  χ² (df=39)
State-level intercept   0.156      189.903**     0.149      182.014**     0.157      175.535**
* p < .10; ** p < .05; *** p < .01

For model 1 (fully unconditional), the final estimation of the fixed effects indicates that the mean values of the intercept and threshold parameters differ significantly from zero. Further, the variance component of 0.156 can be used to compute the intraclass correlation (ICC). For the ordinal HGLM, the ICC, calculated as τ / (τ + π²/3), is 0.045. The ICC, in this case, represents the proportion of variance in the outcome (computer access) that exists between level-two units (states). Thus, 4.5% of the variance in computer access is between the states; the vast majority of the variance in computer access exists within the states. The small proportion of between-state variation suggests that factors at the state level are not likely to be powerful correlates of computer access in schools.
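
The ICC calculation for a logit-link model can be checked directly, since the level-1 residual variance is fixed at π²/3. The small function below reproduces the value reported for the fully unconditional computer access model.

```python
import math

# ICC for a logit-link multilevel model: the level-1 residual variance is
# fixed at pi^2 / 3, so ICC = tau / (tau + pi^2 / 3).
def icc_logit(tau: float) -> float:
    return tau / (tau + math.pi ** 2 / 3)

# Fully unconditional computer access model: tau = 0.156 -> ICC ~ 0.045,
# i.e., about 4.5% of the variance lies between states.
print(round(icc_logit(0.156), 3))   # 0.045
```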


Digital Equity in Education 17 unusual that the unexplained between state variation would rise from model 1 to model 2. That is, one would think that by adding fixed effects to a model, the result would be a decrease in the levelone residual variance (i.e. an increase in level-two residual variance). However, this is impossible as this residual variance is fixed so that instead, the estimates of the other regression coefficients will tend to become larger in absolute value and the in tercept variance (and slope variances, if any) will also tend to become larger (Sn ijders & Bosker, 1999, p. 229). In general, though, the results for the random effects suggest that the predictors at either level do not contribute much in accounting for th e total variance in computer access. Further, examining the coefficients in model 3, it is clear that the state funding variables demonstrate no significant relationship to computer access at th e school level. The relationship between the level-1 predictors and computer access remain th e same as in the basic model, however. Computer Use Model: Summary of Results As per the discussion in the model specificati on section above, four sets of three-level models were run with computer use as the dependent variable. The models, the results of which are summarized in tables 9 and 10, are numbered and described as follows: Model 1, the fully unconditional model; Model 2, the basic model with random intercepts and fixed slopes (level-one predictors only); Model 3, the basic model with random intercepts and fixed slopes (level-one and level-two predictors only); and Model 4, the fully specified model with random intercepts and fixed slopes Random effects of the intercepts. In a three-level HGLM, there are two sets of random effects for each model. The first variance component refers to the level-two intercept, and the second variance component refers to level-three intercept. Therefore, for the fully unconditional model (model 1), the level-two variance component of .599 means that the ICC is .154. This ICC represents the proportion of variance in the outco me (computer use) that exists between level two units (schools). Thus, 15.4% of the variance in computer use is between the schools. Though the chi-square test yields a chi-square statistic that is statistically significant, the level-three variance component (.02423) is very small, and yields an I CC of .007. This means that less than 1% of the total variance in computer use exists between the states. Combining this information, it can be concluded that the vast majority of the total variance in computer use exists within schools. Certainly, the small proportion of between-state variation suggests that factors at the state level are not likely to be powerful correlates of comp uter use in schools. In fact, with an ICC of less than .05, it is arguable that it is not necessary or meaningful to include a third level in multilevel modeling of computer use. However, a decision was made to proceed with the three-level HGLM to carry out the testing of a priori theories of the effects of state-level predictors. Swanson and Stevenson (2002) made a similar decision when an alyzing the impact of state policies on standardsbased classroom instruction. In their analyses, a fully unconditional model showed that only 3.6% of the total variation in standards-based classroom instruction exists between states, but they continued with a three-level (classrooms within schools within states) model anyway. 
If theoretical arguments suggest that such effects might be present, the analyst should proceed with posing Level-2 models [and, by extension, level-3 models] for βqj... Again, evidence consistent with a null hypothesis does not mean it is true. (Bryk & Raudenbush, 1992, p. 203)
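As a quick numeric check of the variance partitioning described above, the sketch below applies the same logistic-link formula to the three-level computer use model. The variable names are mine, the values come from the text and Table 10, and small rounding differences from the published ICCs (.154 and .007) are to be expected.

    import math

    # Variance components for the fully unconditional three-level computer use model
    # (values as reported in the text and Table 10; names are illustrative, not the article's)
    level1_residual = math.pi ** 2 / 3   # level-1 residual fixed at pi^2/3 (~3.29) in a logit model
    between_schools = 0.599              # level-2 (school) intercept variance
    between_states = 0.02423             # level-3 (state) intercept variance

    total = level1_residual + between_schools + between_states

    icc_schools = between_schools / total   # proportion of variance between schools (~.15)
    icc_states = between_states / total     # proportion of variance between states (<.01)

    print(f"School-level ICC: {icc_schools:.3f}")   # roughly the .154 reported in the text
    print(f"State-level ICC:  {icc_states:.3f}")    # roughly the .007 reported in the text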


Table 9
Computer Use, Three-level HGLM results, fixed effects, models 1-4

Variable                                               Model 1            Model 2            Model 3            Model 4
Level 1: Student
  Intercept                                            0.418 (0.028)***   0.416 (0.029)***   0.374 (0.058)***   0.361 (0.114)***
  Sex                                                  --                 -0.071 (0.003)***  -0.071 (0.003)***  -0.071 (0.003)***
  Lunch                                                --                 -0.072 (0.004)***  -0.072 (0.004)***  -0.072 (0.005)***
  African American                                     --                 0.292 (0.007)**    0.288 (0.007)***   0.288 (0.007)***
  Latina/o                                             --                 0.127 (0.006)**    0.126 (0.006)***   0.126 (0.007)***
Level 2: School
  Computers in the classroom                           --                 --                 0.168 (0.035)***   0.165 (0.035)***
  Computers in a lab                                   --                 --                 0.003 (0.033)      0.007 (0.032)
  Computers on a mobile lab                            --                 --                 -0.012 (0.031)     -0.009 (0.031)
  Urban                                                --                 --                 0.039 (0.037)      0.044 (0.037)
  Rural                                                --                 --                 -0.036 (0.033)     -0.036 (0.033)
  % of students African American                       --                 --                 0.326 (0.063)***   0.298 (0.062)***
  % of students Latina/o                               --                 --                 0.011 (0.103)      0.001 (0.103)
Level 3: State
  Credentialing (pre-service teachers)                 --                 --                 --                 0.298 (0.066)***
  Credentialing (new administrators)                   --                 --                 --                 -0.195 (0.107)*
  Credentialing (current teachers)                     --                 --                 --                 0.079 (0.103)
  Funding (direct distributions)                       --                 --                 --                 0.097 (0.063)
  Funding (competitive grants)                         --                 --                 --                 0.135 (0.057)**
  Professional development (required % for PD)         --                 --                 --                 -0.035 (0.059)
  Funding (tech. appropriations per student)           --                 --                 --                 -0.001 (0.001)
  Professional development (% of state involvement)    --                 --                 --                 -0.151 (0.093)
  Student tech. standards, standalone                  --                 --                 --                 -0.109 (0.062)*
  Student tech. standards, integrated                  --                 --                 --                 -0.135 (0.055)**
  Student tech. standards, exit                        --                 --                 --                 -0.011 (0.071)
Threshold parameters
  Threshold 1-2                                        0.464 (0.001)***   0.464 (0.001)***   0.464 (0.001)***   0.464 (0.001)***
  Threshold 2-3                                        1.525 (0.002)***   1.526 (0.002)***   1.526 (0.002)***   1.526 (0.002)***
Note: Cells show B (SE). * p < .10; ** p < .05; *** p < .01

Table 10
Computer Use, Three-level HGLM results, random effects, models 1-4

          Model 1                     Model 2                     Model 3                     Model 4
Level     Var. Comp.  χ² (df = 39)    Var. Comp.  χ² (df = 39)    Var. Comp.  χ² (df = 39)    Var. Comp.  χ² (df = 39)
Level 2   0.599       48756.1**       0.589       46119.6**       0.577       42277.6**       0.577       50943.1**
Level 3   0.024       182.27**        0.026       194.76**        0.029       219.462**       0.021       168.58**
* p < .10; ** p < .05; *** p < .01


Fixed effects. For model 1 (fully unconditional), the final estimation of the fixed effects indicates that the mean values of the intercept and threshold parameters differ significantly from zero. Further, across all of the models, the effects of the individual predictors at all three levels are consistent. Specifically, at the student level, all four of the independent variables are statistically significant (i.e., p < .01) in all of the models. Further, the magnitude of those effects does not change from model 2 to model 4. In terms of practical significance, however, the coefficients for sex and lunch-program participation are very small (-0.07). In practical terms, those effects are so small as to be considered meaningless; the statistical significance is a result of the large number of students in the sample. Thus, it is safe to conclude that there are practically no differences in computer use between boys and girls or between students who are eligible for free or reduced-price lunch and those who are not. There are, however, differences in computer use by race. The coefficient for Latina/o students is consistently 0.13, and for African American students it is consistently 0.29. Thus, Latina/o students and African American students are significantly more likely than Caucasian students to fall into a higher use category on the dependent variable.

Interpretation of the coefficients of the level-2 and level-3 fixed effects is slightly more straightforward, since the formulae for the level-2 and level-3 models in ordinal HGLM are standard regression formulae. Further, in the random effects ANCOVA model, the coefficients for the level-2 and level-3 fixed effects can be interpreted as effects on the average intercept (adjusted for group differences), which in the ordinal model is the threshold between the first and second categories of the dependent variable. In the ordered logit model, the idea is that there is a continuous latent variable D (amount of computer use) that has various threshold points. A student's value on the observed dependent variable Y depends on whether or not she has crossed a particular threshold on the latent variable D. Thus, a significant positive value for a coefficient of a level-2 or level-3 fixed effect means that the predictor, on average, increases the threshold between the first and second categories of the dependent variable. The threshold values are used to determine the probabilities for each response category. Additionally, given the formulae for determining those probabilities, an increased threshold between the first and second categories of the dependent variable means that the probability of a student reporting the lowest category would increase. Finally, since the ordered logit model is a cumulative probability model, increasing the probability of the lowest category decreases the probability of a higher category. Therefore, a positive coefficient for a level-2 fixed effect actually decreases the probability that a student would fall into a higher use category, and vice versa for a negative coefficient.[11]
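To make the threshold logic above concrete, one standard way of writing the cumulative (ordered logit) model the passage describes is sketched below; the notation (η for the linear predictor, δ for a category threshold) is mine rather than the article's.

    \[
    D = \eta + \varepsilon, \qquad
    Y = c \;\text{ if }\; \delta_{c-1} < D \le \delta_{c},
    \qquad
    \Pr(Y \le c \mid \eta) = \frac{1}{1 + \exp\!\bigl[-(\delta_{c} - \eta)\bigr]}.
    \]

In this parameterization, a predictor that raises the first threshold δ1 raises Pr(Y ≤ 1), the probability of the lowest use category, which is why a positive level-2 or level-3 coefficient corresponds to less frequent computer use before the signs are reversed for presentation in the tables.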
Thus, the two fixed effects that prove statistically significant in relation to the intercept are the percentage of students in a school who are African American and the availability of computers all the time in the classroom. Given the negative coefficients in the HLM output, it can be said that the higher the percentage of students who are African American, the more likely a given student is to report a higher use category; that is, students in schools with higher percentages of African American students use computers more frequently. Additionally, students in schools where computers are available all the time in the classrooms (as opposed to, or in addition to, a laboratory) are more likely to use computers more frequently. Those effects are statistically significant and of a magnitude to be considered practically significant as well.

[11] Remember, however, that the signs from the HLM output have been reversed in the table to indicate the direction of the relationship between the predictors and the latent outcome variable. So, school-level predictors with positive coefficients in the table have a positive relationship with computer use generally.


Before discussing the level-3 fixed effects, it is worth reiterating that the ICC associated with the level-3 variance component (.007) is so small as to be almost entirely insignificant. Furthermore, with only 40 units at level 3, additional caution needs to be taken when interpreting level-3 estimates. Therefore, what follows is offered for discussion purposes only; likely all or almost all of the variance in computer use exists within states, and the effects of state policy variables on actual computer use are minimal at best.

Interpretation of level-3 effects is complicated, but the level-3 dependent variable is the mean level-2 intercept.[12] Therefore, level-3 fixed effects might be understood as the effects of level-3 independent variables on the average school effect on computer use. As with the level-2 fixed effects, positive coefficients for level-3 fixed effects suggest a negative relationship between the covariate and computer use, and vice versa.

As originally run, Model 4 included all 11 state policy variables. The results are displayed in Table 9. Five of the state-level covariates show up as statistically significant predictors. However, including 11 level-3 predictors in a model with only 40 level-3 units is likely neither efficient nor appropriate. As Bryk and Raudenbush (1992) suggest,

A common rule of thumb for a regression analysis is that one needs at least 10 observations for each predictor. The analogous rules for hierarchical models are a bit more complex. For predicting a level-2 outcome, for example, β0j, the number of observations is the number of level-2 units, J, and the conventional 10-observations rule can be applied against this count. (p. 211)

Thus, by extension, with 40 level-3 units and a single level-3 outcome, there should not be more than four predictors. Therefore, a three-level model was run with the four most promising state-level predictors (i.e., those state-level predictors in model 4 that were statistically significant at p < .10 and that had the largest coefficients). Tables 11 and 12 contain the results. Those tables show that, in the final fixed effects model, the level-3 variance component is the lowest of any of the models, so the fact that three of the four predictors demonstrate a statistically significant relationship to the dependent variable is surprising, if not meaningless. Regardless, it could be argued that in states where technology standards are integrated into subject-area standards, computer use might be lower. However, there may be increased computer use in states where pre-service teachers must meet technology-related requirements to receive their initial teaching credential and in states where earmarked state funds for K-12 education technology are distributed as competitive grants to school districts.

[12] Alternatively, the level-2 intercept might be thought of as the threshold between the lowest two categories of computer use for a school where all of the school-level covariates have a value of zero (i.e., a suburban school with no computers available in classrooms, in labs, or to be brought into classrooms, and with no African American or Latina/o students). That such a scenario is unrealistic demonstrates the complexity of interpreting the level-3 fixed effects.


Table 11
Computer Use, Three-level HGLM results, fixed effects, final model

Variable                                                             B (SE)
Level 1: Student
  Intercept                                                          0.429 (0.090)***
  Sex                                                                -0.071 (0.004)***
  Lunch                                                              -0.072 (0.005)***
  African American                                                   0.289 (0.007)***
  Latina/o                                                           0.126 (0.007)***
Level 2: School
  Computers in the classroom                                         0.167 (0.035)***
  Computers in a lab                                                 0.004 (0.032)
  Computers on a mobile lab                                          -0.010 (0.031)
  Urban                                                              0.039 (0.037)
  Rural                                                              -0.034 (0.032)
  % of students African American                                     0.310 (0.061)***
  % of students Latina/o                                             -0.001 (0.103)
Level 3: State
  Credentialing (pre-service teachers)                               0.264 (0.062)***
  Credentialing (new administrators)                                 -0.163 (0.086)
  Funding (competitive grants)                                       0.124 (0.052)**
  Student technology standards integrated into subject standards     -0.130 (0.054)**
Threshold parameters
  Threshold 1                                                        0.464 (0.001)***
  Threshold 2                                                        1.526 (0.002)***
Note: Cells show B (SE). * p < .10; ** p < .05; *** p < .01

Table 12
Computer Use, Three-level HGLM results, random effects, final model

Level                     Var. Comp.    χ² (df = 39)
Level 1 and Level 2       0.563         54664.369**
Level 3                   0.016         136.092**

HIERARCHICAL LINEAR MODELS: RANDOM INTERCEPTS, RANDOM SLOPES

The random intercept, fixed slopes models thus suggest that, on average, there are some inequalities in computer use that arise from student-level variables and some differences in computer access that arise from school-level variables. Those findings are substantively important and interesting. However, it is possible that the effects of those independent variables differ between higher-level units. For example, it is certainly possible that the amount of computer use for girls relative to boys in one school might be different in another school. The multilevel modeling approach that allows us to test those possibilities is to allow the coefficients of independent variables to vary randomly across higher-level units.


A test of the variance component generated for any independent variable included as a random effect determines whether or not the predictor does, in fact, vary randomly across higher-level units.

Extending the random intercept, fixed slopes models to include random slopes is fairly straightforward. However, estimation algorithms for these models are less stable than for the standard hierarchical model, and it is not uncommon that converging parameter estimates cannot be obtained for models with even a single random slope (Snijders & Bosker, 1999, p. 231). Therefore, a cautious approach was taken. As a first step, all level-1 variables were set to vary randomly, but the slope coefficients were not predicted by any higher-level independent variables (the predictors for the random intercepts, however, were maintained). Then, for the computer use model, all level-2 variables were set to vary randomly, but none of the slope coefficients were predicted by any state-level predictors.

These initial models showed that all student-level variables did, in fact, vary randomly across schools. However, converging parameter estimates were never obtained when any of the school-level variables (in either the computer access model or the computer use model) was allowed to vary randomly.[13] Further, whereas it is common wisdom that different kinds of schools serve certain groups of students differently (e.g., academically at-risk students tend to achieve at higher levels in smaller schools), there is no a priori reason to assume that the effect of a school being located in an urban area, for example, is any different in one state than in another. As a result, all school-level predictors remain fixed in all multilevel models. So, for example, the effect of a school being in a rural area is constrained to be the same across all the states. Thus, the random intercept, fixed slopes model for computer access stands as the final analysis of computer access. For computer use, however, a series of random intercept, random slopes models were run with student-level predictors allowed to vary randomly and school-level predictors constrained to be fixed.

More specifically, for each of the student-level predictors, a model was run wherein the same independent variables used to predict the random intercept were used to predict the random slope of that student-level predictor. In each of those models, the other student-level predictors were allowed to vary randomly across schools, but no attempt was made to predict those slopes using school-level variables. For all four of those models, the statistically significant predictors of the slope coefficient were noted for inclusion in a final model.

Thus, Tables 13 and 14 include the results for two random intercept, random slopes models of computer use. Model 1 includes random slopes for each of the student-level predictors, but no school-level predictors for the slope coefficients. Model 2 includes random slopes for each of the student-level predictors, and the coefficients for each of those slopes are predicted by the school-level variables that demonstrated statistical significance in the preliminary models run separately for each student-level slope coefficient. In both cases, the random intercepts are predicted by the same school- and state-level predictors used in the final random-intercept, fixed-effects models described in Tables 11 and 12.

[13] The default setting in the HLM software was 100 macro iterations. This was manually extended to 150 macro iterations, but the models still did not fully converge.
As demonstrated in Table 14, after including the student-level predictors as random effects in model 1, the residual variances of the coefficients of those predictors differ significantly from zero. Thus, the results for model 1 suggest, at a minimum, that the effects of the student-level predictors vary randomly across schools. That is, the magnitude of the effect of race, sex, or SES is different in different schools. The purpose of model 2 was to see what, if any, school-level factors moderate or exacerbate those differences.
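A minimal sketch of the slopes-as-outcomes specification used in model 2, written for a single student-level predictor (sex) and one school-level predictor of its slope (classroom computer availability); the γ and u notation is generic HLM notation, not the article's own symbols.

    \[
    \eta_{ij} = \beta_{0j} + \beta_{1j}\,\mathrm{SEX}_{ij} + \cdots
    \]
    \[
    \beta_{0j} = \gamma_{00} + \gamma_{01}\,\mathrm{COMP\_CL}_{j} + \cdots + u_{0j},
    \qquad
    \beta_{1j} = \gamma_{10} + \gamma_{11}\,\mathrm{COMP\_CL}_{j} + u_{1j}
    \]

A non-zero variance for u1j is what Table 14 tests, and a significant γ11 is the kind of cross-level relationship reported in Table 13, where classroom computer availability predicts the size of the within-school sex difference in computer use.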


Table 13
Computer Use, three-level random intercept, random slopes HGLM results, fixed effects

Variable                                               Model 1             Model 2
Level 1: Student
  Intercept                                            0.4642 (0.161)***   0.673 (0.029)***
Level 2: School
  For intercept
    Computers in the classroom                         0.194 (0.057)***    0.027 (0.071)
    Computers in a lab                                 0.106 (0.053)**     0.110 (0.052)**
    Computers on a mobile lab                          0.028 (0.049)       0.033 (0.049)
    Urban                                              0.0721 (0.059)      0.017 (0.065)
    Rural                                              -0.106 (0.052)**    -0.117 (0.052)**
    % of students African American                     0.339 (0.115)***    -0.081 (0.138)
    % of students Latina/o                             0.006 (0.192)       -0.157 (0.202)
  For sex slope
    Intercept                                          0.099 (0.026)***    0.247 (0.058)***
    Computers in the classroom                         --                  0.198 (0.069)***
  For lunch-program slope
    Intercept                                          0.171 (0.038)***    0.080 (0.117)
    Computers in the classroom                         --                  0.217 (0.097)**
    Urban                                              --                  0.206 (0.083)**
    % of students African American                     --                  0.430 (0.162)***
  For % African American slope
    Intercept                                          -0.020 (0.066)      -0.176 (0.172)
    Computers in the classroom                         --                  0.249 (0.172)
    % of students African American                     --                  0.933 (0.228)***
  For % Latina/o slope
    Intercept                                          0.336 (0.064)***    -0.247 (0.208)
    % of students Latina/o                             --                  1.002 (0.343)***
Level 3: State
  Credentialing (pre-service teachers)                 0.353 (0.088)***    0.351 (0.085)***
  Credentialing (new administrators)                   -0.232 (0.123)*     -0.227 (0.120)*
  Funding (competitive grants)                         0.154 (0.075)**     0.145 (0.073)*
  Student technology standards integrated into
    subject standards                                  -0.167 (0.077)**    -0.170 (0.074)**
Threshold parameters
  Threshold 1                                          0.539 (0.001)***    0.539 (0.001)***
  Threshold 2                                          1.751 (0.003)***    1.751 (0.003)***
Note: Cells show B (SE). * p < .10; ** p < .05; *** p < .01


Table 14
Computer Use, three-level random intercept, random slope HGLM results, random effects

                                    Model 1                        Model 2
Level and variable                  Var. Comp.   χ² (df = 39)      Var. Comp.   χ² (df = 39)
Levels 1 and 2
  Intercept 1                       2.158***     28812.566         2.144***     27356.540
  Sex slope                         2.244***     24510.877         2.237***     24623.071
  Lunch-program slope               4.264***     21327.864         4.252***     21473.589
  % African American slope          7.663***     20313.443         7.673***     20863.710
  % Latina/o slope                  9.141***     21505.219         9.163***     21667.989
Level 3
  Intercept 1 / Intercept 2         0.029***     103.997           0.026***     97.226
* p < .10; ** p < .05; *** p < .01

Four of the school-level predictors that are included as fixed effects in model 1 demonstrate a statistically significant relationship to the dependent variable (the average threshold 1 for computer use across all schools). Specifically, and stated in general terms with respect to the latent continuous computer use variable, computer use is likely to be higher in schools where computers are available all the time in the classrooms, where computers are available in labs, and in schools with higher percentages of African American students. Computer use is likely to be lower in schools in rural areas. Also, three of the state-level predictors are significant (p < .05). In particular, and again writing in general terms, computer use is likely to be higher in states where pre-service teachers have to meet technology-related requirements in order to receive their initial teaching credential and in states where earmarked state funds for K-12 education technology are distributed as competitive grants to school districts. Computer use is likely to be lower, however, in states where student technology standards are integrated into subject-area standards.

For model 2, the slopes of the student-level predictors were treated as outcomes at level 2 and predicted by varying school-level factors. To repeat, the school-level variables used to predict the individual slopes were selected from preliminary analyses wherein, for each of the student-level predictors, a model was run with the same independent variables used to predict the random intercept. Those school-level variables that proved statistically significant in those preliminary analyses were noted for inclusion in model 2. As Table 14 suggests, the variance components for all of the random effects are virtually identical to those from model 1, and so the residual variance of each of the random effects is different from zero. The effect of each of the student-level variables varies randomly between schools.[14] Thus, with respect to computer use, different schools treat girls, African American students, Latina/o students, and students eligible for free or reduced-price lunch differently. However, as shown in Table 13, the intercepts of the African American, Latina/o, and lunch-program slope equations do not differ significantly from zero. This might suggest that for these three variables, though the effects vary randomly between schools, on average the effects are not different from zero. With respect to sex, however, it might be said that, on average, girls are more likely to demonstrate higher levels of computer use.

[14] It is worth noting that only the intercept of the sex slope equation is significantly different from zero. This might suggest that for the other three variables, though the effect varies randomly between schools, on average the effect is not different from zero.
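One conventional way to convey how much a randomly varying slope differs across schools (a general HLM summary, not a calculation reported in the article) is the 95% plausible-value range for the slope:

    \[
    \gamma_{10} \;\pm\; 1.96\,\sqrt{\tau_{11}},
    \]

where γ10 is the average slope and τ11 is its estimated between-school variance component. A wide range around a near-zero average is consistent with the pattern described above, in which effects differ across schools even when the average effect is indistinguishable from zero.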


By further examining the results for model 2 in Table 13, we see that school-level variables do have a relationship to the slopes of the student-level predictors. In particular, having computers in the classroom increases the slope for sex and for lunch-program participation. So, having computers available all the time in the classroom increases the effect of being a girl and the effect of being eligible for free or reduced-price lunch. The effect of being eligible for free or reduced-price lunch is also increased in urban schools and in schools with higher percentages of African American students. Finally, the effect of being African American is significantly increased in schools with greater percentages of African American students, and the effect of being Latina/o is significantly greater in schools with greater percentages of Latina/o students. That is, for African American and Latina/o students, whatever differences in computer use are attributable to race are increased when those students are in schools with higher percentages of students of the same race.

Finally, by including school-level variables as predictors of the slopes-as-outcomes, the impact of the various school-level predictors on average computer use changed from model 1 to model 2. In particular, as shown in Table 13, average computer use is still lower in schools in rural areas. The effect of having computers available all the time in the classrooms has disappeared, but having computers available in a lab increases the likelihood of higher levels of computer use. The state effects stay the same from model 1 to model 2.

Discussion

SUMMARY OF FINDINGS

With respect to computer access, the data supported a few descriptive conclusions, including that 83% of schools serving 4th grade students have computers available all the time in the classrooms, and 75% of schools serving 4th grade students have computers available in a computer lab. Only 1% of all schools report no computer access. In terms of how school-level computer access varies across schools of different populations and demographics, the multilevel models suggested that schools in rural areas and schools with higher percentages of African American students are more likely to have lower levels of computer access.

With respect to computer use, more than half (58%) of the 4th grade students report never or hardly ever using computers for math, whereas almost 32% of the students report using computers for math at least once a week. The remaining 10% use computers for math on a monthly basis. Modeling those data as three-level models, the first conclusion reached was that the vast majority of the total variance (over 80%) in computer use exists within schools. Additionally, when the slopes (the coefficients of the student-level predictors) were allowed to vary randomly between schools, there were student-level differences that vary between schools and that were, to some degree, related to school characteristics. Specifically, girls and students eligible for free or reduced-price lunch were more likely to use computers more frequently when computers are available in the classroom. Also, students eligible for free or reduced-price lunch were more likely to use computers more frequently when their schools are in urban areas and/or have higher percentages of African American students.
Finally, African American and Latina/o students were more likely to use computers more frequently when they are in schools with greater percentages of students of their own race. Two school-level predictors had an impact on average (school-level) computer use. Specifically, average computer use is lower in schools in rural areas, but having computers available in a lab increases the likelihood of higher levels of computer use.


Finally, in terms of state policies, no more than 5% of the variance in computer access can be attributed to state factors, and less than 1% of the variance in computer use was between states. Yet the findings suggested that where student technology standards are integrated into subject-area standards, computer use was likely lower than in other states. However, in states where pre-service teachers must meet technology-related requirements to receive their teaching credential and in states where funds earmarked for technology are distributed as competitive grants, computer use was likely to be higher.

DISCUSSION OF FINDINGS

Digital Equity in Education: Differences in Computer Access and Computer Use

Computer access. The findings with respect to computer access could be viewed as both expected and surprising. They might be expected because it is not atypical to mention inequity in education in the same discussion with rural schools and with schools with high concentrations of minority students. For example, schools in rural areas tend to serve communities with relatively low property values. In those cases, and particularly where school revenues depend heavily on local property values, per-pupil expenditures are lower. Therefore, making an investment or a serious financial commitment to building a technological infrastructure is more difficult for schools in rural areas.

At the same time, however, there have been great efforts to ensure that divides in access to technology in schools do not come to mirror the divides that exist with respect to technology penetration into homes. In particular, the federal E-rate program, which began in 1996, was and continues to be a major undertaking to provide discounts on telecommunication services to schools and libraries serving disadvantaged students, especially in rural areas and poor communities. Though those discounts do not apply directly to the sort of technology access measured by the computer access variables used in these analyses, one of the expected consequences of providing discounts on telecommunications services was to free up funds for further infrastructure development such as hardware purchases. Over $7 billion was committed to schools and libraries in the first four years of the E-rate program, and E-rate per-student funding to school districts increased significantly with poverty; the most disadvantaged districts received almost ten times as much per student as the least disadvantaged (Dickard, 2002). Further, according to The E-Rate in America: A Tale of Four Cities, the E-Rate has freed up resources to pay for other elements of district educational technology programs such as computer and software purchases and teacher professional development (Carvin, 2000).

Ultimately, though, the findings from the analyses reported herein suggest that there may still be inequities in computer access. Furthermore, though the measures differ slightly, the findings on differences in computer access both confirm and disconfirm findings from a recent National Center for Education Statistics (2006) report. NCES reports that as of 2000, the student-to-computer ratio in schools with 50% or more minority students was 8.1, compared to 5.7 in schools with less than six percent minority enrollment. On the other hand, that same report states that the ratio was 5.0 in rural schools, a lower ratio than in urban schools (8.2), urban fringe schools (6.6), and schools in towns (6.2).
One possible explanation for the different findings for rural students lies in the different measures. It may be that rural schools offer fewer access points to computers (e.g. in a lab only, or only in the classroom), but given generally low enrollments in rural schools, the student-to-computer ratio is lower in those schools.


Computer use. With respect to the descriptive statistics on computer use generally, it is somewhat surprising that by the year 2000, almost 60% of all 4th grade students reported that they never or hardly ever use computers for math. Further, those percentages have not increased since the same question was asked of 4th graders during the administration of the 1996 NAEP. As Wenglinsky (1998) reported, in 1996 approximately 42% of African American students used computers for math at least once a week. In 2000, that statistic had dropped to approximately 39%. Only Latina/o students saw a slight increase in computer use between 1996 and 2000. Given the significant amount of resources that have been devoted to implementing and integrating technology into the teaching and learning process, and the tremendous growth in the educational technology industry, it is hard to believe that students were not using computers (at least for math) any more in 2000 than they did in 1996.

It is also both interesting and important to know that over 80% of the variance in computer use is within-school variance. In other words, the frequency with which 4th grade students use computers for math is mostly determined by factors within the school. These analyses only examined student characteristics as predictors of computer use, and some differences were noted. However, there is still a lot of unexplained variation in computer use. It has become increasingly clear that computer use in schools is very much a local, school-by-school matter. Whereas the multilevel models did not include a classroom level, one might imagine that a classroom-level intraclass correlation coefficient would demonstrate that most of the variance in computer use is between-classroom variance. That is, classroom and, more probably, teacher factors are likely most predictive of the frequency of computer use for individual students. Since the seminal work of Weatherly and Lipsky (1977), we know that teachers are street-level bureaucrats and the ultimate education policy makers. With respect to computer use in schools, we know, for example, that teachers inclined toward constructivist practices are more likely to incorporate computers into the teaching and learning process (Becker, 2000). Whereas teacher-level data do not necessarily exist and the NAEP sampling framework does not include selection by classrooms, Cheong et al. (2001) have shown that it is possible and appropriate in some cases to include teachers or classrooms as a second level in multilevel modeling of NAEP data. When and where relevant teacher-level data exist in other NAEP datasets, further research into the within-school correlates of computer use is certainly warranted.

With respect to differences in computer use across student demographic categories, Krueger (2000) and others would have us believe that African American and Hispanic students lag behind in using computers in school, though Wenglinsky (1998) determined that African American students were more likely to use computers more frequently than other students.
Also, Volman and van Eck's (2001) review of the literature on gender equity and information technology in education showed that whereas the research in the 1980s generally demonstrated that boys had greater access to computers at home, during extracurricular activities, and during school, when they spent more time in front of computers, that gap decreased considerably over the 1990s, though it is mostly boys who use the computers at school outside lessons (p. 616). This study does not definitively resolve the conflicting conclusions about differences in computer use by race, nor does it allow us to conclude that girls have surpassed boys with respect to frequency of computer use in schools. However, there are, in fact, differences in computer use across student demographic categories, and those differences vary from school to school. One response to the question of whether there are differences in computer use by race or sex might be, "it depends."

All of this suggests that educational equity in general, and digital equity in education more particularly, are complex constructs to operationalize and even more complicated to measure.


The analyses reported herein yield some findings that bring us closer to an understanding of digital equity, and the methodology and framework used advance our ability to measure digital equity as a construct. Unquestionably, however, further research on digital equity in education needs to be conducted.

The Relationships between Computer Access and Computer Use

When all level-1 and level-2 predictors were constrained to be common across all higher-level units, we saw that the availability of computers in classrooms, as opposed to a laboratory, would result in increased use of computers. However, when the modeling allowed the effects of individual student characteristics to vary between schools (i.e., allowing for the possibility that girls, for example, might be treated differently in one school than in another), having computers in classrooms and having computers in labs were both positively related to the likelihood of increased computer use. Then, finally, we see that while having computers in labs remains positively related to the likelihood of increased computer use, having computers available in the classrooms covaries more with equity effects than with overall computer use. That is to say, having computers available all the time in the classroom increases the effect of sex or the effect of eligibility for free or reduced-price lunch. Thus, since the average effect of being a girl is positively associated with computer use, for example, we might say that having computers available all the time in the classroom makes girls even more likely to be in a higher computer use category than they would be without the classroom computers.

A first glance into this relationship between computer access and computer use came from an evaluation of the West Virginia Basic Skills/Computer Education (BS/CE) program conducted by Mann et al. (1999). In that statewide initiative, the state funded the deployment of the equivalent of four computers per classroom, but schools had the option of placing those computers in classrooms or in a laboratory. In many cases, the decision to put the computers in the classrooms was made because there was literally no room for a computer laboratory. However, in the end, students in schools where computers were distributed to classrooms made greater achievement gains than those in schools with computer laboratories. The logical implication from those findings was that having computers in the classrooms allowed for increased time on the computers and, hence, improved achievement gains attributable to such computer use. Undoubtedly, there are classrooms with computers that do little more than collect dust (Cuban, 2001). Where computers are in the classroom, however, students can use the computers on a rotating basis every day. Teachers can integrate computers into the curriculum more regularly when they know that the computers are always available in the classroom, or they can use them for remediation or even as an incentive for students who complete other assignments early. As Norris, Sullivan, Poirot, and Soloway (2003) wrote of their study of computer access and computer use, "[b]y far, the single most significant predictor of technology use is the number of classroom computers" (p. 21).

The Impact of State-Level Policies on Computer Access and Computer Use

Generally, the range of variation in state education technology policy characteristics for which data were collected and which might be related to levels of computer access and/or computer use (professional credentialing, student technology standards, professional development, and funding) was significant at the time of the Milken Exchange survey of state technology directors.
That variation is empirically interesting and valuable for educational research and evaluation purposes.


Having 50 states taking different approaches to education can provide a powerful advantage in the long run if research and evaluation can identify successful and unsuccessful approaches. Identifying what works, in turn, can help states refine and adapt successful policies in a continual and ongoing process of improving education. Evaluating the effects of different levels of resources, different uses of resources, and changing state policies then becomes critical to improving schools and student outcomes. (Grissmer, Flanagan, Kawata, & Williamson, 2000, p. 2)

Unfortunately, the results of the variance partitioning processes revealed that very little of the variance in computer access or computer use is between-state variation. In fact, no more than 5% of the variance in computer access can be attributed to state factors, and less than 1% of the variance in computer use is state-level variation. Despite the movement towards increased authority and accountability for states in education, these findings about overall state effects are consistent with prior research. Policy enacted by distant federal and state actors has traditionally been viewed as a weak lever for promoting meaningful changes in core activities of schooling like classroom instruction and student learning (Swanson & Stevenson, 2002, p. 17).

In the arena of education technology, the weakness of the state as a policy lever is perhaps unsurprising given the data on the fiscal efforts states had made toward technology as of 1998. As of 1998, no state had committed more than 1.9% of its total education budget to education technology (Lemke & Shaw, 1999). In most states, technology appropriations represented less than one percent of all state spending on education. On a per-student basis, the average technology appropriation across the states was about $29 (with a low of zero in Wyoming and a high of $123 in Ohio) (Lemke & Shaw, 1999). It is unreasonable, therefore, to assume that state policy would account for much of the variance in computer access or computer use.

Yet, like Swanson and Stevenson (2002), despite little variance attributable to state characteristics, the multilevel models suggested that some of the state-level technology policy variables significantly predict some of the variance in computer use (though not in computer access). In particular, in states where student technology standards are integrated into subject-area standards, computer use might be lower than in other states. However, there may be increased computer use in states where pre-service teachers have to meet technology-related requirements in order to receive their initial teaching credential and in states where earmarked state funds for K-12 education technology are distributed as competitive grants to school districts.

The finding with respect to student technology standards is difficult to interpret in light of the absence of effects for the other standards variables. It would be simple if it could be said that by integrating technology standards into the core subject standards, they get lost and lack emphasis on the use of technology itself. However, states with standalone technology standards do not show any significant differences in computer use from other states. In other words, whereas there is a significant negative relationship between computer use and the integrated standards variable, it is not the case that technology standards should therefore be standalone standards.
Thus, the only conclusion to be drawn here is that if the policy goal is to increase computer use, developing student technology standards that are embedded within the core subject standards is not likely to be effective.

The other two significant state-level predictors make good sense. A common expectation of advocates of computer use in schools is that as older, more experienced teachers retire and are replaced by younger teachers who tend to be more comfortable with technology, computer use in schools will naturally increase. That argument is rational, but comfort with technology does not necessarily translate into the ability to integrate technology into the teaching and learning process.


Further, because new teachers tend to resort to pedagogy of the sort that was modeled to them when they were primary and secondary students, and because technology use in schools is still relatively new, new teachers at the turn of the 21st century likely did not experience teachers who integrated computers into the curriculum. Thus, it is logical that computer use is likely to increase in states where pre-service teachers have to meet technology-related requirements in order to receive their initial teaching credential.

Additionally, there are good reasons to imagine why computer use might be higher in states where earmarked state funds for K-12 education technology are distributed as competitive grants to school districts. Having schools and districts compete for necessarily limited funds forces decision-makers in those local education agencies to plan technology deployment and use in the schools. Also, funds flow to those schools and districts that have demonstrated both an intent and an interest in using technology and a well-developed plan for using the technology, not to schools that just happen to be at the end of the line for distribution.

The federal example of competitive versus distributed technology funds is the distinction between the now defunct Technology Literacy Challenge Fund (TLCF) program (essentially a block grant program) and the now defunct Technology Innovation Challenge Grant (TICG) program (essentially an inducement program). The TICG program required consortia of developers and local education agencies to propose innovative uses of technology in education and to compete for grants that were limited in number but large in size. Through the TLCF program, on the other hand, federal technology funds were distributed directly to each of the states, which in turn distributed funds to local education agencies on a mostly formula basis. Though there is little data on the overall effectiveness of TICG, Adelman et al. (2002) found no association between TLCF grant participation and teacher-reported frequency of student computer use. By the time federal funds filtered down through the states, districts, and schools and into the classrooms, the critical mass was lost, and impacts on teaching and learning were limited at best.

This is not to say, however, that inducements are the only effective class of policy instrument in the arena of education technology policy. The West Virginia Basic Skills/Computer Education (BS/CE) program still stands as one of the first successful comprehensive statewide technology programs. In policy terms, the BS/CE program is closer to a mandate than an inducement. Through BS/CE, a critical mass of computers was distributed to schools each year, and all the districts had to do was select one of two software packages chosen for their emphasis on teaching basic skills in math and literacy. Just-in-time professional development was offered to teachers and funded by the state. As a result, additional analyses of the NAEP data showed that students in West Virginia are significantly more likely to report using computers every day or almost every day than students in any other state.

CONCLUSION

The findings from this study about digital equity suggest that policy makers need to pay attention to some important differences that exist in computer access and computer use in schools. Thus, the NAACP Call for Action referenced in the introduction is, in fact, an appropriate call. However, given the findings about the overall impact of states on computer access and computer use, it is not clear that directing those recommendations to state education agencies and officials is warranted. Yet states have assumed a leadership role in educational policy in general, and in education technology policy in particular.
Therefore, it is no accident that the NAACP directs its recommendations mostly to the states and state education agencies. Hopefully, though, this study has provided some evidence from which both state and local education policy makers can begin to respond to the Call for Action with respect to digital equity in education.


Appendix

Table A-1
Summary of Variables

DEPENDENT VARIABLES

Level 1: Student-level
  Computer Use (COMP_USE): A student-level measure of the frequency of use of computers for math. Possible values: 1 = never or hardly ever, 2 = once or twice a month, 3 = once or twice a week, 4 = every day or almost every day.

Level 1: School-level
  Computer Access (COMP_ACC): A school-level measure of the number of access points to computers. Possible values: 0, 1, 2, and 3.

INDEPENDENT VARIABLES

Level 1: Student-level
  Sex (SEX): Coded as 0 = male, 1 = female.
  Race (RACE_AA): Dummy-coded as 0 = Caucasian or Latina/o, 1 = African American.
  Race (RACE_LA): Dummy-coded as 0 = Caucasian or African American, 1 = Latina/o.
  Socioeconomic Status (LUNCH): Eligibility for free or reduced-price lunch; proxy for socioeconomic status. Coded as 0 = not eligible, 1 = eligible.

Level 1 or 2: School-level
  Urbanicity (URBAN): Dummy-coded as 0 = suburban or rural, 1 = urban.
  Urbanicity (RURAL): Dummy-coded as 0 = suburban or urban, 1 = rural.
  Racial Composition (PCT_AA): Percent of students in the school who are African American.
  Racial Composition (PCT_LA): Percent of students in the school who are Latina/o.
  Computer Access (COMP_CL): Are computers available all the time in the classrooms? Dummy-coded as 0 = no, 1 = yes.
  Computer Access (COMP_LAB): Are computers available all the time in laboratories? Dummy-coded as 0 = no, 1 = yes.
  Computer Access (COMP_MOB): Are computers available to the classrooms when needed? Dummy-coded as 0 = no, 1 = yes.

Level 2 or 3: State-level
  Technology-related Credentialing Requirements (STANDTPR): Do pre-service teachers have to meet technology-related requirements in order to receive their initial teaching credentials? Coded as 0 = no, 1 = yes.


  Technology-related Credentialing Requirements (STANDADM): Do new administrators have to meet technology-related requirements in order to receive their initial credential? Coded as 0 = no, 1 = yes.
  Technology-related Credentialing Requirements (STANDTCH): Do practicing teachers have to meet technology-related continuing education requirements in order to maintain their teaching credential? Coded as 0 = no, 1 = yes.
  Student Standards for Education Technology (SEP_STD): Does the state have standalone student technology standards? Coded as 0 = no, 1 = yes.
  Student Standards for Education Technology (INT_STD): Does the state have student technology standards that are integrated into the core subject standards? Coded as 0 = no, 1 = yes.
  Student Standards for Education Technology (EXIT_STD): Does the state have technology-related exit requirements for high school graduates? Coded as 0 = no, 1 = yes.
  Professional Development for Education Technology (TECH_PD): The extent to which the state has engaged in the five possible professional development opportunities about which the MEET survey inquired. Computed as a fraction (a percentage), and therefore ranges from 0 to 1.
  Professional Development for Education Technology (PD_PCT): Does the state require districts to spend a certain percentage of state technology funds on professional development? Coded as 0 = no, 1 = yes.
  Education Technology Funding (FUND_AVG): A three-year (1996-1998) average of funding earmarked for education technology, on a per-student basis.
  Education Technology Funding (DIRECT_$): Are state technology funds distributed as direct allocations to districts? Coded as 0 = no, 1 = yes.
  Education Technology Funding (COMPET_$): Are state technology funds distributed through competitive grants? Coded as 0 = no, 1 = yes.
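As a small illustration of the dummy coding scheme summarized in Table A-1, the sketch below constructs the student-level indicators from hypothetical records; the data values are invented and pandas is assumed to be available.

    import pandas as pd

    # Hypothetical student records; the coding mirrors Table A-1 (values are invented)
    students = pd.DataFrame({
        "race": ["African American", "Latina/o", "Caucasian"],
        "sex": ["female", "male", "female"],
        "lunch": ["eligible", "not eligible", "eligible"],
    })

    # Dummy codes as defined in the appendix
    students["RACE_AA"] = (students["race"] == "African American").astype(int)  # 1 = African American
    students["RACE_LA"] = (students["race"] == "Latina/o").astype(int)          # 1 = Latina/o
    students["SEX"] = (students["sex"] == "female").astype(int)                 # 0 = male, 1 = female
    students["LUNCH"] = (students["lunch"] == "eligible").astype(int)           # proxy for SES

    print(students[["RACE_AA", "RACE_LA", "SEX", "LUNCH"]])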


References

Adelman, N., Donnelly, M., Dove, T., Tiffany-Morales, J., Wayne, A., & Zucker, A. (2002). The integrated studies of educational technology: Professional development and teachers' use of technology. Menlo Park, CA: SRI International.

Attewell, P. (2001). The first and second digital divides. Sociology of Education, 74(3), 252.

Becker, H. J. (2000). Findings from the teaching, learning, and computing survey: Is Larry Cuban right? Education Policy Analysis Archives, 8(51). Retrieved November 7, 2002, from http://epaa.asu.edu/epaa/v8n51

Bryk, A. S., & Raudenbush, S. W. (1992). Hierarchical linear models: Applications and data analysis methods. Newbury Park, CA: Sage Publications.

Carvin, A. (Ed.). (2000). The e-rate in America: A tale of four cities. Washington, DC: The Benton Foundation.

Cattagni, A., & Farris, E. (2001). Teacher use of computers and the Internet in public schools and classrooms: 1994 (Stats in Brief). Washington, DC: National Center for Education Statistics (NCES). Retrieved April 12, 2003, from http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2001071

Cheong, Y. F., Fotiu, R. P., & Raudenbush, S. W. (2001). Efficiency and robustness of alternative estimators for two- and three-level models: The case of NAEP. Journal of Educational and Behavioral Statistics, 26(4), 411.

Cuban, L. (2001). Oversold and underused: Computers in the classroom. Cambridge, MA: Harvard University Press.

Dayton, J. (2000). Recent litigation and its impact on the state-local power balance: Liberty and equity in governance, litigation, and the school finance policy debate. In N. D. Theobald & B. Malen (Eds.), Balancing local control and state responsibility for K-12 education (pp. 93). Larchmont, NY: Eye on Education.

Dickard, N. (Ed.). (2002). Great expectations: Leveraging America's investment in educational technology. Washington, DC: The Benton Foundation.

First, P. F., & Hart, Y. Y. (2002). Access to cyberspace: The new issue in educational justice. Journal of Law & Education, 31(4), 385.

Green, T. F. (1982). Excellence, equity, and equality. In L. S. Shulman & G. Sykes (Eds.), Handbook of teaching and policy (pp. 318). New York: Longman.

Grissmer, D., Flanagan, A., Kawata, J., & Williamson, S. (2000). Improving student achievement: What state NAEP test scores tell us. Santa Monica, CA: RAND.


Healy, J. M. (1999). Failure to connect: How computers affect our children's minds and what we can do about it. New York: Touchstone Books.

Idaho Council for Technology in Learning. (1999). The Idaho technology initiative: An accountability report to the Idaho legislature on the effects of monies spent through the Idaho Council for Technology in Learning. Boise, ID: State Division of Vocational Education, State Department of Education, Bureau of Technology Services.

Kreft, I. G. G., De Leeuw, J., & Aiken, L. S. (1995). The effect of different forms of centering in hierarchical linear models. Multivariate Behavioral Research, 30, 1.

Krueger, A. B. (2000). The digital divide in educating African American students and workers (Working Paper 434). Princeton, NJ: Princeton University, Industrial Relations Section. Retrieved August 16, 2002, from http://www.irs.princeton.edu/pubs/working_papers.html

Labaton, S. (2001, February 7). New F.C.C. chief would curb agency reach. The New York Times, p. B1.

Lee, J. (1998). State policy correlates of the achievement gap among racial and social groups. Studies in Educational Evaluation, 24(2), 137.

Lemke, C., & Shaw, S. (1999). Education technology policies of the 50 states: Facts and figures. Santa Monica, CA: Milken Exchange on Educational Technology, Milken Family Foundation.

Mann, D., Shakeshaft, C., Becker, J., & Kottkamp, R. (1999). West Virginia story: Achievement gains from a statewide technology initiative. Santa Monica, CA: Milken Family Foundation.

Milken Family Foundation. (1998). The Milken Exchange state-by-state education technology policy survey. Santa Barbara, CA: Milken Family Foundation.

Mitchell, D. E., & Encarnacion, D. J. (1984). Alternative state policy mechanisms for influencing school performance. Educational Researcher, 13(5), 4.

NAACP Education Department. (2001). NAACP call for action in education. Baltimore, MD: NAACP Education Department. Retrieved February 24, 2002, from http://www.naacp.org/work/education/educalltoactn2.pdf

National Center for Education Statistics. (2000). Stats in brief: Teacher use of computers and the Internet in public schools (NCES No. 2000-090). Washington, DC: U.S. Department of Education.

National Center for Education Statistics. (2006). Internet access in U.S. public schools and classrooms: 1994 (NCES No. 2007-020). Washington, DC: U.S. Department of Education.


Norris, C., Sullivan, T., Poirot, J., & Soloway, E. (2003). No access, no use, no impact: Snapshot surveys of educational technology in K-12. Journal of Research on Technology in Education, 36(1), 15.

O'Dwyer, L. M., Russell, M., & Bebell, D. J. (2004). Identifying teacher, school and district characteristics associated with elementary teachers' use of technology: A multilevel perspective. Education Policy Analysis Archives, 12(48). Retrieved November 1, 2004, from http://epaa.asu.edu/epaa/v12n48/

Oppenheimer, T. (1997, July). The computer delusion. The Atlantic Monthly, pp. 45.

Raudenbush, S., Bryk, A., Cheong, Y. F., & Congdon, R. (2001). HLM 5: Hierarchical linear and nonlinear modeling. Lincolnwood, IL: Scientific Software International, Inc.

Raudenbush, S. W., Fotiu, R. P., & Cheong, Y. F. (1998). Inequality of access to educational resources: A national report card for eighth-grade math. Educational Evaluation and Policy Analysis, 20(4), 253.

Snijders, T., & Bosker, R. (1999). Multilevel analysis: An introduction to basic and advanced multilevel modeling. Thousand Oaks, CA: Sage Publications.

Solomon, G., Allen, N. J., & Resta, P. (Eds.). (2003). Toward digital equity: Bridging the divide in education. Boston: Allyn and Bacon.

Stoll, C. (1999). High tech heretic: Why computers don't belong in the classroom and other reflections by a computer contrarian. New York: Doubleday.

Stone, D. (1997). Policy paradox: The art of political decision making. New York: W.W. Norton & Company, Inc.

Swanson, C. B., & Stevenson, D. L. (2002). Standards-based reform in practice: Evidence on state policy and classroom instruction from the NAEP state assessments. Educational Evaluation and Policy Analysis, 24(1), 1.

U.S. Department of Commerce. (2000). Falling through the net: Toward digital inclusion. Washington, DC: U.S. Department of Commerce.

U.S. Department of Education. (1999). Technology Literacy Challenge Fund: Report to Congress. Washington, DC: U.S. Department of Education, Office of Elementary and Secondary Education.

Volman, M., & van Eck, E. (2001). Gender equity and information technology in education: The second decade. Review of Educational Research, 71(4), 613.

Walzer, M. (1983). Spheres of justice. New York: Basic Books.

Weatherly, R., & Lipsky, M. (1977). Street-level bureaucrats and institutional innovation: Implementing special education reform. Harvard Educational Review, 47(2), 171.


Wenglinsky, H. (1998). Does it compute? The relationship between educational technology and student achievement in mathematics. Princeton, NJ: Educational Testing Service Policy Information Center.

Zhao, Y., & Conway, P. (2001). What's in, what's out: An analysis of state educational technology plans. Teachers College Record (ID No. 10717). Retrieved February 14, 2003, from http://www.tcrecord.org/Content.asp?ContentID=10717

About the Author

Jonathan D. Becker, J.D., Ph.D.
Hofstra University
Email: Jonathan.D.Becker@hofstra.edu

Jonathan Becker is an assistant professor in the Foundations, Leadership and Policy Studies Department of the School of Education and Allied Human Services at Hofstra University. Dr. Becker teaches courses in the politics of education, the social and legal contexts of education, and educational research methods. His research agenda includes continued study of digital equity in education, educational equity as a multilevel organizational phenomenon, and the intersection between educational leadership and educational technology. Dr. Becker is also the director of the doctoral program in Educational Leadership and Policy Studies.


EDUCATION POLICY ANALYSIS ARCHIVES
http://epaa.asu.edu

Editor: Sherman Dorn, University of South Florida
Production Assistant: Chris Murrell, Arizona State University

General questions about appropriateness of topics or particular articles may be addressed to the Editor, Sherman Dorn, epaa-editor@shermandorn.com.

Editorial Board

Noga Admon, Jessica Allen, Cheryl Aman, Michael W. Apple, David C. Berliner, Damian Betebenner, Robert Bickel, Robert Bifulco, Anne Black, Henry Braun, Nick Burbules, Marisa Cannata, Casey Cobb, Arnold Danzig, Linda Darling-Hammond, Chad d'Entremont, John Diamond, Amy Garrett Dikkers, Tara Donohue, Gunapala Edirisooriya, Camille Farrington, Gustavo Fischman, Chris Frey, Richard Garlikov, Misty Ginicola, Gene V Glass, Harvey Goldstein, Jake Gross, Hee Kyung Hong, Aimee Howley, Craig B. Howley, William Hunter, Jaekyung Lee, Benjamin Levin, Jennifer Lloyd, Sarah Lubienski, Susan Maller, Les McLean, Roslyn Arlin Mickelson, Heinrich Mintrop, Shereeza Mohammed, Michele Moses, Sharon L. Nichols, Sean Reardon, A.G. Rud, Lorrie Shepard, Ben Superfine, Cally Waite, John Weathers, Kevin Welner, Ed Wiley, Terrence G. Wiley, Kyo Yamashiro, Stuart Yeh


Archivos Analíticos de Políticas Educativas

Associate Editors: Gustavo E. Fischman & Pablo Gentili, Arizona State University & Universidade do Estado do Rio de Janeiro
Editorial Assistants: Rafael O. Serrano (ASUUCA) & Lucia Terra (UBC)

Hugo Aboites, UAM-Xochimilco, México
Armando Alcántara Santuario, CESU, México
Claudio Almonacid Avila, UMCE, Chile
Dalila Andrade de Oliveira, UFMG, Brasil
Alejandra Birgin, FLACSO-UBA, Argentina
Sigfredo Chiroque, IPP, Perú
Mariano Fernández Enguita, Universidad de Salamanca, España
Gaudêncio Frigotto, UERJ, Brasil
Roberto Leher, UFRJ, Brasil
Nilma Lino Gomes, UFMG, Brasil
Pia Lindquist Wong, CSUS, USA
María Loreto Egaña, PIIE, Chile
Alma Maldonado, University of Arizona, USA
José Felipe Martínez Fernández, UCLA, USA
Imanol Ordorika, IIE-UNAM, México
Vanilda Paiva, UERJ, Brasil
Miguel A. Pereyra, Universidad de Granada, España
Mónica Pini, UNSAM, Argentina
Romualdo Portella de Oliveira, Universidade de São Paulo, Brasil
Paula Razquin, UNESCO, Francia
José Ignacio Rivas Flores, Universidad de Málaga, España
Diana Rhoten, SSRC, USA
José Gimeno Sacristán, Universidad de Valencia, España
Daniel Schugurensky, UT-OISE, Canadá
Susan Street, CIESAS Occidente, México
Nelly P. Stromquist, USC, USA
Daniel Suárez, LPP-UBA, Argentina
Antonio Teodoro, Universidade Lusófona, Lisboa
Jurjo Torres Santomé, Universidad de la Coruña, España
Lílian do Valle, UERJ, Brasil

