
Respondent fatigue in self-report victim surveys


Material Information

Title:
Respondent fatigue in self-report victim surveys: examining a source of nonsampling error from three perspectives
Physical Description:
Book
Language:
English
Creator:
Hart, Timothy C
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla
Publication Date:
2006

Subjects

Subjects / Keywords:
National Crime Victimization Survey
Fatigue bias
Nonresponse
Survey research
Research methods
Dissertations, Academic -- Criminology -- Doctoral -- USF
Genre:
bibliography (marcgt)
theses (marcgt)
non-fiction (marcgt)

Notes

Abstract:
Survey research is a popular methodology used to gather data on a myriad of phenomena. Self-report victim surveys administered by the Federal government are used to substantially broaden our understanding of the nature and extent of crime. A potential source of nonsampling error, respondent fatigue is thought to manifest in contemporary victim surveys, as respondents become "test wise" after repeated exposure to survey instruments. Using a special longitudinal data file, the presence and influence of respondent fatigue in national self-report victim surveys are examined from three perspectives. Collectively, results provide a comprehensive look at how respondent fatigue may impact crime estimates produced by national self-report victim surveys.
Thesis:
Dissertation (Ph.D.)--University of South Florida, 2006.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Timothy C. Hart.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 195 pages.
General Note:
Includes vita.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001789589
oclc - 137840183
usfldc doi - E14-SFE0001456
usfldc handle - e14.1456
System ID:
SFS0025775:00001


Full Text

Respondent Fatigue in Self-Report Victim Surveys: Examining a Source of Nonsampling Error from Three Perspectives

by

Timothy C. Hart

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy
Department of Criminology
College of Arts and Sciences
University of South Florida

Major Professor: Thomas Mieczkowski, Ph.D.
Kim Lersch, Ph.D.
Wilson R. Palacios, Ph.D.
Callie Marie Rennison, Ph.D.

Date of Approval: March 24, 2006

Keywords: National Crime Victimization Survey, fatigue bias, nonresponse, survey research, research methods

Copyright 2006, Timothy C. Hart

Dedication

This dissertation is dedicated to my wife Jennifer, whose love and support is immeasurable. It is also dedicated to our newborn son Ellis. Only a few weeks old, he has already brought a lifetime of joy into our lives. Finally, I dedicate this manuscript to my stepfather Rex, who passed away shortly before it was completed.

Acknowledgements

This dissertation might never have been completed if it were not for certain individuals, whose aid and support must not go unrecognized. First, I must thank my major professor Dr. Thomas Mieczkowski for his sage advice and subtle guidance throughout my entire graduate experience at the University of South Florida. I must also thank Dr. Kim Lersch and Dr. Wilson R. Palacios for their insight and encouragement during the writing process. Third, I could not have completed this manuscript without the assistance of Dr. John Cochran. The depth and breadth of his statistical expertise proved to be an invaluable resource. Thank you. And finally, I could not have completed this dissertation without the support and guidance of my friend and colleague Dr. Callie Marie Rennison. I am truly indebted to her for the time and energy that she contributed to this endeavor.

Table of Contents

List of Tables
List of Figures
Abstract
Introduction
Literature Review
    Respondent Fatigue
        Response Bias
        Nonresponse Bias
    Understanding Crime and Victimization
        Defining Crime
        Information Associated with Criminal Events
        Crime as a Social Indicator
        Building Theories of Crime and Crime Causation
    Methodological Issues with Self-Report Victim Surveys
        Design and Analysis of Victimization Surveys
        Survey Mode
        Question Wording and Questionnaire Design
        Series Victimization
        Reference Periods
        Criteria for Assessing Validity of Victim Survey Data
        Sample Design, Coverage, and Nonresponse
        Respondent Fatigue in Victim Surveys
Data
    NCVS Longitudinal Data File
Perspective 1: Respondent Fatigue and Survey-Design Effects
    Objectives
    Measures
        Dependent Variable
        Independent Variables
        Control Variables
    Results
    Conclusions

Perspective 2: Modifying the Operational Measure of Respondent Fatigue
    Objectives
    Measures
        Dependent Variable
        Independent Variables
        Control Variables
    Results
    Conclusions
Perspective 3: Assessing Respondent Fatigue over Multiple Waves of Self-Report Victim Surveys
    Objectives
    Measures
        Dependent Variable
        Independent Variables
    Results
    Conclusions
Discussion
    Respondent Fatigue and Survey-Design Effects
    Modifying the Operational Measure of Respondent Fatigue
    Assessing Respondent Fatigue over Multiple Waves of Self-Report Victim Surveys
    Summary
References
Appendices
    Appendix A: NCVS-1 Basic Screen Questionnaire
    Appendix B: NCVS-2 Crime Incident Report
    Appendix C: NCVS-551 Rotation Chart
About the Author

List of Tables

Table 1. Descriptive statistics for the first perspective
Table 2. Partially specified survey-weighted logistic regression using survey-design effects to predict victimization
Table 3. Partially specified survey-weighted logistic regression using control variables to predict victimization
Table 4. Fully specified survey-weighted logistic regression predicting victimization
Table 5. Impact of survey-design effects after controlling for individual correlates to victimization
Table 6. Descriptive statistics for the second perspective
Table 7. Partially specified survey-weighted logistic regression models predicting nonresponse at TIS2
Table 8. Fully specified survey-weighted logistic regression predicting nonresponse at TIS2
Table 9. Survey-weighted logistic regression predicting nonresponse at TIS2 by survey mode
Table 10. Descriptive statistics for the third perspective
Table 11. Partially specified survey-weighted logistic regression using survey-design effects to predict nonresponse over multiple waves of interviews
Table 12. Partially specified survey-weighted logistic regression using social environment factors to predict nonresponse over multiple waves of interviews
Table 13. Partially specified survey-weighted logistic regression using household attributes to predict nonresponse over multiple waves of interviews

Table 14. Survey-weighted logistic regression predicting nonresponse over multiple waves of interviews

List of Figures

Figure 1. Three perspectives used to examine respondent fatigue
Figure 2. Key elements of the first perspective
Figure 3. Key elements of the second perspective
Figure 4. Key elements of the third perspective
Figure 5. Groves and Couper's (1998) conceptual framework for survey cooperation

Respondent Fatigue in Self-Report Victim Surveys: Examining a Source of Nonsampling Error from Three Perspectives

Timothy C. Hart

ABSTRACT

Survey research is a popular methodology used to gather data on a myriad of phenomena. Self-report victim surveys administered by the Federal government are used to substantially broaden our understanding of the nature and extent of crime. A potential source of nonsampling error, respondent fatigue is thought to manifest in contemporary victim surveys, as respondents become "test wise" after repeated exposure to survey instruments. Using a special longitudinal data file, the presence and influence of respondent fatigue in national self-report victim surveys are examined from three perspectives. Collectively, results provide a comprehensive look at how respondent fatigue may impact crime estimates produced by national self-report victim surveys.

Introduction

Survey research has been a popular methodology in the United States for more than six decades. Large national surveys advance and improve our understanding of employment and labor, political, agricultural, and economic issues. Federally sponsored surveys are also used to collect data on various aspects of the criminal justice system, including law enforcement (see Reaves & Hart, 2000; see also Reaves & Hickman, 2004), criminal victimization (see Catalano, 2004, 2005), state court processing (see Hart & Reaves, 1999; see also Rainville & Reaves, 2003; see also Reaves, 2001), and prison and jail inmates (see Harrison & Beck, 2005; see also Harrison & Karberg, 2004). Although surveys are a tool that can provide a wealth of information about a variety of topics, two sources of error can threaten the accuracy of estimates produced by this methodology: sampling error and nonsampling error.

Sampling error is one form of measurement error that can be produced during survey research. It occurs when a sample is drawn in a way that makes it systematically different from the population it is intended to represent. When this occurs, inferences derived from the sample and generalized to the population can be erroneous. Historically, one of the most recognized examples of sampling error occurred during the 1948 presidential election between Harry Truman and Thomas E. Dewey.

Pollsters interviewed a sample of voters that was not representative of the overall voting population and projected Dewey the victor. The Chicago Daily Tribune used the erroneous results and ran the famous headline "Dewey Defeats Truman," which it later retracted.
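
To make the distinction concrete, the sketch below simulates how a non-representative sample biases an estimate no matter how many respondents are interviewed. All numbers are hypothetical and chosen only for illustration:

```python
import random

random.seed(1948)

# Hypothetical electorate: 52% support candidate A, 48% candidate B.
population = [1] * 52_000 + [0] * 48_000  # 1 = supports A

# A simple random sample: every voter is equally likely to be selected.
srs = random.sample(population, k=2_000)

# A non-representative sample: B supporters are twice as likely to be
# reached, loosely analogous to the polling failure described above.
biased = random.choices(population,
                        weights=[1 if v == 1 else 2 for v in population],
                        k=2_000)

print(f"True support for A:     {sum(population) / len(population):.3f}")
print(f"Random sample estimate: {sum(srs) / len(srs):.3f}")       # close to 0.52
print(f"Biased sample estimate: {sum(biased) / len(biased):.3f}")  # well below 0.52
```

Larger biased samples only make the wrong answer more precise, which is why sampling error of this kind cannot be fixed after the fact by interviewing more people.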

Researchers must also guard against nonsampling error when they employ survey research. Nonsampling error represents all other forms of error not associated with drawing a sample. Some sources of nonsampling error include questionnaire design and question wording, as well as data coding, editing, entry, and processing. Another source of nonsampling error can be respondent fatigue, or the burden a respondent experiences during the survey process. Although the full impact of nonsampling error cannot be quantified, researchers can design and administer surveys in ways that minimize its effects. For example, identifying factors that influence respondent fatigue in national self-report victim surveys enables researchers to develop methodological approaches guarding against it. In doing so, our ability to derive more precise national crime estimates is improved.

The current study explores the effects of respondent fatigue associated with national self-report victim surveys. It examines this issue from three perspectives. The investigation begins by reassessing the "multiple exposure to stimuli" problem believed to be associated with the survey design of the National Crime Victimization Survey (NCVS) (Lehnen & Reiss, 1978a, 1978b). The work of Lehnen and Reiss is replicated to determine whether survey-design characteristics of contemporary self-report victim surveys produce respondent fatigue.

The second perspective extends the work of Lehnen and Reiss (1978a, 1978b) by modifying the operational measure of fatigue. Lehnen and Reiss used the decline in reported victimization as a measure of fatigue. In the second perspective, however, respondent fatigue is examined in terms of whether respondents who are exposed to longer interviews during their initial National Crime Victimization Survey interview are more likely to refuse to participate during their next interview.[1] This approach permits a more robust understanding of the factors that predict respondent fatigue, and provides the foundation for a more theoretically based approach to this important methodological issue.

The third perspective investigates respondent fatigue over multiple waves of victim surveys, incorporating the conceptual framework of household nonresponse theory developed by Groves and Couper (1998). This strategy provides additional insight into the respondent fatigue believed to be associated with the design of contemporary self-report victim surveys by combining the approaches of the previous two perspectives: the third facet of this research examines the "multiple exposure to stimuli" problem using nonresponse as the operational measure of fatigue, over multiple waves of victim surveys, while integrating an appropriate theoretical perspective.

Combined, these perspectives provide an in-depth look at the nature and extent of respondent fatigue associated with national self-report victim surveys. Results offer answers to questions about how respondent fatigue impacts national crime estimates produced by this methodology, and how survey administrators can minimize its effects. Each perspective is described below in greater detail; but before continuing, relevant literature is reviewed and discussed.

[1] Members of households selected to participate in the National Crime Victimization Survey (NCVS) are interviewed every 6 months for 3 years.
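
Because the second and third perspectives both treat nonresponse at a follow-up interview as the operational measure of fatigue, the analytic core of those chapters can be illustrated with a survey-weighted logistic regression. The sketch below uses simulated data: the variable names (tis1_incidents, nonresponse_tis2) and all coefficients are invented for illustration, and freq_weights is only a rough stand-in for the full design-based variance estimation a production analysis would require.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

# Simulated panel respondents: incidents reported at the first interview
# (TIS1) lengthen that interview, and the fatigue hypothesis predicts that
# longer initial interviews raise the odds of refusal at TIS2.
df = pd.DataFrame({
    "tis1_incidents": rng.poisson(0.4, n),    # incidents reported at TIS1
    "age": rng.integers(12, 90, n),           # NCVS covers persons age 12+
    "weight": rng.uniform(500.0, 3000.0, n),  # person-level survey weight
})
logit_p = -2.2 + 0.35 * df["tis1_incidents"] - 0.01 * (df["age"] - 40)
df["nonresponse_tis2"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["tis1_incidents", "age"]])
fit = sm.GLM(df["nonresponse_tis2"], X,
             family=sm.families.Binomial(),
             freq_weights=df["weight"]).fit()

# Odds ratios above 1.0 for tis1_incidents would be consistent with fatigue:
# each additional incident reported at TIS1 multiplies the odds of refusing
# the TIS2 interview.
print(np.exp(fit.params))
```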

Literature Review

Respondent fatigue

Respondent fatigue can manifest during surveys in two distinct ways. First, participants can grow tired during an interview, or boredom can overcome a respondent while completing a self-administered questionnaire. In either case, if answers given in response to questions systematically differ across respondents as a result of the burden experienced while participating, then respondent fatigue has manifested as response bias (see Weisberg, 2005). If a respondent chooses not to participate in a mail or telephone survey, declines an interview, or skips answers on a self-administered questionnaire because they grow tired of participating, then respondent fatigue has been exhibited in an entirely different form: nonresponse bias (see Groves & Couper, 1998; see also Groves, Dillman, Eltinge & Little, 2002). Unlike response bias, nonresponse bias is more commonly associated with longitudinal surveys. That is, when respondents are exposed to an interview during one wave of a longitudinal survey and refuse to participate in a subsequent wave(s), and the decision not to participate is systematic among nonrespondents, nonresponse bias is introduced. Regardless of how they manifest, both response bias and nonresponse bias create error in measurement, and considerable research has been undertaken to better understand possible sources of each. Studies examining both are discussed below in greater detail.

Response bias

Response bias is believed to manifest from a number of sources related to the task of participating in a survey. The method by which a survey is administered (i.e., the survey mode) is one example. Face-to-face interviews, telephone interviews, and mailed or in-person self-administered questionnaires are common survey modes used to collect data. Although research fails to demonstrate that one mode is superior to another, some important generalizations about survey mode as it relates to response bias can be made.

In terms of misinterpretation, omission, or lying, all survey delivery methods appear to work well in minimizing response effects, provided respondents are asked factual questions, questions that do not threaten the respondent, or questions that do not make the respondent feel there is a socially desirable answer (Dillman, 1978; Groves & Kahn, 1979; Groves & Mathiowetz, 1984; Hochstim, 1967; Jonsson, 1957; Sudman & Bradburn, 1974; Thornberry & Scott, 1973). Much research also suggests that survey modes which provide more anonymity are superior at minimizing response effects than those that provide less, when sensitive questions or questions associated with a higher degree of social desirability are asked (Catania, Gibson, Chitwood & Coates, 1990; Catania, Gibson, Marin, Coates & Greenblatt, 1990; Combs & Freedman, 1964; Henson, Roth & Cannell, 1974; Knudsen, Pope & Irish, 1967; Mooney, Poullack & Corsa, 1968; Turner, Lessler & Devore, 1992). Yet despite demonstrating the influence mode can have, research fails to consistently point to one survey delivery method as being better in all situations for reducing response effects.

Response bias is also suspected of being tied to question type (i.e., open-ended versus closed-ended questions) as well as question length and wording. As with survey mode, research is unable to consistently establish links between each of these task-related factors and response effects. For example, open-ended questions may produce substantively richer information than closed-ended questions because they can more accurately reflect "nuances of meaning that are lost by forcing a respondent into a fairly tightly controlled set of alternative answers" (Bradburn, 1983, p. 279). However, with the exception of when topic saliency is being measured or when questions are being pretested, research fails to demonstrate that one form of question is more likely to produce unwanted response effects than the other (Dohrenwend, 1965; Schuman & Presser, 1978; Sudman & Bradburn, 1974). On the other hand, research has done a somewhat more convincing job of establishing a connection between question length and wording and response bias. Recent studies demonstrate that variations in question wording affect respondents' answers on attitudinal surveys (Lockerbie & Borrelli, 1990; Rasinski, 1989; Turner, Lessler & Devore, 1992), suggesting that survey researchers should avoid lengthy questions or complicated wording if response effects are to be reduced.

Question order is another task-related source of response bias that receives considerable attention from researchers. Generally, the focus of question order-effect research falls in one of five areas. For example, past research demonstrates a strong link between question order and recall. Results show that attitudes expressed about topics where a respondent has low saliency or recall are influenced more by question order than topics where the respondent has high saliency (Hayes, 1964; Landon, 1971; Segall, 1959).

In addition, overlapping content within different sections of the same questionnaire can produce a redundancy effect: past research indicates that respondents' answers can be adversely affected if they feel they are being asked the same question repeatedly throughout the same survey (Bradburn, 1983; see also Weisberg, 2005). A consistency effect is another type of question-order effect associated with the task of taking a survey. Among the most frequently examined topics within question-order effect research, studies show that survey questions can produce variation in answers among respondents depending on where they are placed in relation to other questions (Ayidiya & McClendon, 1990; Benton & Daly, 1991; Hart, 1998; McFarland, 1981; Narayan & Krosnick, 1996; see also Schuman & Presser, 1996). Finally, the order in which survey questions are asked can also produce response bias that manifests as either a rapport or a fatigue effect. A rapport effect occurs when nervousness or hesitancy diminishes during the course of a survey due to increasing trust or comfort between the interviewer and respondent, whereas a fatigue effect manifests when respondents' answers are adversely affected by the burden produced by the task of participating in a survey (Bradburn, 1983; Lehnen & Reiss, 1978a, 1978b; Sudman & Bradburn, 1974; see also Weisberg, 2005). Again, both are tied to the order in which questions are asked and have been shown to be potential sources of response bias.

Each form of response bias discussed above is tied to the task of survey participation. While research is far from being able to provide a single protocol for administering surveys in a manner that eliminates response bias entirely, findings do provide some insight into important considerations that must be made when conducting surveys. In addition to survey task, past research demonstrates the importance of interviewers and the effects produced by interviewer-respondent interaction.

Interviewers are a likely source of response bias (Bailey, Moore & Bailar, 1978; Groves & Kahn, 1979; Hanson & Marks, 1958; Kish, 1962; Stock & Hochstim, 1951). Some of the earliest studies on interviewer effects demonstrate that interviewers' characteristics and behaviors can bias results (Hyman, 1954; Katz, 1942). Interviewer competence, prior expectations of survey results, race, age, gender, and their interaction with respondents are factors that have been shown to influence respondents' answers to survey questions (Athey, Coleman, Reitman & Tang, 1960; Campbell, 1981; Cotter, Cohen & Coulter, 1982; Davis, 1997; Dohrenwend, Colombotos & Dohrenwend, 1968-69; Finkel, Guterbock & Borg, 1991; Freeman & Butler, 1976; Hatchett & Schuman, 1975-1976; Schaffer, 1980; Schuman & Converse, 1971; Tucker, 1983; Williams, 1964). Things as seemingly innocuous as an interviewer's pace, volume, or choice of words during an interview can influence survey responses (Oksenberg, Coleman & Cannell, 1986). As with factors associated with survey task, understanding how interviewers and the interviewer-respondent interaction can create response bias is vitally important if surveys that minimize its effects are to be developed and administered.

Finally, response bias may also be a product of certain respondent characteristics or personality dispositions (i.e., a response set). Couch and Keniston (1960) identified one of the first such response sets during an investigation of a yea-saying bias in a study of authoritarian personalities. While later studies failed to demonstrate a similar pattern (Bradburn, Sudman, Blair & Stocking, 1978; Orne, 1969; Rorer, 1965), other respondent demographics such as age, gender, and marital status have been tied to socially desirable answers to certain survey questions (Crowne & Marlowe, 1964; Sudman & Bradburn, 1974; see also Weisberg, 2005).

These and similar findings not only demonstrate how certain respondent characteristics can influence survey responses, but more importantly, they emphasize the need for researchers to be cognizant of sources of response bias that are beyond their control.

To varying degrees, past research demonstrates how the survey task, interviewer characteristics, interviewer-respondent interaction, and respondent characteristics can influence survey responses (Bradburn, 1983; see also Weisberg, 2005). Yet despite numerous studies approaching the problem from different angles, no formal theory for understanding response bias has been produced by the scientific community. Thus, respondent fatigue simply remains one form of response bias on a laundry list of many other types. Researchers investigating nonresponse bias, however, have taken a much different approach. Unlike response-bias research, formal theoretical perspectives play an integral role in guiding research investigating why respondents choose to participate in surveys.

Nonresponse bias

Propositions at the core of nonresponse-bias research are derived from a formal theoretical perspective. Suggesting that survey nonresponse should be considered a form of social exchange, Don Dillman (1978) originally presented the theoretical foundations of survey nonresponse as a part of his Total Design Method (TDM) of mail and telephone surveys. Dillman's ideas serve as the cornerstone for understanding the nuances of survey participation. Recently, more refined perspectives on nonresponse have been offered (Groves & Couper, 1998; Dillman, 2000). These new ideas provide additional insight into what factors influence respondents' decisions to participate in surveys.

A discussion of the evolution of key ideas associated with survey-nonresponse research follows.

In 1978, Don Dillman developed a theoretically based methodology for conducting mail and telephone surveys: the Total Design Method (TDM). Consisting of two parts, the goal of the TDM is to maximize both the quality and the quantity of survey responses. In order to achieve this goal, according to Dillman, survey researchers must "identify each aspect of the survey process that may affect either the quality or quantity of response and to shape each of them in such a way that the best possible responses are obtained" (p. 12). Dillman argues that researchers must therefore organize the survey efforts "so that the design intentions are carried out in complete detail" (p. 12).

Dillman (1978) believes that these objectives can be achieved if survey response is viewed as a form of social exchange. Social exchange theory states that a behavior will occur if the perceived costs of the behavior are less than the perceived rewards (Blau, 1964; Goyder, 1987; Homans, 1961; Thibault & Kelly, 1959). According to Dillman and the TDM, therefore, three factors must be present in order to maximize survey response: costs must be minimized, rewards must be maximized, and trust between interviewer and respondent must be established.

The perceived cost of participating in a survey is difficult to gauge. Nevertheless, research shows that cost must be considered when administering a survey, due to its effect on response rates (Blumberg, Fuller & Hare, 1974; Carpenter, 1974-1975; Linsky, 1975; Tedin & Hofstetter, 1982). When costs are high, participation is low; but when costs are reduced, participation increases. According to Dillman (1978), several steps can be taken to minimize cost. First, the survey task must be brief.

Brief surveys cost respondents less time to complete. Surveys must also minimize mental and physical effort. Again, surveys that require extensive mental or physical effort to complete will result in higher rates of nonresponse, according to Dillman. Surveys must also eliminate any chance of the respondent feeling embarrassed or insubordinate; both are viewed as intangible costs. Finally, surveys must avoid direct monetary costs. Dillman argues that a mail survey accompanied by a postage-paid reply envelope, which spares respondents from spending their own money to return it, increases participation. In short, surveys that are brief, require little mental or physical effort, eliminate embarrassment or insubordination, and require no direct out-of-pocket expense for the respondent increase participation.

In addition to minimizing costs, Dillman (1978) argues that survey nonresponse is reduced if administrators provide rewards for completing surveys. Considerable research demonstrates a correlation between increased reward and higher response rates (Berk, Mathiowetz, Ward & White, 1987; Chromy & Horvitz, 1978; Church, 1993; Godwin, 1979; James & Bolstein, 1990, 1992; Mize, Fleece & Roos, 1984; Nederhof, 1993; Willimack, Schuman, Pennell & Lepkoski, 1995). Not all rewards need to be financial, however. For example, nonresponse can be minimized if interviewers show positive regard for respondents' participation or express appreciation for it. Interviewers can also convey a sense of reward if they show support for respondents' values. Dillman argues that both financial and nonfinancial rewards help reduce nonresponse. In short, interviewers and administrators who adopt a professional consulting approach produce higher response rates because these approaches increase a sense of reward on the part of respondents.

Both cost and reward are key components of the TDM. According to Dillman (1978), trust is another key component that is necessary in order to reduce survey nonresponse. Trust can be established in different ways during the administration of a survey. For example, tokens of appreciation can be offered in advance of a survey (Dillman, 1978). A cover letter from a local official asking for community participation in a community survey can yield positive results, due in part to the trust that such a letter can establish (see Groves & Couper, 1998; see also Groves et al., 2002). Also, the organization conducting a survey can be identified and its legitimacy conveyed before a survey is administered. The Census Bureau, for example, issues notification letters to respondents in samples surveyed for the Federal government. Letters arrive in envelopes embossed with the Census Bureau's logo and address, composed on official agency letterhead. These official notification letters are designed to instill trust via the legitimacy of the survey and to help minimize nonresponse (Dillman, 1978).

Dillman (1978) outlined how the quality and quantity of survey responses would increase if survey administrators adopted the TDM. Although some findings showed the TDM produced a modest effect on response rates, response quality, or both, little evidence pointed to the mechanisms by which these effects manifested (Butz, 1985; Couper & Groves, 1991; Dillman, Gallegos & Frey, 1976; Dillman, Singer, Clark & Treat, 1996; Groves, Cialdini & Couper, 1992; Singer, 1993; Singer, Hippler & Schwarz, 1992; Singer, Mathiowetz & Couper, 1993; Singer, Von Thurn & Miller, 1995). As a result, modifications to some of the original ideas presented in the TDM were developed.

More recently, nonresponse research focuses on two areas of particular interest: controllable influences on survey nonresponse and uncontrollable influences.

Building from ideas originally proposed by Dillman (1978) and the TDM, Groves and Couper (1998) incorporate several factors that researchers are unable to control, as well as those that they can control, in their theory of nonresponse in household interview surveys. They argue that economic conditions, the survey-taking climate, and neighborhood characteristics are direct causal influences on survey nonresponse. As indirect measures of social environmental influences on survey nonresponse, Groves and Couper argue that researchers cannot control these influential predictors of survey participation. Household(er) factors, such as household structure, socio-demographic characteristics, and the psychological predisposition of the householder, are also beyond the control of survey researchers, according to Groves and Couper. Yet despite being uncontrollable, as with social environmental factors, they play a key role in a respondent's decision to participate in a survey.

Groves and Couper (1998) argue that there are other factors that influence participation in household surveys, and that the researcher can control these factors. For example, Groves and Couper provide evidence that survey design features, including topic, mode, and respondent selection, can affect respondents' decisions to participate in surveys. Moreover, they argue that interviewer-related factors must be considered, since they also affect nonresponse. These factors include socio-demographic characteristics, interviewer experience, and interviewer expectations. Finally, Groves and Couper stress the importance of the interaction that takes place between householder and interviewer and its role in producing nonresponse. According to Groves and Couper, the mechanisms that influence survey participation include both factors that can be controlled by the researcher and those beyond their control.

With their theory of nonresponse in household interview surveys, Groves and Couper (1998) advanced our understanding of the complex process of survey participation beyond the TDM. Moreover, recent tests of components of their theoretical model[2] have helped identify important distinctions between nonresponse and noncontact, item nonresponse and unit nonresponse,[3] and effects of nonresponse across diverse types of surveys, including cross-national programs (see Groves, et al., 2001). Collectively, this research furthers our overall understanding of nonresponse bias. In doing so, researchers are in a position to improve the survey research methodology in ways that reduce the effect of this form of nonsampling error.

Improving survey research has broad implications. For example, as noted above, the Federal government relies on self-report victim surveys to assess the nature and extent of crime in the United States. Findings from some of the earliest investigations into respondent fatigue suggested that it was a possible source of nonsampling error in the National Crime Survey (Biderman, 1967; Biderman, Johnson, McIntyre & Weir, 1967). Despite the threat respondent fatigue poses to estimation, however, little empirical attention is directed to this methodological issue and its effect on contemporary victimization estimates produced by national surveys. The remainder of this chapter provides an in-depth look at crime and criminal victimization, methodological issues associated with measuring crime, and the problems that respondent fatigue may pose when crime is measured by self-report victim surveys. A closer look at these issues, when combined with the information provided above, provides the foundation for an in-depth examination of respondent fatigue associated with self-report victim surveys.

[2] A conceptual diagram of Groves and Couper's theoretical model is provided in Chapter Six.

[3] Item nonresponse occurs when a respondent does not respond to particular items within a survey. Unit nonresponse occurs when a respondent does not respond to any question on a survey.
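
The distinction drawn in footnote 3 between item and unit nonresponse is straightforward to operationalize. A minimal sketch with made-up answers, where NaN marks an unanswered question:

```python
import numpy as np
import pandas as pd

# Four hypothetical respondents and three survey items; NaN = no answer.
answers = pd.DataFrame({
    "q1": [5.0, np.nan, np.nan, 3.0],
    "q2": [2.0, 4.0, np.nan, np.nan],
    "q3": [1.0, 3.0, np.nan, 2.0],
})

unit_nonresponse = answers.isna().all(axis=1)                      # answered nothing
item_nonresponse = answers.isna().any(axis=1) & ~unit_nonresponse  # skipped some items

print(answers.assign(unit=unit_nonresponse, item=item_nonresponse))
# Respondent 2 is a unit nonresponse; respondents 1 and 3 show item nonresponse.
```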

Understanding crime and criminal victimization

Defining crime

Since 1929, the Uniform Crime Reporting (UCR) program has provided official crime statistics (Federal Bureau of Investigation, 2004). Violations of criminal code brought to the attention of law enforcement officials are summarized in a classification system that standardizes offenses for reporting purposes. Law enforcement agencies then voluntarily submit these reports to the Federal Bureau of Investigation (FBI). Part I Index[4] offenses contained within annual UCR reports include homicide, rape, robbery, aggravated assault, burglary, larceny, and auto theft. Prior to victim surveys, crime was defined only in terms of official statistics like those generated from the UCR.

Over time, it became apparent that official statistics were incomplete. Most obviously, unreported crimes were not represented in official statistics. Therefore, quantifying the amount of crime not captured by UCR summary reports was a key aim of President Johnson's Crime Commission (Biderman & Reiss, 1967; see also President's Commission on Law Enforcement and Administration of Justice, 1967).

[4] As of June 2004, the FBI discontinued the use of the Crime Index in the UCR program and its publications. The FBI (2004) notes, "The Crime Index was driven upward by the offense with the highest number, in this case larceny-theft, creating a bias against a jurisdiction with a high number of larceny-thefts, but a low number of other serious crimes such as murder and forcible rape" (p. 5). They go on to conclude that "the Crime Index no longer serves its original purpose, that the UCR Program should suspend its use, and that a more robust index should be developed" (FBI, p. 5, 2004).

The Commission suggested using a large-scale national survey to examine crime from a victim's perspective to broaden our overall understanding of the nature, extent, and consequences of crime.

Obtaining information directly from crime victims rather than official statistics offered a new perspective on crime. Using this approach, crime is defined in terms of criminal victimization, which conceptually rests on three underlying characteristics (see Skogan, 1981). First, criminal victimization is defined as a discrete rather than a continuous event that is bound by space and time. That is, victimization is an event that involves a victim(s) and an offender(s). The event has a beginning and an end, between which some criminal activity occurs. Moreover, the event occurs not only within a specific time frame but also in a specific location. Defining victimization this way permits the counting of individual criminal events such as robbery, larceny, or assault that occur during the day or at night, at home or at school, and between relatives or strangers. This definition excludes events that are ongoing or continuous. For example, spousal abuse, bullying, or insider trading are considered criminal events, but because they are ongoing and enduring they are difficult to count. For this reason, events that span hours, days, weeks, or even months are excluded from the definition of victimization.

The second defining characteristic of crime as measured by victim surveys is that events are knowable only as distinct individual incidents.

Focusing on incidents permits the creation of victimization rates, or the amount of crime experienced by individuals given a standardized factor (e.g., per 1,000 persons age 12 or older), as a measure of crime. An alternative approach is to define victimization in terms of victims. Analyzing victims rather than incidents permits the creation of proportions of individuals or households victimized as a way to assess criminal activity.

While both approaches are worthwhile methods for assessing crime, using incidents and not individuals as the unit of analysis is an important distinction that is at the heart of the conceptual definition of victimization as measured by surveys.

The final defining characteristic of victimization is that it can be understood independently from the social context in which it occurs. That is, we can identify victimization regardless of the social meaning ascribed to an activity by those directly involved. While identifying criminal incidents may seem straightforward for a crime like robbery, the criminality of an incident between friends or family (e.g., intimate partner violence) is less clear. The ability to understand victimization independently from its social context allows events to be placed into standardized crime categories regardless of the way events are perceived by those affected by them. Thus, in addition to being a discrete incident bound by space and time, victimization is defined as being understandable independent of its social context. Combined, these characteristics provide the conceptual framework for the definition of crime as measured by surveys.

Information associated with criminal events

Data from victim surveys expanded our overall understanding of crime beyond that which could be gleaned from official statistics. Based on victims' perspectives, crime identified by self-report surveys takes on a different definition than crime captured in official data, and provides additional information associated with criminal events.

Most notably, crime identified by victim surveys includes both crimes that are reported to the police and those that are not,[5] the latter commonly referred to as the "dark figure" of crime (Biderman, 1967; see also Biderman & Reiss, 1967). In addition to defining crime differently, victim surveys are able to provide more detailed information on criminal incidents than official data. For example, based on the conceptual definition described above, victim surveys offer more robust victim-, offender-, and event-specific information than the summary information offered by the UCR.

Despite what may be viewed as apparent inconsistencies between official data and crime measured by victim surveys, results from the two crime measures are strikingly consistent when programmatic differences are taken into account (Booth, Johnson & Choldin, 1977; Chilton & Jarvis, 1999; Maltz, 1999; see also U.S. Department of Justice, 2003b). When viewed in conjunction with official data, victimization estimates provide a more comprehensive understanding of crime. While the original objective of self-report victim surveys was to serve primarily as a calibrator or supplementary yardstick for UCR data (National Research Council, 1976), the realization of victim surveys as a robust measure of crime surpassed this original goal.

Crime as a social indicator

In the 1830s, Andre-Michel Guerry's essay on the moral statistics of France offered insight into the use of crime data as a social indicator of the overall welfare of a nation (see Guerry, Whitt & Reinking, 2002). Others followed, but most defined crime in

[5] Victimization measured by the National Crime Victimization Survey (NCVS) includes threatened, attempted, and completed violent crimes (i.e., rape, sexual assault, robbery, and simple and aggravated assault), property crimes (i.e., burglary, motor vehicle theft, and other property crime), and personal-property theft (i.e., pocket pickings and purse snatchings). Crimes reported to law enforcement and identified via the UCR program include homicide, forcible rape, robbery, aggravated assault, burglary, larceny-theft, and motor vehicle theft.

a way that was rooted in an institutional approach focused on a legitimate, organized social response to behavior that violated legal norms (see Biderman & Reiss, 1967). Until data from victim surveys were available, crime as a social indicator was almost entirely based on official statistics.

Victim surveys offer many advantages over official statistics. Though about half of all crime is not reported to the police (Hart & Rennison, 2003), victim-survey data include information on crimes that are reported as well as not reported to the police. Moreover, victim-survey data contain detailed information on victim, offender, and event characteristics of incidents. For these reasons, victimization estimates of persons and households can be used as a social indicator, often in conjunction with official statistics, to gauge a broader understanding of the overall health of the nation. On a general level, victimization estimates provide information on the annual levels and characteristics of crime as well as changes in levels of crime over longer periods of time (Biderman & Lynch, 1991; Blumstein, 2000; Blumstein & Wallman, 2000; Catalano, 2004, 2005; Klaus, 2002; LaFree & Drass, 1993; Lynch, 2001; Paez & Dodge, 1982; Rand, Lynch & Cantor, 1997; Reiss, 1977a; Rennison, 2001a; Rennison & Rand, 2003a; U.S. Department of Justice, 1994). Given the robust nature of victim-survey data, however, more specific applications of its use as a social indicator of well-being have been realized.

Victim-survey data also permit the use of crime as a social indicator in a more refined manner, and often in ways that official statistics cannot be used.

For example, the impact of legislative efforts aimed at decreasing domestic violence has been assessed using victimization estimates (Dugan, Nagin & Rosenfeld, 1999, 2003; Greenfeld, Rand, Craven, Klaus, Perkins, Ringel, et al., 1998; Rand & Rennison, 2004; Rennison, 2003; Rennison & Planty, 2003; Rennison & Rand, 2003b; Rennison & Welchans, 2000). Keeping the nation's schools safe is another legislative priority, and victimization estimates are used to gauge levels of violence experienced among school children and those attending colleges and universities (Bastian & Taylor, 1991; DeVoe, Peter, Kaufman, Ruddy, Miller, Planty, et al., 2003; Finkelhor, Asdigian & Dziuba-Leatherman, 1995; Fisher, Cullen & Turner, 2000; Hart, 2003). Furthermore, victimization data have been used to assess the level of risk for certain types of crime not included in official statistics, such as violence in the workplace (Bachman, 1994; Duhart, 2001; Warchol, 1998), crimes involving firearms (Perkins, 2003), cybercrime (Rantala, 2004), and violence against women and the elderly (Craven, 1996, 1997; Klaus, 1999; Klaus & Rennison, 2002; Rennison & Rand, 2003b).

The availability of disaggregated victim-survey data containing comprehensive information on crime incidents, victims, offenders, and the context of incidents eliminates complete reliance on official data as a social indicator. Victim-survey data offer more than just a new way to assess social welfare, however. Their availability also affords researchers the opportunity to explore new ideas related to criminological theory.

Building theories of crime and crime causation

Crime is a relatively infrequent event, and in order to study it using self-report victim surveys, large samples of the population must be obtained. Self-report victim surveys collect information from both victims and non-victims.

From crime victims, data provide in-depth insight into the victim, offender, and event characteristics of criminal incidents. Based on these characteristics, data from self-report victim surveys produce a rich vein of information that researchers mine to build theories of crime and crime causation.

The nature of emerging national-level victim-survey data in the late 1970s allowed researchers to develop two general theoretical strategies to better understand crime and crime causation: approaches that focused on victims and those that focused on offenders (Cantor & Lynch, 2000). Victim-oriented approaches used survey data to develop general ideas of personal victimization (Hindelang, Gottfredson & Garofalo, 1978) as well as specific correlates to crime (Cohen & Felson, 1979). Regardless of differences within the victim-oriented strategy, efforts to understand crime and crime causation that developed from this approach shared a common theme: a focus on the occurrence of crime experienced by victims. Other theories of crime and crime causation used victim-survey data to refine ideas concerning criminal offenders, since victim-survey respondents are asked to provide detailed offender-related information for crimes that involved victim-offender contact. Macro-level theoretical approaches that focused on offenders were difficult to entertain prior to the availability of national-level victim-survey data, given the absence of offender-based information in official statistics like the UCR.

More specific examples of the use of victim-survey data in the development of criminological theory exist.

The emergence of victimization data provided researchers with insight into the relationships between social contextual, ecological, and structural correlates and victimization (Baumer, Horney, Felson & Lauritsen, 2003; Decker, 1980; Lauritsen, 2001, 2003; Lauritsen & White, 2001). Opportunity theory and lifestyle factors associated with victimization have also been assessed using crime-victim data (Cohen & Cantor, 1981; Lynch & Cantor, 1992; Sampson & Lauritsen, 1990), as have theories that address the relationships between offending and the life course (Laub & Lauritsen, 1993).

In general and specific ways, the availability of victimization data offered an entirely new perspective on crime for those developing or testing theory. Cantor and Lynch (2000) note that criminological theories "such as routine activities theory, opportunity theory, and even rational choice theories of crime flourished in large part because of the availability of victim survey data" (p. 90). As the availability and application of information generated from victim surveys increased, so did awareness and understanding of the surveys' strengths and weaknesses.

Methodological issues associated with self-report victim surveys

Design and analysis of victimization surveys

In the early 1970s, the Law Enforcement Assistance Administration (LEAA)[6] sponsored the National Crime Survey (NCS). The goal of the NCS was "to measure the levels of criminal victimization of persons and households for the crimes of rape, robbery, assault, burglary, motor vehicle theft, and larceny" (Lehnen & Skogan, 1984, p. v). In preparation for a national survey aimed at measuring crime from the victim's perspective, methodological challenges were identified, evaluated, and documented.

[6] LEAA became the Bureau of Justice Statistics (BJS) in December 1979.

Over time, the design and analysis of victimization surveys, criteria for assessing the validity of victim-survey data, and matters related to sample design, coverage, and nonresponse were recognized as issues that could significantly impact self-report victim survey estimation.

Design features of national-level self-report victim surveys can affect survey results (Cantor & Lynch, 2000, 2005). The National Crime Victimization Survey (NCVS), for example, is drawn from a stratified, multistage cluster sample employing a rotating panel design, and is comprised of eligible household members age 12 or older residing in the home at the time of the survey (Catalano, 2004, 2005; see also Rennison & Rand, 2003a). Survey mode, question wording and questionnaire design associated with screening procedures, and the use and length of reference periods represent some of the critical design features shown to impact estimates produced by the victim-survey methodology.
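
The rotating panel design just described implies a fixed interview schedule for each sampled household. The sketch below, assuming a hypothetical household entering the sample in January 2000, generates the time-in-sample (TIS) schedule of one interview every 6 months for 3 years (7 waves, per the enumeration footnote below); the bounded flag anticipates the bounding discussion later in this chapter, since only the first interview lacks a prior interview to check against:

```python
# Rotating panel schedule for one hypothetical household entering the
# sample in January 2000: interviewed every 6 months for 3 years, i.e.,
# 7 time-in-sample (TIS) waves under the NCS design described above.
ENTRY_YEAR, ENTRY_MONTH = 2000, 1

for tis in range(1, 8):
    months_in = (tis - 1) * 6                      # months since entry
    year = ENTRY_YEAR + (ENTRY_MONTH - 1 + months_in) // 12
    month = (ENTRY_MONTH - 1 + months_in) % 12 + 1
    bounded = tis > 1                              # TIS1 has no prior interview
    print(f"TIS{tis}: {year}-{month:02d}  bounded={bounded}")
```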

Survey mode

Survey mode, or the means by which a survey is administered, can significantly affect conclusions drawn from victim-survey results (Groves, 1977; Groves & Couper, 1992, 1993; Woltman & Bushery, 1977b). Mail, telephone, and face-to-face surveys were the three modes that developers initially regarded as most promising for administering victim surveys at the national level. Further review suggested that mail surveys were a less effective option, and they were soon abandoned (Dodge & Turner, 1971). Initial testing of self-report victim-survey results failed, however, to indicate that persons interviewed by telephone were any more or less likely to refuse to participate than those who were interviewed face-to-face (Turner, 1977). As a result, in-person and telephone survey modes were adopted for use in the NCS.

Research into the effects of different survey modes continued following the fielding of the NCS. Studies conducted after panels began completing all NCS enumerations[7] showed that victim surveys conducted entirely in person produced higher reports of household victimization by persons other than household respondents;[8] yet interviews conducted in person did not affect overall personal victimization estimates for any given crime type (Woltman & Bushery, 1977b). Conversely, telephone interviews were not as effective as in-person interviews in identifying less serious crimes like petty larceny. As a result, it was concluded that conducting interviews over the telephone for each interview wave risked underestimating overall victimization rates, since petty larcenies made up a considerable portion of the overall number of victimizations.

Despite these findings, computer-assisted telephone interviews (CATI) were introduced to the NCVS as a part of the survey redesign[9] completed in 1992 (Hubble & Wilder, 1988; Kindermann, Lynch & Cantor, 1997; Persely, 1996; Taylor, 1989; U.S. Department of Justice, 1989, 1994). While notable effects on victimization estimates corresponded to the adoption of the CATI mode, most were generally attributed to modifications made to the question wording and questionnaire design of incident-screening questions.

[7] NCS sampled households were interviewed 7 times, once every 6 months, for 3 years.

[8] A household respondent is a sampled-unit respondent who provides information about the entire household.

[9] As a part of the redesign, the National Crime Survey was renamed the National Crime Victimization Survey.

In sum, results of early methodological studies of self-report victim surveys demonstrate that the survey delivery method can impact both participation and reported victimization.

Question wording and questionnaire design

Improper question wording and questionnaire design related to the screening questions used to identify criminal incidents can also threaten the validity of national self-report victim survey results. For this reason, these issues received considerable attention during NCS pretests. Initial results demonstrated that specific screening questions were more effective at eliciting crimes than were general questions (Dodge, 1970, 1977b); changing the order of screening questions reduced the chances of duplicating incident reports (Murphy & Dodge, 1970); subtle changes in question wording helped differentiate rape from aggravated assault and attempted rape (Turner, 1972); and quality control was improved when screening questions and incident questions were administered separately (Kalish, 1974).

The redesign of the NCS not only addressed survey-design features related to mode and question wording, but also substantially modified screening questions based on prior research. For example, cue questions used on the Basic Screen Questionnaire (NCVS-1)[10] instrument were expanded to improve respondent recall (see Biderman & Cantor, 1984; Biderman, Cantor & Reiss, 1982, 1984; Biderman & Lynch, 1981; Bushery, 1981; see also Groves & Couper, 1992, 1993). Moreover, refined descriptions of crime incidents were included, and specific questions about rape and sexual assault were added.

[10] See Appendix A for a copy of the Basic Screen Questionnaire (NCVS-1).

The impact of question wording in victim surveys was quantified when post-redesign results revealed that about twice the number of rapes were reported after changes were made to the survey (Bachman & Saltzman, 1995; see Bachman & Taylor, 1994). Due in large part to the survey's redesign, the dramatic rise in the number of rapes identified increased awareness of and concern for a unique type of victimization captured in self-report victim surveys.

Series victimization

Victim surveys face the unique challenge of dealing with series victimization. As noted above, one aspect of the conceptual definition of crime as measured by victim surveys is that it is a discrete event bound by space and time. Yet some criminal events identified in victim surveys are ongoing in nature. These incidents are classified as series victimizations. Because they are not consistent with the conceptual definition of crime, the question then becomes: how should they be used, if at all, in the creation of aggregate victimization estimates?

PAGE 37

Including an event that occurred on a date outside a survey reference period is considered forward telescoping, whereas excluding an event that took place during a survey reference period, by reporting that it took place outside the specified time frame, is called backward telescoping (see Biderman & Cantor, 1984; see also Murphy & Cowan, 1976). Like the issues described above, the effect of recall bias received considerable attention during NCS pretests. Initial tests revealed that forward telescoping occurred slightly more often when a 12-month as opposed to a 6-month reference period was used (Dodge, 1970; Turner, 1972), and that the accuracy of recall varied across crime type (Murphy & Dodge, 1970). In later studies, the impact of recall bias, associated with a rotating panel design and introduced by telescoping, was linked to unbounded interviews14 and to certain characteristics of criminal incidents (Balvanz, 1979; Gottfredson & Hindelang, 1977; Turner, 1976b; Woltman & Cadek, 1977).

Contemporarily, effects of reference-period length on victimization estimates are made clearer upon examination of three distinct victim surveys: the NCVS, the British Crime Survey (BCS), and the National Violence Against Women Survey (NVAWS). Despite the added costs, the NCVS uses a rotating panel design with a 6-month reference period, whereas the BCS and the NVAWS use a 12-month reference period. Despite their shared goal (i.e., assessing victimization), results across each of these victim surveys are substantially different. Researchers attribute much of the variation in levels of reported victimization identified across each of these surveys to the length of the reference period used (see Cantor & Lynch, 2000; Fisher, Cullen & Turner, 2000; Rand & Rennison, 2002, 2004, 2005).

14 Bounding interviews is a quality assurance process used to minimize the effects of telescoping. Each incident reported during an interview is checked against incidents reported for the same respondent during the previous interviews. For more on bounding see Murphy & Cowan, 1976 and Addington, 2005.
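The bounding logic described in footnote 14 can be made concrete. The sketch below is a minimal illustration, not the Census Bureau's actual matching procedure; the field names and the one-month matching window are hypothetical.

```python
from datetime import date

def is_probable_duplicate(current_incident, prior_incidents, window_days=31):
    """Flag a newly reported incident that may be a telescoped repeat of an
    incident already recorded at the respondent's previous interview."""
    for prior in prior_incidents:
        same_type = current_incident["crime_type"] == prior["crime_type"]
        days_apart = abs((current_incident["date"] - prior["date"]).days)
        if same_type and days_apart <= window_days:
            return True
    return False

# The prior wave's reports serve as the "bound" against which the new
# wave's reports are checked.
prior = [{"crime_type": "burglary", "date": date(2005, 1, 10)}]
new_report = {"crime_type": "burglary", "date": date(2005, 1, 20)}
print(is_probable_duplicate(new_report, prior))  # True -> probe further
```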
In addition to studies of survey-design features discussed above, investigations into the impact of proxy interviews and small supplements to victim surveys have also been conducted (Cowan, Murphy & Wiener, 1979; Turner, 1976a). While results do not indicate that these features significantly affect survey results, the research demonstrates a need to learn more about what aspects of victim surveys can affect estimates. Indeed, efforts to better understand victim-survey methodology are evident well before (and continued long after) the fielding of the initial self-report victim survey via the NCS.

Criteria for assessing the validity of victim-survey data

Carmines and Zeller (1979) define validity as "the extent to which any measuring instrument measures what it is intended to measure" (p. 17). A series of survey-design pretests conducted in Washington, DC; Baltimore, Maryland; San Jose, California; and Dayton, Ohio provide some of the earliest insight into the validity of victim surveys (see Dodge, 1970; Kalish, 1974; Murphy & Dodge, 1970; Turner, 1972). Initial victim-survey pretests employed a reverse-records check technique to assess the ability of this new methodology to measure crimes known to police. In each of the studies, victims identified in official law-enforcement records were engaged in victim-survey interviews. Results of interviews were compared to information contained within police reports for each respondent. Initial findings indicated that victim surveys provided an overall valid measure of crime.
While flaws in the reverse-records check technique used to assess the validity of victim surveys have since been demonstrated (Biderman & Lynch, 1981), the ability of victim surveys to validly measure crime is generally acknowledged (Thornberry & Krohn, 2003).

Despite the general acceptance of victim surveys as a valid measure of crime, controversies over the criteria for assessing the validity of victim-survey data persist. Qualitative analysis of the classification of crimes identified in victim surveys, as well as other methods aimed at assessing the content validity of victim surveys, have been recommended (see Cantor & Lynch, 2000). While these ideas have generated relatively little reaction from the research community, issues related to sample design, coverage, and nonresponse associated with self-report victim surveys are often at the forefront of researchers' concerns, especially among those who attempt to use victim-survey data like those produced by the NCVS. Cantor and Lynch suggest, however, that a renewed interest in assessing the validity of victim-survey data will emerge if national crime estimates produced by surveys begin to substantially diverge from those produced by official records.

Sample design, coverage, and nonresponse

Sample design and selection are vital components of survey research. The impact of sample design, coverage, and nonresponse on victim surveys is widely documented and has changed over time (Biderman, 1970; Bushery, 1981; Dodge & Turner, 1971; Reiss, 1982; Taylor, 1989; Taylor & Rand, 1995; Tourangeau & McNeeley, 2003; U.S. Department of Justice, 1989, 1994; Woltman & Bushery, 1977a). Other methodological issues like coverage and nonresponse are closely tied to sample design and present challenges to self-report victim surveys.
For example, the use of victim surveys has become a common part of American culture. They also have a growing international appeal.15 Yet, while the trend in survey use is increasing, so is the public's unwillingness to cooperate and participate in surveys (de Leeuw & de Heer, 2002). Arguably, respondents' decreasing willingness to participate in surveys makes it more difficult to derive accurate estimates of a population from sample statistics. While the NCVS benefits from response rates that consistently hover near 90%, nonresponse can nevertheless present a challenge to victim surveys and their ability to produce valid and reliable estimates, especially if nonresponse manifests in systematically different ways among certain subgroups. Examples of controversies associated with victim surveys due to sample design, coverage, and nonresponse become more apparent when the analytic challenges facing those who use victim-survey data are examined.

15 Between 1989 and 2000, over 70 different countries participated in the United Nations Office of Drugs and Crime's International Crime Victim Surveys (ICVS).

Crime in the U.S. is not equally distributed across the population. Minorities, for example, experience a disproportionately large amount of victimization compared to the overall population (Bastian & Taylor, 1994; Greenfeld & Smith, 1999; Hindelang, 1978; Rennison, 2001b, 2002). Creating a problem for researchers using victim-survey data is the fact that those at higher risk of victimization are often not sufficiently represented in victim-survey samples (i.e., young, black males) or are excluded from samples altogether (i.e., the homeless). Crime is also disproportionately concentrated spatially (Duhart, 2000; Gibbs, 1979).
In general, the distribution of crime within cities differs to a greater extent than the distribution of crime across cities. Thus, relatively few individuals are exposed to relatively high levels of risk, most notably from crimes such as rape, robbery, and assault. As a result, individuals exposed to these high-risk areas can represent certain crime types in victimization estimates disproportionately, depending on sample design and selection procedures. Those attempting to use victim-survey data like those produced from the NCVS must address the problem of crime distribution.

Another analytic challenge to using victim-survey data is the problem of large standard errors associated with sub-classes of victimization. As the National Research Council (2003) recently noted, analyzing crime data at levels of aggregation such as counties or census tracts is necessary for many researchers seeking answers to policy questions. Yet, the infrequency with which crime occurs, combined with the current sampling design, prevents data gleaned from the NCVS from yielding reliable estimates at sub-national levels. A similar problem is presented when analysis of sub-groups of the population or sub-crime-type analysis is desired. Recent figures from the NCVS reveal that estimates of rape or sexual assault experienced by males are based on 10 or fewer cases16 for every category of victim-offender relationship identified in the survey (Catalano, 2005). A reduction in sample size produces a corresponding increase in standard error. Thus, apparent differences in victimization rates across sub-national, -population, or -crime-type categories can actually be due to inherent variability rather than true differences in victimization rates.

16 Estimates displayed in NCVS reports based on 10 or fewer unweighted sample cases are identified as unreliable.
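The link between shrinking sample sizes and growing standard errors can be illustrated with the textbook formula for the standard error of a proportion under simple random sampling, sqrt(p(1 - p)/n). The rate and sample sizes below are illustrative rather than NCVS figures, and the NCVS's complex design would inflate these values further.

```python
import math

def se_of_proportion(p, n):
    """Standard error of a sample proportion under simple random sampling."""
    return math.sqrt(p * (1 - p) / n)

p = 0.05  # an illustrative victimization rate of 5%
for n in (10_000, 1_000, 100, 10):
    print(f"n = {n:>6}: SE = {se_of_proportion(p, n):.4f}")
# The SE grows about tenfold as n falls from 10,000 to 100, which is why
# estimates based on 10 or fewer cases are flagged as unreliable.
```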
The analytic challenges noted above illustrate controversies related to sample design, coverage, and nonresponse associated with self-report victim surveys. While progress has been made in understanding an array of methodological problems associated with this methodology, some questions remain unanswered. Research examining the challenges victim surveys face must therefore continue if solutions that address these weaknesses are to be realized. One area in which investigation is overdue is respondent fatigue. The following section examines this particular methodological issue related to self-report victim surveys in greater detail.

Respondent fatigue in victim surveys

Past examinations of the self-report victim survey methodology exposed problems commonly associated with longitudinal surveys. For example, nonsampling error caused by nonresponse, panel attrition, telescoping, and the use of proxy interviews are issues worthy of attention in the NCS/NCVS (Biderman & Cantor, 1984; Bushery, 1978; Lehnen & Reiss, 1978a, 1978b; National Research Council, 1976; Sliwa, 1977; Taylor, 1989; Woltman, 1975; Woltman & Bushery, 1977a, 1977b; Ybarra & Lohr, 2000, 2002). In part because of these issues, the survey underwent a massive redesign that resulted in substantial methodological changes when implemented in 1992. For example, cue questions used on the Basic Screen Questionnaire (NCVS-1) were changed to improve respondent recall, more descriptions of crime incidents were included, computer-assisted telephone interviewing was introduced, and specific questions about rape and sexual assault were added. Given these improvements to the survey, it is surprising that findings from some very early methodological investigations of the self-report victim survey methodology continue to be accepted as part-and-parcel of contemporary victim surveys.
One example of this conventional wisdom is that multiple interviews generate fatigue and cause a decreased level of reporting victimization in response to certain survey items (Thornberry & Krohn, 2003). One very early publication suggested that a possible source of nonsampling error in the NCS is respondent fatigue, also known as fatigue bias (Biderman, 1967; Biderman et al., 1967). Biderman et al. first identified motivational fatigue during NCS pretests by comparing rival techniques of survey administration (see Skogan, 1981). The first technique allowed a respondent to become "test wise" to the survey instrument. The survey was administered in a way that permitted a respondent to link a positive response (i.e., reporting being victimized) with a lengthy respondent task (i.e., being asked more detailed questions about a victimization). The second method of survey administration circumvented this situation by asking all detailed victimization questions following all general incident-screening questions. Biderman et al. found that the second interviewing procedure (i.e., the non-test-wise version) produced two times the number of reported victimizations as the test-wise version. These findings supported the idea that fatigue bias contributed to nonsampling error in the NCS. While the conclusions are important, they are based on a cross-sectional survey of only 183 respondents.

Biderman et al. (1967) noted that the issue of respondent fatigue deserved more attention. In the 1970s, claims that respondents could become test wise were supported by research that assessed the relationship between respondent fatigue and specific design features associated with the NCS (Lehnen & Reiss, 1978a, 1978b). Lehnen and Reiss argued that the multiple-exposure-to-stimuli problem in the NCS, due to repeated exposure to the same questionnaire, substantially decreases the number of reported victimizations by respondents.
Indeed, Lehnen and Reiss (1978b) concluded that a principal source of response error in the NCS was due to respondents' repeated exposure to the survey. They suggested that an NCS respondent has several opportunities "to learn what is desired and become sensitized to the objective of the survey" (Lehnen & Reiss, 1978a, p. 112).

The importance of the work of Lehnen and Reiss (1978a, 1978b) is clear. However, nearly three decades have passed and replications of their work have not been conducted. Given the significant changes in the NCVS methodology implemented during this time, much remains unknown about the nature and extent of respondent fatigue in self-report victim surveys. In short, the level of respondent fatigue in contemporary victim surveys and its subsequent threat to estimation is unclear. Therefore, this dissertation investigates the methodological issue of respondent fatigue believed to be associated with contemporary national self-report victim surveys, and examines the issue from three perspectives (Figure 1). The first examines respondent fatigue and survey-design effects. The second examines respondent fatigue by modifying the operational measure of fatigue, while the third assesses respondent fatigue over multiple waves of self-report victim surveys.

Figure 1. Three perspectives used to examine respondent fatigue.
- Perspective 1: examines respondent fatigue and survey-design effects; uses contemporary NCVS data; uses individuals as the unit of analysis.
- Perspective 2: uses nonresponse as the measure of fatigue; focuses on first and second interviews only; uses individuals as the unit of analysis.
- Perspective 3: uses nonresponse as the measure of fatigue; focuses on multiple waves of interviews; integrates theoretical concepts of household nonresponse.
Before each perspective is presented in greater detail and analyses conducted, a description of the data used for this study is offered.
Data

Secondary analysis of data collected via the National Crime Victimization Survey (NCVS) is used for this study. The NCVS is a stratified, multistage, cluster sample employing a rotating panel design. Stratifying the NCVS sample involves dividing the eligible population into strata, or groups, based on the variable(s) of stratification (e.g., region). The sample is selected from these strata. Cluster sampling is a procedure in which the population is divided into clusters (e.g., housing units selected within sampled enumeration districts). Once clustered, a probability sample of clusters is selected for study. Multistage refers to the fact that there is more than one step in the sampling process.

NCVS interviews are conducted continuously throughout the year in a rotating panel design. In this scheme the sample of households is divided into six rotation groups. Within each of the six rotation groups, six panels are designated. A different panel is interviewed once every six months, for a total of seven interviews. A new rotation group of households enters the sample every six months, replacing a group as it is phased out after being in the sample.17 Household members eligible for interview are those individuals age 12 or older residing in the home at the time of the survey. Interviews with respondents are gathered through both face-to-face and telephone interviews.

17 See Appendix C for a copy of the NCVS Rotation Chart (NCVS-551).
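As a rough illustration of this rotation scheme, the sketch below generates the interview months for a single household and for staggered rotation groups; it simplifies the actual rotation chart (NCVS-551) considerably, and the month numbering is arbitrary.

```python
def interview_months(entry_month, total_interviews=7, interval=6):
    """Months (counted from an arbitrary month 0) in which a household
    entering the sample at `entry_month` is interviewed: once every six
    months, seven times in all."""
    return [entry_month + wave * interval for wave in range(total_interviews)]

# Rotation groups enter the sample staggered six months apart, so in any
# given period the sample mixes households at different times-in-sample.
for group in range(6):
    print(f"rotation group {group + 1}: interviewed in months "
          f"{interview_months(group * 6)}")
```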
During the basic screening interview, demographic information such as age, gender, race, and Hispanic origin for each eligible household member is collected. Some of this information (i.e., age and marital status) is updated during subsequent interviews if necessary. When respondents report an incident during this process, detailed incident-based data are collected. For example, characteristics of the crime (e.g., month, time, location, and type of crime), victim and offender relationship, offender characteristics, self-protective actions taken by the victim, consequences of victim behaviors, whether the crime was reported to the police, and the presence of any weapons represent some of the information collected on the incident form.

NCVS Longitudinal Data File

Typically, each year NCVS data are compiled and released for public use. Recently, the Census Bureau compiled NCVS records from 1996 to 1999 and created a public-use, longitudinal data file (Bureau of Justice Statistics, 2002). The 1996-1999 NCVS Longitudinal Data File is a nested, hierarchical, incident record-defined file containing 5 types of records: 1) index address ID records; 2) address ID records; 3) household records; 4) personal records; and 5) incident records. The index address ID records are unique to the longitudinal file and allow linkage of individuals' records, for each sampled household, across all 7 waves of interviews. The address ID records contain household identifiers, as well as rotation and panel information. The household records contain information about the household as reported by the household respondent. Personal records contain information about each eligible household member as reported by that person. Finally, incident records contain data for each incident reported by an individual respondent.
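In practice, exploiting this structure means using the index address ID (together with a within-household person identifier) to link a respondent's personal records across waves. A minimal pandas sketch; the column names are hypothetical stand-ins for the file's actual identifiers.

```python
import pandas as pd

# Hypothetical person-level extract: one row per respondent per wave.
persons = pd.DataFrame({
    "index_address_id":  ["A1", "A1", "A1", "B2"],
    "person_line_no":    [1, 1, 2, 1],
    "wave":              [1, 2, 1, 1],
    "reported_incident": [0, 1, 0, 0],
})

# Group each individual's interviews and order them in time; attrition and
# household mobility surface as persons with fewer than seven linked rows.
linked = (persons.sort_values("wave")
                 .groupby(["index_address_id", "person_line_no"]))
print(linked["wave"].count())
```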
The use of the NCVS, specifically the longitudinal release of the NCVS, offers advantages in studying respondent fatigue. First, by using the longitudinal NCVS one is able to shift the unit of analysis to the individual respondent. This is a more conceptually appealing way to examine respondent fatigue, since it is the individual who learns the survey design and then responds based on this knowledge. Also, by shifting the unit of analysis to the individual respondent, and using the longitudinal file, one is able to follow a specific respondent over time. The shift in unit of analysis also means that household mobility may be accounted for. Another advantage is that focusing on the individual respondent allows the removal of unbounded interviews. The use of unbounded data results in artificially high estimates of victimization, as respondents telescope out-of-scope victimizations into the current reference period (Addington, 2005). In sum, post-redesign longitudinal NCVS data allow a better opportunity to investigate the issue of respondent fatigue believed to be associated with self-report victim surveys.
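Because the file follows individuals, the unbounded interview can be identified per respondent rather than per panel, which matters when a person's first interview falls after TIS1. A minimal sketch, under the same hypothetical column names as the previous sketch:

```python
import pandas as pd

def flag_unbounded(persons: pd.DataFrame) -> pd.DataFrame:
    """Mark each respondent's earliest linked interview as unbounded, even
    when it occurs after the household's first time-in-sample (e.g., a
    mover, or a household member who turns 12 mid-panel)."""
    first_wave = (persons.groupby(["index_address_id", "person_line_no"])
                         ["wave"].transform("min"))
    persons = persons.copy()
    persons["is_first_interview"] = persons["wave"] == first_wave
    return persons
```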
Perspective 1: Respondent Fatigue and Survey-Design Effects

Figure 2. Key elements of the first perspective: examines respondent fatigue and survey-design effects; uses contemporary NCVS data; uses individuals as the unit of analysis.

The first perspective examines respondent fatigue by replicating the original work of Lehnen & Reiss (1978a, 1978b) with contemporary victimization data produced by the National Crime Victimization Survey (NCVS). The availability of longitudinal NCVS data makes it possible to not only replicate the classic work of Lehnen and Reiss (1978a, 1978b), but to extend it in many ways as well. First, the longitudinal file provides a large representative sample (n > 323,000). Initial estimates of individual fatigue bias were based on small, non-representative, cross-sectional samples, raising the possibility that findings are not generalizable. Second, extant data allow the unit of analysis to shift from the sub-group to the individual. Lehnen and Reiss (1978a, 1978b) utilize subgroups, not individual respondents, as the unit of analysis. These subgroups are constructed based on 4 response-effect variables.18

18 The 4 variables include 1) the number of incident reports completed during the current interview (0, 1, 2, 3 or more); 2) the number of prior interviews completed (0, 1, 2-3, 4 or more); 3) the number of incident reports completed during the previous interviews (0, 1, 2, 3 or more); and 4) the survey mode used during the current interview (in person or telephone).
While these findings offer insight into the variation associated with these aggregated groups, they do not indicate whether an individual, moving across survey enumerations, would report fewer victimizations over time. Assuming that the findings from Lehnen and Reiss (1978a, 1978b) also apply to the individual would be a commission of ecological fallacy. At the time of Lehnen and Reiss's (1978a, 1978b) work, it was not possible to match individual respondents across enumerations, and conclusions about individual fatigue bias could not be made. With new data, it is possible to assess factors that may predict individual fatigue bias over time.

Another way the work of Lehnen and Reiss (1978a, 1978b) is extended is by controlling for changes in household composition across interviews. As noted by Lehnen and Reiss, as well as by Biderman and Cantor (1984), it is unclear how much of the suspected response effect measured in earlier work resulted from design effects or from sample attrition. The subgroup as the unit of analysis prohibited following individual respondents through successive interviews. This is problematic since research shows that households that experience victimization at higher rates are most likely to move and no longer be in the sample (Dugan, 1999). Without the ability to follow the individual, Lehnen and Reiss note, "the decline in observed reporting with number of previous interviews may be at least partially the result of sample attrition and not response fatigue" (p. 121).

Third, Lehnen and Reiss (1978a, 1978b) do not control for theoretically relevant victimization correlates.
Without controlling for important correlates of victimization risk, the true importance of number of prior interviews, number of prior reported victimizations, and survey mode on the level of victimization reporting is unclear.

Finally, it is unknown if the conclusions reached by Lehnen and Reiss (1978a, 1978b) are applicable today, for two major reasons. First, the NCS underwent a major redesign that was implemented in 1992. The survey today is a substantially improved instrument. The differences between the pre- and post-redesign survey are so great that comparing estimates from the NCS to those derived from the NCVS is not recommended (Taylor & Rand, 1995). And second, advances in statistical software now allow one to account for the complex survey design of the NCVS, something not available to Lehnen and Reiss. Failure to take into account the fact that the NCS and the NCVS data come from stratified, multistage, cluster sampling will lead to an underestimation of standard errors and potentially erroneous conclusions.

Lehnen and Reiss (1978a, 1978b) investigated response effects to the extent possible given the technological and data limitations they faced. In fact, data limitations have long hindered a thorough examination of several aspects of the NCS/NCVS methodology. Fortunately, with the availability of longitudinal NCVS data, a more rigorous testing of response effects on the level of subsequent reported victimization is possible. Not only is it possible, it is long overdue.
Objective

The objective of the first perspective is to broaden our overall understanding of respondent fatigue believed to manifest in contemporary self-report victim surveys due to certain survey-design features. A series of questions are addressed in order to meet this goal. First, do survey-instrument characteristics (i.e., the number of prior interviews, the number of prior reported victimizations, and survey mode19) influence a respondent's decision to report victimization? Second, are individual demographic characteristics significant predictors of whether a respondent will report victimization, independent of survey-design effects? And third, what is the relative influence of instrument, individual, and lifestyle characteristics on a respondent's decision to report victimization when considered together? Stated formally, the current study tests the following three research hypotheses:

H1: Respondents are less likely to report victimization in current interviews if they participated in prior interviews, net of other relevant predictors of victimization.

H0: No relationship exists between the likelihood that respondents report victimization in current interviews and the number of prior interviews in which respondents participated, while controlling for other relevant predictors of victimization.

H2: Respondents are less likely to report victimization in current interviews if they reported victimization during prior interviews, net of other relevant predictors of victimization.

H0: No relationship exists between the likelihood that a respondent will report victimization during current interviews and the number of previously reported victimizations, while controlling for other relevant predictors of victimization.

19 Survey mode reflects the survey-delivery method (i.e., face-to-face or via the telephone) used in the respondent's current interview.
H3: The likelihood that respondents report victimization during current interviews is affected by survey mode.

H0: Survey mode does not affect the likelihood that respondents will report victimization during current interviews, net of other relevant predictors of victimization.

These hypotheses were tested using a series of survey-weighted logistic regression models (see Hosmer & Lemeshow, 2000; StataCorp, 2003). The initial model explores the influence of survey-design effects on reported victimization in order to address the first research question. Next, a model that includes only control variables is used to illustrate their independent effect on reported victimization. Finally, a fully specified model explores the influence of all survey-design characteristics and control variables on reported victimization together, which speaks to the third research question and provides results that are used to assess each of the aforementioned hypotheses.

By using a survey-weighted logistic regression approach, modeling takes into account the complex sample design and clustering factors associated with the NCVS survey methodology. Use of other statistical software, most of which assumes a simple random sample, would lead to the underestimation of standard errors and erroneous conclusions. Before presenting the results of the models noted above, however, a description of the measures is provided.
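The models themselves were estimated with survey routines in Stata (StataCorp, 2003). For readers without Stata, the sketch below approximates a survey-weighted logistic regression in Python with statsmodels, combining person weights with PSU-cluster-robust standard errors. The data and variable names are fabricated for illustration, and a full Taylor-linearized survey estimator would additionally account for the stratification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Fabricated analysis file: binary outcome, one predictor, person weight,
# and a primary sampling unit (PSU) identifier for clustered variances.
df = pd.DataFrame({
    "victimized":  rng.binomial(1, 0.06, 500),
    "prior_waves": rng.integers(1, 7, 500),
    "weight":      rng.uniform(500, 2000, 500),
    "psu":         rng.integers(1, 50, 500),
})

X = sm.add_constant(df[["prior_waves"]])
model = sm.GLM(df["victimized"], X,
               family=sm.families.Binomial(),
               freq_weights=df["weight"])
# Clustering on the PSU keeps the standard errors from being understated,
# as they would be under a simple-random-sample assumption.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["psu"]})
print(result.summary())
```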
Measures

Described in greater detail in the previous chapter, the 1996-1999 NCVS Longitudinal Data File contains 323,265 personal records. The file consists of eighteen quarterly collection cycles. A cross-section of the data comprised of various times-in-sample is necessary for answering the research questions and hypotheses noted above. Several selection criteria were therefore applied to the longitudinal data file in order to create a subset of data. First, a simple random sample of 1/18 of all cases was chosen. This process resulted in a cross-section of various points in time-in-sample for different respondents, approximately equal to the number of all interviews conducted during any given quarter. Second, all unbounded interviews were excluded. The use of individual-level data allows for an important control with respect to unbounded interviews. At the panel level, initial interviews are identified by the time-in-sample (i.e., time-in-sample one, or TIS1). There are instances, however, where a respondent's initial interview does not occur during TIS1. For example, a respondent might move into a household after TIS1, or a respondent might turn 12 after the household has completed its first interview. The respondent's first (i.e., unbounded) interview in both situations described above occurs after TIS1. Finally, since the dependent variable is current victimization, noninterviews that occurred during the current interview were excluded. Application of these selection criteria resulted in a sample of 10,613 person-level records.
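Expressed as data-processing steps, these selection criteria reduce to three filters. A minimal sketch, reusing the hypothetical is_first_interview flag from the earlier sketch along with a hypothetical completion indicator:

```python
import pandas as pd

def select_analysis_sample(persons: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """Apply the three selection criteria described above."""
    # 1) Simple random sample of 1/18 of all person records, roughly the
    #    volume of interviews fielded in one quarterly collection cycle.
    subset = persons.sample(frac=1 / 18, random_state=seed)
    # 2) Exclude unbounded interviews (a respondent's first interview,
    #    whether or not it fell at TIS1).
    subset = subset[~subset["is_first_interview"]]
    # 3) Exclude noninterviews at the current interview, where the
    #    dependent variable is measured.
    return subset[subset["current_interview_completed"]]
```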
Dependent variable

As noted above, the current perspective examines how certain design features of self-report victim surveys may affect a respondent's decision to report victimization. Therefore, the dependent variable is whether the respondent reports victimization during a current interview20 and is referred to as current victimization. Victimization includes threatened, attempted, and completed violent crimes (i.e., rape, sexual assault, robbery, and simple and aggravated assault), property crimes (i.e., burglary, motor vehicle theft, and property theft), and personal-property theft (i.e., pocket picking and purse snatching). Current victimization is measured as a dichotomous variable with two response categories: 0 indicates no victimization reported during a respondent's current interview, whereas 1 indicates at least one reported victimization. Most of the 10,613 respondents (94%) did not report victimization during their current (i.e., most recent) interview (see Table 1).

20 Current interview is used to describe the most recent interview in the series of interviews in which a respondent participates. It is during the current interview that reported victimization is measured.

Conceptually, victimizations identified by the NCVS are considered discrete events measured in terms of incidents. Incidents that occur continuously and that cannot be differentiated by respondents are excluded.21 The NCVS only measures events that can be uniquely described, thus ignoring "classes of crimes for which victimization is quite prevalent even though the frequency of individual incidents is unknown" (Skogan, 1981, p. 7). In addition to being discrete incidents, as noted above, victimizations are defined independently of those directly involved with the crime. That is, respondents are not asked to determine whether or not they have been victimized. Combined, these three conceptual elements help define the way in which victimization is measured for the current study.

21 See Chapter Two for a more detailed description of series victimization.

Measuring victimization is not unlike measuring other self-reported social phenomena. That is, repeated application of the survey instrument will produce some level of variation in victimization measured. Since no measure is absolutely reliable, assessing the reliability of self-reported victimization is a matter of degree. Again, past research examining both test-retest as well as internal consistency measures of self-report data shows that self-reported measures are on par with (and in some cases exceed) most social science measures (Belson, 1968; Braukmann, Kirigin & Wolf, 1979; Hindelang, Hirschi & Weiss, 1981; Huizinga & Elliott, 1986; Kulik, Stein & Sarbin, 1968).
Table 1. Descriptive statistics for the first perspective.

Dependent variable
  Current victimization (min 0, max 1): No, 93.5%; Yes, 6.5%

Independent variables
  Prior interviews (dummy variables; min 1, max 6): 1 (reference), 26.4%; 2, 20.2%; 3, 17.3%; 4, 13.7%; 5, 12.2%; 6, 10.2%
  Prior victimizations (dummy variables; min 0, max 3): 0 (reference), 82.5%; 1, 12.8%; 2, 3.0%; 3 or more, 1.7%
  Survey mode (min 0, max 1): Telephone, 84.5%; Face-to-face, 15.5%

Control variables
  Demographic characteristics
    Age (in years): M = 44.8, SD = 18.5, min 12, max 90
    Gender (min 0, max 1): Male, 45.3%; Female, 54.7%
    Race/ethnicity (dummy variables; min 1, max 4): White non-Hispanic (reference), 77.0%; Black non-Hispanic, 9.7%; Other non-Hispanic, 3.8%; Hispanic, any race, 9.5%
    Marital status (dummy variables; min 1, max 5): Married (reference), 57.9%; Never married, 23.8%; Widowed, 7.1%; Divorced, 9.1%; Separated, 2.1%
    Educational attainment (in years): M = 13.2, SD = 3.6, min 0, max 19
  Lifestyle characteristics
    Time away from home, shopping (dummy variables; min 1, max 5): Never (reference), 1.4%; Less than once a month, 2.4%; Once a month, 10.2%; Once a week, 64.3%; Once a day, 21.4%; Don't know, 0.4%
    Time away from home, entertainment (dummy variables; min 1, max 5): Never (reference), 6.4%; Less than once a month, 8.8%; Once a month, 16.4%; Once a week, 48.4%; Once a day, 19.5%; Don't know, 0.4%
    Use public transportation (dummy variables; min 1, max 5): Never (reference), 78.7%; Less than once a month, 10.4%; Once a month, 3.8%; Once a week, 3.0%; Once a day, 3.9%; Don't know, 0.2%
    Months in current residence: M = 140.2, SD = 141.2, min 1, max 1,068
    Times moved in the past 5 years: M = 0.7, SD = 1.2, min 0, max 15

Note: Data file is 1996 to 1999 longitudinally linked National Crime Victimization Surveys. Statistics reflect weighted data. Unweighted n = 10,613.
In addition to reliability, past research has examined the validity of self-reported victimization. Early studies used to establish interview protocol for the National Crime Survey (NCS) employed records checks as a means for assessing the validity of self-reported victimization. In three different studies conducted by the U.S. Census Bureau, victims identified in official law-enforcement records were interviewed, and results of the interview were compared with information contained within the police reports (Dodge, 1970; Turner, 1972; Yost & Dodge, 1970). A separate study employed a reverse records check, where attempts were made to match reported victimizations with official data (Schneider, 1977). While the aforementioned studies were suspected of overestimating the accuracy of reported victimizations identified in the NCS, concordance between official data and other types of self-reported acts (i.e., delinquency and conviction) is generally high (Blackmore, 1974; Farrington, 1973; Hardt & Petersen-Hardt, 1977; Hathaway, Monachesi & Young, 1960; Rojeck, 1983).
Independent variables

Lehnen and Reiss (1978a, 1978b) theorized that variation in reported victimization across waves of interviews resulted from one of two sources: actual changes in victimization experiences, or a respondent learning about the survey design and choosing not to report victimizations in order to minimize his or her burden. In order to account for both of these sources, a series of instrument-level characteristics are included in the models presented below.

Consistent with the work of Lehnen and Reiss (1978a, 1978b), three instrument-level independent variables are included in the current analyses. Instrument-level variables include 1) the number of prior interviews in which a respondent has participated (prior interviews), 2) the total number of victimizations reported during a respondent's prior interviews (prior victimizations), and 3) the mode in which the current interview is conducted (survey mode). Prior interviews is measured as the number of interviews in which a respondent participated prior to their current interview, and ranges from 1 to 6. Nearly half of all respondents (47%) were interviewed fewer than 3 times prior to their current interview. Prior victimizations is measured as an ordinal variable with 4 response categories: 0 indicates no victimizations reported during prior interviews, 1 indicates 1 reported victimization, 2 indicates 2 victimizations, and 3 indicates 3 or more victimizations reported during prior interviews. The majority of respondents (83%) reported no victimizations prior to their current interview.
The final independent variable, referred to as survey mode, is a dichotomous variable coded as 0 (telephone interview) or 1 (face-to-face interview) to reflect the mode of interview used during the respondent's current interview. Most of the current interviews (85%) were conducted over the telephone.

Control variables

These analyses incorporate important demographic and lifestyle predictors of victimization as control variables. Excluding predictors of victimization risks model misspecification and increases the chances of erroneous conclusions. The literature demonstrates the significance of age, gender, race and Hispanic origin, marital status, and educational attainment as correlates to victimization (e.g., see Catalano, 2004, 2005; see also Rennison & Rand, 2003). Therefore, these respondent characteristics are included in the models.

Age reflects the age of the respondent during the current interview and is coded as a continuous variable ranging from 12 to 90. Gender is coded as 0 for male respondents and 1 for female respondents. Most respondents are female (55%). Race and Hispanic origin is captured through a set of 4 dummy variables: white non-Hispanic (77%), black non-Hispanic (10%), other non-Hispanic (4%), and Hispanic, any race (10%).22 For use in the models, white non-Hispanic is the excluded category. Marital status is captured using a set of 5 dummy variables: currently married (58%), never married (24%), widowed (7%), divorced (9%), and separated (2%). Currently married is the excluded category.

22 The other non-Hispanic category includes individuals who describe themselves as American Indian, Aleut, Eskimo, Asian, or Pacific Islander. Hispanic is a measure of ethnicity and may include persons of any race.
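Constructing these dummy-variable sets is mechanical. The sketch below shows one way to do it in pandas, dropping each reference category so that every remaining dummy is a contrast against it; the labels are abbreviated stand-ins for the NCVS codes.

```python
import pandas as pd

df = pd.DataFrame({
    "marital":  ["married", "never", "widowed", "divorced", "separated"],
    "race_eth": ["white_nh", "black_nh", "white_nh", "hispanic", "other_nh"],
})

# One 0/1 column per category; dropping the reference columns ("married",
# "white_nh") makes each remaining dummy a contrast against that category.
dummies = pd.get_dummies(df, columns=["marital", "race_eth"])
dummies = dummies.drop(columns=["marital_married", "race_eth_white_nh"])
print(dummies.head())
```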
Finally, educational attainment is measured as a continuous variable based on the years of schooling completed by the respondent. On average, respondents had completed slightly more than 13 years of education at the time of their most recent interview.

Several lifestyle variables are also included in the analyses as control variables. Again, the use of individual-level data permits controlling for these correlates to victimization. Shopping reflects how frequently a respondent spends time outside the home shopping at drug, clothing, grocery, hardware, and convenience stores, and is captured using a set of 5 dummy variables: never (1%), less than once a month (2%), once a month (10%), once a week (64%), and once a day (21%). Never is the reference category. Evening represents how often a respondent spends his/her evenings away from home for work, school, or entertainment, and is also captured using a set of 5 dummy variables: never (6%), less than once a month (9%), once a month (16%), once a week (48%), and once a day (20%). Again, never is the reference category. Transportation is another lifestyle control variable, which indicates how often a respondent rides public transportation. Like the previous two lifestyle variables, it is captured using a set of 5 dummy variables: never (79%), less than once a month (10%), once a month (4%), once a week (3%), and once a day (4%). Again, never is the reference category. Residency, measured in terms of months, is a continuous variable used to reflect the length of time a respondent has lived at their current residence. The length of time respondents report having lived at their current residence ranges from 1 month to nearly 89 years. On average, however, at the time of their most recent interview, respondents report living at their current residence for between 11 and 12 years.
Finally, moved indicates the number of times a respondent moved during the 5 years prior to their most recent interview. On average, respondents report that they moved less than once during the previous 5 years.

Results

Do survey-instrument characteristics associated with self-report victim surveys influence respondents' decisions to report victimization? Initial findings reveal significant relationships between certain victim-survey design features and a respondent's decision to report victimization, and are consistent with past research (Lehnen & Reiss, 1978a, 1978b). Table 2 presents results obtained from a partially specified survey-weighted logistic regression model, using survey-design features as predictors of reported victimization. The model reveals that the number of prior interviews has a negative effect on the likelihood that a respondent will report victimization during their current interview. In general, respondents who are interviewed more than once are less likely to report victimization during their current interview than respondents who are interviewed only once. Specifically, respondents with 2 (b = -0.35), 3 (b = -0.55), 4 (b = -0.83), 5 (b = -0.82), or 6 (b = -0.87) prior interviews are less likely than respondents with only 1 prior interview to report victimization. Again, these findings are consistent with the findings presented by Lehnen and Reiss (1978a), who conclude that "first-timers are more likely to report incidents" and that there is a general decline in reporting associated with an increasing number of prior interviews (p. 120).
Results also demonstrate that victimization reported during prior interviews has a positive effect on whether a respondent reports victimization during their current interview. In general, respondents who report victimization during prior interviews are more likely to report victimization during current interviews than respondents who have never reported victimization. Specifically, respondents who report 1 (b = 0.77), 2 (b = 1.29), or 3 or more (b = 1.98) victimizations during previous interviews are more likely to report victimization during their current interview than respondents who never report victimization. These findings are also consistent with findings offered by Lehnen and Reiss (1978a), who concluded that "respondents who have reported incidents in the past are more likely to do so currently" (p. 120).

Table 2. Partially specified survey-weighted logistic regression using survey-design effects to predict victimization(a).

Independent variables (b, SE, Wald, Exp(b))
  Prior interviews (dummy variables; 1 = reference)
    2: -0.35, 0.12, 8.63, 0.70
    3: -0.55, 0.11, 23.50, 0.58
    4: -0.83, 0.15, 30.37, 0.44
    5: -0.82, 0.15, 28.34, 0.44
    6: -0.87, 0.17, 25.65, 0.42
  Prior victimizations (dummy variables; 0 = reference)
    1: 0.77, 0.10, 54.51, 2.16
    2: 1.29, 0.19, 47.44, 3.63
    3 or more: 1.98, 0.20, 102.50, 7.22
  Survey mode (telephone = reference)
    Face-to-face: -0.20, 0.12, 2.97**, 0.82
  Constant: -2.45, 0.08, 962.92, 0.09
  -2 Log-Likelihood: -2455.39
  Nagelkerke R-squared: 0.04*

Note: Data file is 1996 to 1999 longitudinally linked National Crime Victimization Surveys.
(a) Victimization is coded (0,1). No reported victimization equals 0 and any reported victimization equals 1. Unweighted n = 10,613.
* p < .05; ** p < .10
Paradoxically, the relationship between reporting victimization during prior interviews and the likelihood that victimization will be reported during a respondent's current interview is inconsistent with the notion that exposure to repeated interviews due to survey-design methodology results in an increase in respondent burden and a corresponding decrease in reported victimization.

Finally, results of the first model demonstrate that survey mode has a slight effect on whether a respondent will report victimization. That is, results suggest that respondents interviewed in person are somewhat less likely to report victimization than respondents interviewed via the telephone (b = -0.20, p < .10). While findings from Lehnen and Reiss (1978a) also suggest survey mode is a determinant of whether victimization is reported, they conclude that respondents who are interviewed in person are more likely to report victimization than respondents whose interview is conducted over the phone.

One possible explanation of these two seemingly inconsistent findings could be attributed to the differences in the levels of analysis between the two studies. Recall that due to data limitations, Lehnen and Reiss (1978a, 1978b) were unable to conduct analyses at the individual level. Nevertheless, despite the seemingly inconsistent findings, both suggest survey mode can create a response effect in self-report victim surveys. Contemporarily, this issue is important due to the fact that an increasing number of NCVS surveys are being conducted over the telephone in an attempt to reduce costs. However, respondents who complete telephone interviews without repeated attempts to make contact differ demographically from those who must be tracked down to complete a survey in person when a telephone interview attempt fails.
Since these characteristics are also correlated to victimization, an opportunity to underestimate victimization as a result of a move towards more telephone surveys could be created.

The current perspective also poses the question, "Are individual demographic characteristics significant predictors of whether a respondent reports victimization, independent of survey-design effects?" Table 3 presents findings of a second partially specified survey-weighted logistic regression model using respondent demographics as well as lifestyle characteristics as predictors of reported victimization.

Many of the variables included in the second model are determinants of reported victimization. For example, younger respondents are more likely to report victimization during their current interview than older respondents (b = -0.02). Similarly, female respondents are somewhat more likely than male respondents to report victimization (b = 0.16, p < .10); and respondents who reportedly have never been married (b = 0.27), are divorced (b = 0.81), or are separated (b = 0.91) are more likely to report victimization than respondents who are reportedly married at the time their current interview was completed. Several lifestyle characteristics included in the second model are also determinants of whether a respondent reports victimization. For example, in general, respondents who report spending more time away from home shopping are less likely to report victimization than respondents who report never spending time away from home shopping. Additionally, results reveal a positive relationship between the extent to which respondents reportedly use public transportation and the likelihood that a respondent will report victimization. Specifically, respondents who use public transportation less than once a month (b = 0.44), once a month (b = 0.52), or once a day (b = 0.46) are more likely to report victimization than respondents who reportedly never use public transportation. Results from the second model also suggest that there is a positive relationship between respondent mobility and reported victimization. That is, respondents who move more frequently are more likely to report victimization than respondents who move less frequently (b = 0.09).
Table 3. Partially specified survey-weighted logistic regression using control variables to predict victimization(a).

Control variables (b, SE, Wald, Exp(b))
  Demographic characteristics
    Age: -0.02, 0.00, 17.77, 0.98
    Gender (male = reference)
      Female: 0.16, 0.09, 3.00**, 1.17
    Race/ethnicity (dummy variables; white non-Hispanic = reference)
      Black non-Hispanic: 0.16, 0.14, 1.32, 1.17
      Other non-Hispanic: -0.12, 0.21, 0.33, 0.89
      Hispanic, any race: 0.13, 0.15, 0.70, 1.13
    Marital status (dummy variables; married = reference)
      Never married: 0.27, 0.12, 5.06, 1.31
      Widowed: -0.06, 0.24, 0.06, 0.94
      Divorced: 0.81, 0.13, 40.19, 2.24
      Separated: 0.91, 0.22, 16.73, 2.48
    Educational attainment: 0.01, 0.01, 0.68, 1.01
  Lifestyle characteristics
    Time away from home, shopping (dummy variables; never = reference)
      Less than once a month: -0.32, 0.41, 0.60, 0.73
      Once a month: -0.67, 0.32, 4.40, 0.51
      Once a week: -0.58, 0.30, 3.70**, 0.56
      Once a day: -0.53, 0.30, 3.13**, 0.59
    Time away from home, entertainment (dummy variables; never = reference)
      Less than once a month: 0.36, 0.24, 2.20, 1.43
      Once a month: 0.07, 0.24, 0.10, 1.08
      Once a week: 0.19, 0.21, 0.83, 1.21
      Once a day: 0.53, 0.22, 5.99, 1.70
    Use public transportation (dummy variables; never = reference)
      Less than once a month: 0.44, 0.13, 10.69, 1.55
      Once a month: 0.52, 0.19, 7.25, 1.68
      Once a week: -0.40, 0.28, 1.95, 0.67
      Once a day: 0.46, 0.19, 6.14, 1.59
    Months in current residence: 0.00, 0.00, 0.01, 1.00
    Times moved in the past 5 years: 0.09, 0.03, 7.79, 1.09
  Constant: -2.36, 0.39, 36.86, 0.09
  -2 Log-Likelihood: -2441.06
  Nagelkerke R-squared: 0.04*

Note: Data file is 1996 to 1999 longitudinally linked National Crime Victimization Surveys.
(a) Victimization is coded (0,1). No reported victimization equals 0 and any reported victimization equals 1. Unweighted n = 10,613.
* p < .05; ** p < .10
Collectively, results from the second model demonstrate that most of the demographic and lifestyle characteristics examined are significant predictors of whether a respondent will report victimization; they also illustrate the need to consider these predictors in conjunction with instrument-level factors when considering survey-design effects on respondents' decisions to report incidents during victim-survey interviews.

The final research question asks, "What is the relative influence of instrument, individual, and lifestyle characteristics on respondents' decisions to report victimization when considered together?" Table 4 presents results from a fully specified survey-weighted logistic regression model. The model predicts the likelihood that a respondent will report victimization during their current interview, and contains variables related to survey-design characteristics as well as demographic and lifestyle factors. Results from this model not only help to answer the final research question, but also provide information that is used to evaluate each research hypothesis.
Table 4. Fully specified survey-weighted logistic regression predicting victimization(a).

Independent variables (b, SE, Wald, Exp(b))
  Prior interviews (dummy variables; 1 = reference)
    2: -0.31, 0.12, 6.46, 0.73
    3: -0.43, 0.11, 14.41, 0.65
    4: -0.66, 0.15, 18.96, 0.51
    5: -0.58, 0.16, 13.23, 0.56
    6: -0.60, 0.17, 12.56, 0.55
  Prior victimizations (dummy variables; 0 = reference)
    1: 0.64, 0.11, 34.53, 1.89
    2: 1.04, 0.19, 29.52, 2.83
    3 or more: 1.75, 0.20, 74.03, 5.77
  Survey mode (telephone = reference)
    Face-to-face: -0.29, 0.13, 5.37, 0.75

Control variables (b, SE, Wald, Exp(b))
  Demographic characteristics
    Age: -0.01, 0.00, 10.34, 0.99
    Gender (male = reference)
      Female: 0.17, 0.09, 3.64**, 1.19
    Race/ethnicity (dummy variables; white non-Hispanic = reference)
      Black non-Hispanic: 0.18, 0.14, 1.61, 1.20
      Other non-Hispanic: -0.08, 0.21, 0.13, 0.93
      Hispanic, any race: 0.16, 0.15, 1.09, 1.17
    Marital status (dummy variables; married = reference)
      Never married: 0.23, 0.12, 3.60**, 1.26
      Widowed: -0.10, 0.25, 0.18, 0.90
      Divorced: 0.66, 0.13, 26.42, 1.93
      Separated: 0.84, 0.23, 13.88, 2.33
    Educational attainment: 0.00, 0.01, 0.10, 1.00
  Lifestyle characteristics
    Time away from home, shopping (dummy variables; never = reference)
      Less than once a month: -0.24, 0.44, 0.31, 0.78
      Once a month: -0.62, 0.34, 3.33**, 0.54
      Once a week: -0.54, 0.32, 2.73**, 0.58
      Once a day: -0.48, 0.32, 2.21, 0.62
    Time away from home, entertainment (dummy variables; never = reference)
      Less than once a month: 0.36, 0.24, 2.22, 1.43
      Once a month: 0.07, 0.24, 0.08, 1.07
      Once a week: 0.17, 0.21, 0.65, 1.19
      Once a day: 0.52, 0.22, 5.74, 1.69
    Use public transportation (dummy variables; never = reference)
      Less than once a month: 0.43, 0.13, 10.08, 1.53
      Once a month: 0.51, 0.20, 6.87, 1.67
      Once a week: -0.42, 0.29, 2.11, 0.66
      Once a day: 0.49, 0.19, 6.74, 1.64
    Months in current residence: 0.00, 0.00, 0.01, 1.00
    Times moved in the past 5 years: 0.06, 0.03, 3.39**, 1.07
  Constant: -2.24, 0.41, 29.17, 0.11
  -2 Log-Likelihood: -2376.18
  Nagelkerke R-squared: 0.03*

Note: Data file is 1996 to 1999 longitudinally linked National Crime Victimization Surveys.
(a) Victimization is coded (0,1). No reported victimization equals 0 and any reported victimization equals 1. Unweighted n = 10,613.
* p < .05; ** p < .10
While the overall model produces a significant proportional reduction in error, a minimal amount of variance in reported victimization is explained (Nagelkerke R-squared = .03).23

23 A more comprehensive discussion of the model's explained variance is presented in the final chapter.
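For reference, the Nagelkerke statistic reported above is a rescaling of the Cox-Snell pseudo R-squared and can be computed directly from the null and fitted log-likelihoods. A minimal sketch with illustrative inputs (not the actual values behind Table 4):

```python
import math

def nagelkerke_r2(ll_null, ll_full, n):
    """Nagelkerke's rescaling of the Cox-Snell pseudo R-squared."""
    cox_snell = 1 - math.exp((2 / n) * (ll_null - ll_full))
    max_cox_snell = 1 - math.exp((2 / n) * ll_null)
    return cox_snell / max_cox_snell

# Illustrative log-likelihoods for a rare binary outcome with n = 10,613.
print(round(nagelkerke_r2(ll_null=-2500.0, ll_full=-2440.0, n=10613), 3))
```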
Nevertheless, all instrument-level factors considered are predictors of reported victimization, while controlling for other individual-level factors associated with victimization. The number of prior interviews, prior victimizations, and survey mode predict the likelihood that victimization will be reported during a current interview. For example, the number of prior interviews still has a negative effect on the likelihood that a respondent will report victimization, while controlling for other relevant predictors of victimization. Respondents with 2 (b = -0.31), 3 (b = -0.43), 4 (b = -0.66), 5 (b = -0.58), or 6 (b = -0.60) prior interviews are less likely to report victimization than respondents with only 1 prior interview. Victimization reported during prior interviews also remains positively correlated with whether a respondent reports victimization during their current interview. That is, respondents who report 1 (b = 0.64), 2 (b = 1.04), or 3 or more (b = 1.75) victimizations during previous interviews are more likely to report victimization than respondents who never report victimization during previous interviews, net of other relevant variables. Finally, results of the final model demonstrate that survey mode still has an effect on whether a respondent will report victimization, once other correlates to victimization are considered. That is, results suggest that respondents interviewed face-to-face (b = -0.29) are less likely to report victimization than those interviewed via the telephone. Interestingly, the relative influence of many of the survey-design effects is diminished after controlling for relevant demographic and lifestyle characteristics, as demonstrated in Table 5.

Tests for significant differences between coefficients produced by the first (i.e., partially specified) and third (i.e., fully specified) models are presented in the final table. Results show that the relative impact of the number of prior interviews on the likelihood a respondent will report victimization is less when individual correlates to victimization are considered than when they are not. The relative impact of the number of prior victimizations on the likelihood that a respondent will report victimization is also significantly diminished when other correlates to victimization are considered.
That is, regardless of the number of prior victimizations reported by respondents during previous interviews, the likelihood that respondents report victimization during their current interview is less when individual and lifestyle correlates to victimization are considered than when they are not. These findings demonstrate the importance of being able to examine respondent fatigue believed to be associated with certain survey-design effects of self-report victim surveys at the individual level. More importantly, these findings enable the research hypotheses associated with this perspective to be evaluated.

Table 5. Impact on survey-design effects after controlling for individual correlates to victimization(a).

Independent variables (difference between coefficients(b))
  Prior interviews (dummy variables; 1 = reference)
    2: 1.46
    3: 2.15
    4: 2.26
    5: 1.80
    6: 1.62
  Prior victimizations (dummy variables; 0 = reference)
    1: -3.47
    2: -2.59
    3 or more: -4.23
  Survey mode (telephone = reference)
    Face-to-face: 1.49

Note: Data file is 1996 to 1999 longitudinally linked National Crime Victimization Surveys.
(a) Victimization is coded (0,1). No reported victimization equals 0 and any reported victimization equals 1.
(b) See Brame, Paternoster, Mazerolle & Piquero (1998).
* p < .05
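The test cited in note (b) of Table 5 compares a coefficient across two models with a z-statistic, z = (b1 - b2) / sqrt(SE1^2 + SE2^2) (Brame, Paternoster, Mazerolle & Piquero, 1998). Below is a minimal sketch of that statistic, shown with illustrative values rather than the dissertation's full-precision estimates.

```python
import math

def coef_difference_z(b1, se1, b2, se2):
    """z-test for the equality of two regression coefficients
    (Brame, Paternoster, Mazerolle & Piquero, 1998)."""
    return (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)

# Illustrative call: a coefficient of -0.35 (SE 0.12) in one model versus
# -0.31 (SE 0.12) in another.
print(round(coef_difference_z(-0.35, 0.12, -0.31, 0.12), 2))
```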
62 Conclusions The current study demonstrates that survey -instrument characteristics such as the number of prior interviews, th e number of prior reported vict imizations, and survey mode that are associated with cont emporary self-report victim su rveys influence a respondents decision to report victimization. Based on th ese results, we can reject the first null hypothesis in favor of the alternative: Responde nts are less likely to report victimization if they have participated in prior interv iews, net of other relevant predictors of victimization. Similarly, we can reject the th ird null hypothesis in favor of its alternative. That is, the likelihood that a respondent will report victimization depends on survey mode. However, results from the current pe rspective do not permit the rejection of the second null hypothesis. Although a link is established between the likelihood a respondent will report victimization during a cu rrent interview and wh ether victimization was reported during prior interviews, it is not in the hypothesized direction. Therefore, the second null hypothesis is not rejected. Armed with this knowledge, se lf-report victim-survey admi nistrators may want to reconsider some of the methods currently us ed for conducting longitudinal victim surveys like the NCVS. For example, since there is an inverse correlation between the number of prior interviews and victimization reported during longitudinal victim surveys, fatigue bias that manifests as a response effect may be reduced by decreasing the number of times a household is retained in sample. The Census Bureau attempted to identify the optimal number of months that households should remain in sample when the NCS was initially fielded (Woltman & Bushery, 1977b). Nearly three decades have passed since
In light of the current findings, perhaps the time has come to reexamine the optimal number of times to retain a household in sample for contemporary longitudinal self-report victim surveys.

Self-report victim-survey administrators should also consider developing statistical methods that could be used to correct for the types of response effects observed herein. Statistical adjustments have been developed recently by Ybarra & Lohr (2000) that correct for missing NCVS data. Similar algorithms could be created that address the positive correlation between reports of victimization during previous interview waves and reports of victimization during a respondent's current interview. Administrators of multiple-wave victim surveys like the NCVS may also need to develop statistical adjustments that attempt to offset response effects associated with survey mode.

Telephone surveys are easier and less expensive to conduct than in-person interviews. One way administrators are attempting to reduce costs associated with the NCVS is by replacing more face-to-face interviews with telephone surveys. However, current results suggest that telephone surveys produce more reported victimization by respondents than face-to-face interviews. If mode is a source of response bias in self-report victim surveys that manifests in terms of decreased reported victimization, then the move away from a survey mode that produces less reported victimization may artificially inflate victimization estimates. Therefore, statistical adjustments for survey mode may need to be developed in order to address possible response bias introduced when an increased number of self-report victim surveys are conducted over the telephone.

The current study also demonstrates that individual demographic characteristics are important predictors of reported victimization, independent of survey-design effects.
More importantly, the relative influences of self-report victim-survey designs on respondents' decisions to report victimization are diminished when considered in conjunction with individual and lifestyle correlates to victimization. Collectively, these findings underscore the need to incorporate correlates to victimization in any analysis that seeks to assess the effects of victim-survey design on respondent fatigue.

Based on current findings, the conclusion that survey-design effects of self-report victim surveys produce respondent fatigue rests on the assumption that fatigue manifests as a decrease in respondents' willingness to report victimization. The current study is unable to differentiate between instances in which a respondent does not report victimization because of fatigue and those in which a respondent does not report victimization because he/she was simply not victimized. Findings based on this operational definition of fatigue may not necessarily be incorrect, but by revisiting this topic with an alternative definition, an improved understanding of fatigue bias as it pertains to self-report victim surveys can be realized. The second perspective offers a test of just such an alternative.
Perspective 2: Modifying the Operational Measure of Respondent Fatigue

Figure 3. Key elements of the second perspective.
  Perspective 1: Uses contemporary NCVS data; examines respondent fatigue and survey-design effects; uses individuals as the unit of analysis.
  Perspective 2: Uses nonresponse as the measure of fatigue; focuses on first and second interviews only; uses individuals as the unit of analysis.
  Perspective 3: Uses nonresponse as the measure of fatigue; focuses on multiple waves of interviews; integrates theoretical concepts of household nonresponse.

Lehnen and Reiss (1978a, 1978b) define respondent fatigue in terms of a reduction in reported victimization during subsequent waves of victim-survey interviews. If panels report a higher number of victimizations during an initial interview compared to later interviews, respondent fatigue is indicated, according to Lehnen and Reiss. This measurement scheme does not account for instances when respondents are simply victimized less often during the second reference period compared to the first. Therefore, this measure of respondent fatigue raises the possibility of misclassifying individuals as fatigued when they are simply victimized less often over time.

The issue of respondent fatigue can be further examined by modifying the operational measure of fatigue in terms of whether respondents who are exposed to longer interviews during their first interview (i.e., they were victims and provided information for an incident report) are more likely to refuse to participate in the subsequent interview (rather than reduce the level of victimizations they reveal).
Linking NCVS interviews from first-time subjects to information about their second interview 6 months later can be used to make this assessment. The level of respondents' refusal to participate (a Type-Z(24) noninterview in NCVS victim surveys) during the second interview can be assessed for all respondents. Furthermore, as in the initial perspective, instrument- and respondent-level characteristics can also be examined to provide a better understanding of the correlates of respondent fatigue that is operationalized as nonresponse in self-report victim surveys.

Objectives

The objective of the second perspective is to expand our understanding of respondent fatigue that may be associated with the design of contemporary self-report victim surveys. As with the initial perspective, a series of questions are addressed in order to meet this goal. First, do survey instrument characteristics (i.e., the number of prior reported victimizations, and survey mode(25)) influence respondents' decision to participate in self-report victim surveys?(26) Second, are individual demographic characteristics significant predictors of whether a respondent will participate in self-report victim surveys, independent of survey-design effects? And third, what is the relative influence of instrument and individual characteristics on interview participation in self-report victim surveys when considered together?

24. A Type-Z noninterview (i.e., refusal or never available) occurs when an eligible respondent does not provide an interview and the respondent is not the household respondent. A household respondent is the household member that is selected by the interviewer to be the first household member interviewed. The expectation is that the household respondent will be able to provide information for all persons in the sample household.
25. Survey mode reflects the survey-delivery method (i.e., face-to-face or via the telephone) used in the respondent's initial interview.
26. Since data for initial and subsequent interviews are used in this study, a variable that captures information on the number of prior interviews is not included. This variable will be reintroduced into the analysis when respondent fatigue is assessed over multiple waves of interviews.
But for the change in operational measure of fatigue, these questions are nearly identical to those posed in the initial study and can also be stated formally as two research hypotheses:

H1: Subsequent interviews are more likely to result in nonresponse if respondents report victimization during initial interviews, while controlling for differences in individual demographics.
H0: Alternatively, no relationship between nonresponse and victimization reported during initial interviews exists.

H2: The likelihood that subsequent interviews will result in nonresponse is affected by survey mode, net of differences in respondent demographics.
H0: Alternatively, survey mode has no effect on whether subsequent interviews are completed.

The analytic strategy adopted to test these hypotheses does not change across the first two perspectives. That is, tests are again carried out using a series of survey-weighted logistic regression models (StataCorp, 2003). The initial models explore the influence of instrument-level characteristics on individuals' participation during the second wave of interviews (i.e., TIS2). Specifically, these models consider the survey mode used and reporting of an incident during the screening process during the first interview. Next, a model that includes only respondent demographics to determine the role that these variables play on respondent participation during TIS2 is offered. Finally, a fully specified model follows that explores the influence of all instrument- and respondent-level characteristics on individuals' participation during TIS2.
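For readers who wish to replicate the general estimation strategy, the sketch below fits a weight-adjusted logistic regression in Python with the statsmodels library. The analyses reported here were run in Stata, so this is only an illustrative approximation: the file and variable names (nonresponse_tis2, victimizations_tis1, face_to_face_tis1, weight) are hypothetical placeholders, and treating the person-level weights as frequency weights is a simplification of full design-based (svy-style) variance estimation.

import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis file: one record per respondent, with a 0/1
# nonresponse indicator at TIS2, TIS1 instrument characteristics, and
# a person-level survey weight.
df = pd.read_csv("ncvs_linked_subset.csv")

# Outcome: 1 = Type-Z noninterview at TIS2, 0 = completed interview.
y = df["nonresponse_tis2"]

# Instrument-level predictors measured at TIS1.
X = sm.add_constant(df[["victimizations_tis1", "face_to_face_tis1"]])

# Weighted logit estimated as a binomial GLM; freq_weights applies the
# survey weights but does not reproduce complex-design standard errors.
model = sm.GLM(y, X, family=sm.families.Binomial(), freq_weights=df["weight"])
print(model.fit().summary())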
Upon review of the fully specified model, two additional models are offered in order to provide a more detailed understanding of the particular effect survey mode has on nonresponse by assessing models for telephone and face-to-face interviews at TIS1 separately. Before presenting the results of these models, however, a description of the measures used is provided.

Measures

This perspective also relies on data contained in the NCVS Longitudinal Data File.(27) As noted above, the 1996-1999 NCVS Longitudinal Data File contains 323,265 personal records, consisting of eighteen quarterly collection cycles. And like the previous approach, several selection criteria were applied to the longitudinal file to create a subset of data used in association with this perspective. A description of the criteria follows.

Only an individual's initial and subsequent exposures to the survey were included in the current subset of longitudinal data. Because initial exposure to the survey must have resulted in a completed face-to-face or telephone survey, all individual noninterviews (i.e., Type-Z noninterviews) at TIS1 were excluded. Further, proxy interviews during either the first or second interview were excluded. Because the sampling unit in the NCVS is a household, households were included only if the occupants did not move out of the sample address between the initial and subsequent exposure. Finally, only Type-Z noninterviews in which the respondent refused to be interviewed, and noninterviews occurring when the respondent was never available, were included in the data.

27. For complete information concerning the NCVS Longitudinal Data File see Chapter Three.
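Purely as an illustration of how these selection criteria might be applied in code, the sketch below filters a person-level extract with pandas. Every column name here is a hypothetical placeholder; the actual layout of the NCVS Longitudinal Data File is documented in Chapter Three, not reproduced in this sketch.

import pandas as pd

persons = pd.read_csv("ncvs_longitudinal_persons.csv")  # hypothetical extract

subset = persons[
    persons["time_in_sample"].isin([1, 2])       # initial and subsequent exposures only
    & (persons["type_z_at_tis1"] == 0)           # TIS1 must be a completed interview
    & (persons["proxy_interview"] == 0)          # no proxy interviews at either wave
    & (persons["household_moved"] == 0)          # occupants stayed at the sample address
]

# Among TIS2 outcomes, keep completed interviews plus the two Type-Z
# noninterview categories analyzed here: refusals and "never available."
subset = subset[subset["tis2_outcome"].isin(
    ["interview", "refused", "never_available"]
)]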
Application of these selection criteria resulted in a subset of 32,612 person-level records. While many of the data contained in the models presented from this perspective are similar to those presented in the previous chapter, the 2 samples are independent of one another; therefore, descriptive statistics for the current sample are provided below, starting with the dependent variable.

Dependent variable

For the current perspective respondent fatigue is measured using Type-Z noninterviews. This includes situations where a respondent 1) refuses to be interviewed outright, or 2) avoids the interviewer by never being available to participate in the interview, and is coded as 0 (interview) or 1 (noninterview). Most of the 32,612 respondents in the current investigation (97%) participated in an interview at TIS2 (see Table 6).

Independent variables

Independent variables included in this perspective on respondent fatigue are survey mode and the number of victimizations reported during a respondent's initial interview. It is important to include these variables because they have been shown to have an effect on survey participation in the survey nonresponse literature (Dillman, Eltinge, Groves & Little, 2002; Finkelhor et al., 1995; Groves & Couper, 1992, 1993, 1998; Harris-Kojetin & Tucker, 1998; Johnson, 1988; Lepkowski & Couper, 2002; Madans, Kleinman, Cox, Barbano, Feldman, Cohen, et al., 1986).

Victimizations, or the number of victimizations reported during a respondent's initial interview, is a continuous variable ranging from 0 to 7. Higher scores indicate more reported victimizations during an initial interview.
For respondents reporting victimizations, the mean number of victimizations reported at TIS1 was 1.3 with a 0.6 standard deviation. Survey mode is coded as 0 (telephone) or 1 (face-to-face) to reflect the mode of interview individuals experienced during their initial interview. The majority of TIS1 interviews (71%) were conducted face-to-face.

Table 6. Descriptive statistics for the second perspective.

Variables                              M     SD      %    Min.  Max.
Dependent variable
  Respondent fatigue (TIS2)                                0     1
    Interview                                      96.5
    Noninterview                                    3.5
Instrument-level characteristics
  Reported victimizations (TIS1)                           0     7
    No                                             89.9
    Yes                                            10.1
    Number of victimizations           1.3   0.6
  Survey mode (TIS1)                                       0     1
    Telephone                                      29.0
    Face-to-face                                   71.0
Respondent-level characteristics
  Age (in years)                      43.9  18.1          12    90
  Gender                                                   0     1
    Male                                           45.7
    Female                                         54.3
  Race/ethnicity (dummy variables)                         1     4
    White non-Hispanic                             77.6
    Black non-Hispanic                             10.2
    Other non-Hispanic                              3.6
    Hispanic, any race                              8.7
  Marital status                                           1     5
    Married                                        58.7
    Never married                                  24.1
    Widowed                                         6.5
    Divorced                                        8.7
    Separated                                       1.9
  Educational attainment (in years)   13.2   3.5           0    19

Note: Data file is 1996 to 1999 longitudinally linked National Crime Victimization Surveys. Statistics reflect weighted data. Unweighted n = 32,612.
Conversely, most interviews at TIS2 (87%) were conducted over the telephone. In addition to survey-design or instrument-level characteristics, respondent-level characteristics are included in the models as control variables.

Control variables

Past studies show age, gender, race and ethnicity, marital status, and education are correlated with survey participation (see Groves & Couper, 1998). Therefore, it is important to consider these variables when examining the survey-design effects of contemporary self-report victim surveys on participation. Excluding them would also risk model misspecification. More importantly, however, since similar demographic characteristics are correlated with victimization (e.g., see Catalano, 2004, 2005; Rennison & Rand, 2003), it is important to know whether these factors also contribute to nonresponse, given the implications this would have on the production of victimization estimates for some groups.

Demographic variables considered in the second perspective are identical to those used in the first. They include the respondent's age, gender, race and Hispanic origin, as well as marital status and educational attainment. Age is a continuous variable ranging from 12 to 90. On average, respondents were reportedly about 44 years in age at the time of their initial interview. Gender is coded as 0 (male) or 1 (female). Most respondents represented in the current sample are female (54%). Race and Hispanic origin is captured through a set of 4 dummy variables: white non-Hispanic (78%), black non-Hispanic (10%), other non-Hispanic (4%), and Hispanic, any race (9%).(28)

28. See footnote 22 on page 53.
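As a brief sketch of the reference-category coding used for race and Hispanic origin (and, below, for marital status), the dummy variables can be generated in Python with pandas; the category labels are illustrative rather than the file's actual codes.

import pandas as pd

df = pd.DataFrame({"race_ethnicity": [
    "white_nh", "black_nh", "other_nh", "hispanic", "white_nh"]})

# One indicator per category; dropping the white non-Hispanic column
# makes it the excluded (reference) category absorbed by the constant.
dummies = pd.get_dummies(df["race_ethnicity"], prefix="race")
dummies = dummies.drop(columns=["race_white_nh"])
print(dummies)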
For the multivariate models that follow, white non-Hispanic is the excluded category. Marital status is also captured using a set of dummy variables: married (59%), never married (24%), widowed (7%), divorced (9%) and separated (2%). Married serves as the reference category. Finally, educational attainment is a continuous variable measuring the years of completed formal education. It ranges from 0 (no formal education) to 19. On average, respondents reportedly completed 13 years of formal education at the time of their initial interview.

Results

Do survey instrument characteristics influence respondents' decision to participate in self-report victim surveys? Table 7 presents a series of regression models that evaluate respondent fatigue in self-report victim surveys and that control for individual characteristics. Except for a difference in the dependent variable used and the unit of analysis, these models are similar to those produced by Lehnen and Reiss (1978a, 1978b) and to that which was presented in the previous chapter. For example, Panel A in Table 7 offers a basic model examining the effect of the number of reported victimizations at TIS1 on a respondent's subsequent willingness to participate at TIS2. Findings show that the number of previously reported victimizations is a predictor of subsequent nonresponse. That is, respondents who report victimization at TIS1 are more likely to refuse to participate at TIS2 than respondents who report no victimization (b = .17).

Panel B evaluates the effects of two survey characteristics, survey mode and prior victimizations, on subsequent nonresponse. Like the model in Panel A, this model demonstrates a positive effect of prior reported victimization on subsequent nonresponse (b = .17).
Table 7. Partially specified survey-weighted logistic regression predicting nonresponse(a) at TIS2.

Panel A
Variables                            b       SE      Wald      Exp(b)
Reported victimizations (TIS1)        0.17   0.08      4.34*    1.19
Constant                             -3.32   0.05   4540.16*    0.00
-2 Log-Likelihood                   9802.17
Nagelkerke R-squared                   0.00*

Panel B
Variables                            b       SE      Wald      Exp(b)
Reported victimizations (TIS1)        0.17   0.08      4.42*    1.19
Survey mode (TIS1)
  Telephone (reference)
  Face-to-face                       -0.45   0.07     43.96*    0.64
Constant                             -3.02   0.06   2330.48*    0.05
-2 Log-Likelihood                   9745.67
Nagelkerke R-squared                   0.01*

Panel C
Variables                            b       SE      Wald      Exp(b)
Age                                  -0.02   0.00     54.11*    0.98
Gender
  Male (reference)
  Female                             -0.55   0.07     63.72*    0.58
Race/ethnicity (dummy variables)
  White non-Hispanic (reference)
  Black non-Hispanic                  0.62   0.12     28.20*    1.86
  Other non-Hispanic                  0.47   0.18      6.72*    1.59
  Hispanic, any race                  0.48   0.14     10.43*    1.61
Marital status (dummy variables)
  Married (reference)
  Never married                       0.09   0.09      0.99     1.09
  Widowed                            -1.02   0.29     12.57*    0.36
  Divorced                           -0.52   0.15     12.00*    0.59
  Separated                          -0.82   0.31      6.95*    0.44
Educational attainment               -0.01   0.01      0.60     0.99
Constant                             -2.37   0.19    149.25*    0.09
-2 Log-Likelihood                   9393.56
Nagelkerke R-squared                   0.05*

Note: Data file is 1996 to 1999 longitudinally linked National Crime Victimization Surveys.
a. Nonresponse is coded (0,1) where participating in an interview equals 0 and nonresponse equals 1. Unweighted n = 32,612.
* p < .05
In addition, findings show a negative effect of survey mode on nonresponse (b = -.45). That is, respondents who report victimization during TIS1 are more likely to refuse to participate at TIS2, net of the effect of survey mode, than respondents who report no victimization. In addition, persons interviewed in person are less likely to refuse to participate during the following enumeration than those interviewed via the telephone at TIS1, even when controlling for whether prior victimization is reported. These findings demonstrate that rapport established between the field representative and the respondent during an in-person interview matters significantly.

The second research question asks, "Are individual demographic characteristics significant predictors of whether a respondent will participate in self-report victim surveys, independent of survey-design effects?" Panel C in Table 7 presents findings from a regression model evaluating the predictive value of respondent demographics on nonresponse. Panel C shows that nearly all of the respondent demographics included in the model exert an effect on the probability of nonresponse at TIS2. For example, age demonstrates a negative effect on nonresponse at TIS2 (b = -.02). This means that younger persons are more likely to refuse to participate during TIS2 than older respondents. Gender also exerts a negative effect on future nonresponse (b = -.55), demonstrating that nonresponse at TIS2 is less likely among female than male respondents. Net of other individual characteristics, black non-Hispanics (b = .62), other non-Hispanics (b = .47) and Hispanics of any race (b = .48) are more likely than white non-Hispanics to refuse to participate during TIS2. Findings in Panel C also demonstrate that widowed (b = -1.02), divorced (b = -.52) or separated (b = -.82) respondents are less likely to refuse to participate at TIS2 than married persons.
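The Exp(b) column in these tables is the odds ratio implied by each coefficient. Taking the Panel B survey-mode estimate as a worked example,

    \exp(b) = \exp(-0.45) \approx 0.64,

meaning the odds of nonresponse at TIS2 for respondents interviewed face-to-face at TIS1 are roughly 36 percent lower than the odds for respondents interviewed by telephone.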
No difference between married and never-married respondents' likelihood of nonresponse at TIS2 is measured. Similarly, educational attainment fails to predict nonresponse at TIS2. As in the first perspective, these findings not only demonstrate that respondent characteristics are a potential source of nonresponse bias in self-report victim surveys, but also illustrate the need for incorporating these factors in more robust models assessing fatigue bias.

The final question states, "What is the relative influence of instrument and individual characteristics on interview participation in self-report victim surveys when considered together?" Table 8 presents regression output from a fully specified model containing both instrument- and respondent-level indicators. Findings show that once respondent demographics are accounted for, the number of victimizations reported during TIS1 no longer predicts future survey nonresponse, and offer no support for the hypothesis that exposure to a longer survey instrument during an initial self-report victim survey interview results in subsequent nonresponse. In short, this facet of the survey design does not appear to produce respondent fatigue.

Controlling for individual- and instrument-level characteristics, survey mode continues to exert a negative effect on nonresponse at TIS2 (b = -.32). Specifically, respondents interviewed in person at TIS1, compared to respondents interviewed via the phone at TIS1, still are less likely to refuse to participate at TIS2. With few exceptions, the effects of demographic characteristics on future nonresponse do not change when controls for instrument characteristics are added to the model. One change that does emerge, however, is the positive effect that never being married has on nonresponse (b = .07). Persons who reportedly have never married are more likely to refuse to participate at TIS2 than persons who are reportedly married.
A second change measured applies to widowed persons. In Panel C of Table 7, findings suggest that widowed (b = -1.02) persons are less likely to refuse to participate at TIS2 than married respondents. In Table 8, however, the sign of the coefficient for widowed respondents flips.

Table 8. Fully specified survey-weighted logistic regression predicting nonresponse(a) at TIS2.

Variables                            b       SE      Wald     Exp(b)
Reported victimizations (TIS1)        0.08   0.09      0.87    1.09
Survey mode (TIS1)
  Telephone (reference)
  Face-to-face                       -0.32   0.07     21.51    0.73
Age                                  -0.02   0.00     46.50    0.98
Gender
  Male (reference)
  Female                             -0.53   0.07     60.12    0.59
Race/ethnicity (dummy variables)
  White non-Hispanic (reference)
  Black non-Hispanic                  0.64   0.12     29.78    1.89
  Other non-Hispanic                  0.49   0.18      7.46    1.63
  Hispanic, any race                  0.48   0.14     11.70    1.61
Marital status (dummy variables)
  Married (reference)
  Never married                       0.07   0.09      0.58    1.07
  Widowed                             1.02   0.29     12.49    2.76
  Divorced                           -0.51   0.15     11.44    0.60
  Separated                          -0.82   0.31      6.89    0.44
Educational attainment               -0.01   0.01      1.14    0.99
Constant                             -2.19   0.20    122.01    0.11
-2 Log-Likelihood                   9365.38
Nagelkerke R-squared                   0.05*

Note: Data file is 1996 to 1999 longitudinally linked National Crime Victimization Surveys.
a. Nonresponse is coded (0,1) where participating in an interview equals 0 and nonresponse equals 1. Unweighted n = 32,612.
* p < .05
This could represent a degree of multicollinearity between this and other variables included in the model.(29)

Thus far, models demonstrate the significance of survey mode on future nonresponse. Regression models in Table 9 evaluate whether the observed effects in the fully specified model shown in Table 8 differ by the survey mode to which respondents were exposed during TIS1. The first set of findings presented in Table 9 is based on models only for persons interviewed in person during TIS1, whereas the second regression output in Table 9 offers findings for respondents who were interviewed over the telephone during TIS1. Results from Table 9 demonstrate that once individual characteristics of respondents are accounted for, the number of reported victimizations measured at TIS1 is not related to nonresponse at TIS2. This finding holds regardless of the mode of surveying during TIS1. Consistent with earlier models presented, and regardless of the survey mode, younger persons are more likely to refuse to participate during TIS2 than older respondents. And like earlier models, females are less likely to refuse to participate than males at TIS2, regardless of survey mode. Again, regardless of survey mode, findings show that black non-Hispanics are more likely not to participate at TIS2 than are white non-Hispanics. However, survey mode appears to play a key role in respondents' decisions to participate for some demographic groups. Survey mode makes a difference for Hispanics and other non-Hispanics with respect to their decision to participate.

29. It may also indicate that the model is misspecified, which could also account for the low amount of explained variance associated with this model. A more in-depth discussion of all the models' low levels of explained variance is addressed in the final chapter.
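For reference, the Nagelkerke R-squared reported with each model is the Cox-Snell measure rescaled so that its maximum is 1. With L_0 the likelihood of the intercept-only model, L_M the likelihood of the fitted model, and n the sample size, the standard definitions are

    R^2_{CS} = 1 - \left( \frac{L_0}{L_M} \right)^{2/n}, \qquad
    R^2_{N} = \frac{R^2_{CS}}{1 - L_0^{\,2/n}},

which is why the statistic can remain small even when individual coefficients are significant.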
Table 9. Survey-weighted logistic regression predicting nonresponse(a) at TIS2 by survey mode.

Face-to-Face Survey
Variables                            b       SE     Wald     Exp(b)
Reported victimizations (TIS1)        0.03   0.10     0.11    1.04
Age                                  -0.02   0.00    43.45*   0.98
Gender
  Male (reference)
  Female                             -0.61   0.09    44.21*   0.54
Race/ethnicity (dummy variables)
  White non-Hispanic (reference)
  Black non-Hispanic                  0.74   0.14    26.93*   2.09
  Other non-Hispanic                  0.38   0.22     2.84    1.46
  Hispanic, any race                  0.59   0.15    16.33*   1.80
Marital status (dummy variables)
  Married (reference)
  Never married                      -0.04   0.11     0.11    0.96
  Widowed                            -0.97   0.33     8.65*   0.38
  Divorced                           -0.56   0.19     9.04*   0.57
  Separated                          -0.87   0.39     5.53*   0.42
Educational attainment               -0.01   0.01     0.75    0.99
Constant                             -2.37   0.24    99.41*   0.09
-2 Log-Likelihood                   -6005.35
Nagelkerke R-squared                   0.04*

Telephone Survey
Variables                            b       SE     Wald     Exp(b)
Reported victimizations (TIS1)        0.15   0.16     0.93    1.16
Age                                  -0.01   0.00     6.31*   0.99
Gender
  Male (reference)
  Female                             -0.41   0.11    14.94*   0.66
Race/ethnicity (dummy variables)
  White non-Hispanic (reference)
  Black non-Hispanic                  0.46   0.19     5.94*   1.58
  Other non-Hispanic                  0.66   0.25     6.94*   1.93
  Hispanic, any race                  0.22   0.22     1.00    1.24
Marital status (dummy variables)
  Married (reference)
  Never married                       0.27   0.14     3.78    1.30
  Widowed                            -1.11   0.60     3.38    0.33
  Divorced                           -0.42   0.29     2.06    0.66
  Separated                          -0.72   0.60     1.46    0.48
Educational attainment               -0.01   0.01     0.63    0.99
Constant                             -2.48   0.28    75.92*   0.08
-2 Log-Likelihood                   -3431.25
Nagelkerke R-squared                   0.03

Difference between coefficients(b): reported victimizations -0.62; age -1.82; female -1.42; black non-Hispanic 1.17; other non-Hispanic -0.84; Hispanic, any race 1.42; never married -1.70; widowed 0.20; divorced -0.42; separated -0.20; educational attainment 0.05; constant 0.30.

Note: Data file is 1996 to 1999 longitudinally linked National Crime Victimization Surveys.
a. Nonresponse is coded (0,1) where participating in an interview equals 0 and nonresponse equals 1.
b. See Brame, Paternoster, Mazerolle & Piquero (1998).
* p < .05
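The coefficient differences in Table 9 can be reproduced with a few lines of code. The sketch below implements the Brame, Paternoster, Mazerolle & Piquero (1998) z-test for the equality of two coefficients estimated on independent samples; the example uses the rounded face-to-face and telephone estimates for Hispanic respondents, so the result differs slightly from the 1.42 reported in the table.

import math
from scipy import stats

def coef_difference_z(b1, se1, b2, se2):
    """z-test for the equality of two regression coefficients
    estimated on independent samples (Brame et al., 1998)."""
    z = (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)
    p = 2 * (1 - stats.norm.cdf(abs(z)))  # two-tailed p-value
    return z, p

# Hispanic, any race: face-to-face model vs. telephone model (Table 9).
z, p = coef_difference_z(0.59, 0.15, 0.22, 0.22)
print(f"z = {z:.2f}, p = {p:.3f}")  # about z = 1.39 with rounded inputs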
A positive effect is found for face-to-face surveys of Hispanic respondents. When interviewed in person at TIS1, Hispanic respondents are more likely to refuse to participate in TIS2 than white non-Hispanics. In contrast, when interviewed over the phone at TIS1, other non-Hispanics are more likely to refuse to participate at TIS2. Differences across the survey-mode models are also found for marital status. Among those interviewed in person during TIS1, married persons are more likely to refuse to participate at TIS2 than are never married, widowed, divorced or separated respondents. In contrast, marital status does not predict future nonresponse when the survey at TIS1 is conducted over the phone.

Significant predictors of future nonresponse for respondents who are interviewed initially by telephone, and those interviewed initially in person, are noted above. A useful question to ask is whether the coefficients in the two survey-mode models differ significantly. The coefficient differences reported in Table 9 are z-tests, used to assess measurable differences between coefficients (Brame, Paternoster, Mazerolle & Piquero, 1998). Findings demonstrate that despite apparent differences between coefficients in the two models, none reached the level of statistical significance. Collectively, findings provide sufficient information to evaluate the research hypotheses presented in this perspective.

Conclusions

The current study demonstrates that certain survey-instrument characteristics associated with contemporary self-report victim surveys, such as the number of victimizations reported during an initial interview, do not influence a respondent's decision to participate. Based on these results, we fail to reject the first null hypothesis: No relationship between nonresponse and victimization reported during initial interviews exists.
Results from the current perspective do, however, permit the rejection of the second null hypothesis in favor of its alternative: The likelihood that subsequent interviews will result in nonresponse is affected by survey mode. Thus the current study demonstrates that other survey-instrument characteristics, such as the way a survey is administered, can influence a respondent's decision to participate.

The objective of the current study was to examine the issue of respondent fatigue in light of an improved dependent variable. The lack of support for a respondent fatigue argument is a key finding. However, other important findings have implications for self-report victim surveys. As noted above, findings show survey mode matters greatly. The effect of survey mode on future nonresponse is important to consider in terms of exposure to the survey. A majority of TIS1 interviews are conducted in person (71%). In contrast, about 87% of TIS2 surveys are conducted via the telephone. Given the increase in the proportion of surveys conducted over the phone between TIS1 and TIS2, it should come as no surprise that nonresponse increases over time. Therefore, administrative cost-saving strategies that include relying on more telephone interviews in lieu of in-person interviews should expect a corresponding increase in nonresponse, and a possible increase in the risk of introducing bias due to respondent fatigue, if the victim surveys are administered longitudinally.

Like victimization in general, demographic characteristics such as age, gender, and race and Hispanic origin are predictors of noninterview. If demographic characteristics are linked to nonresponse and to victimization, victimization estimates for these groups could be underestimated. By identifying the influences of demographics on nonresponse, specific efforts can be made to retain these individuals in future data collection efforts.
For example, since results from the previous chapter suggest that survey-design effects are associated with an increased likelihood of reported victimization among younger respondents, and similar effects are linked to an increased likelihood of nonresponse among the same group, additional training could be provided to interviewers that not only raises their awareness of the potential impact of survey-design effects on particular subgroups of the population but that also provides them with unique strategies for preventing nonresponse for specific demographic groups.

While the current perspective offers several advantages over prior investigations of respondent fatigue thought to be associated with self-report victim surveys, findings should not be viewed as comprehensive. Although an improved operational measure of fatigue is introduced, analyses are limited to only the first 2 waves of victim surveys. The logical next step is to extend the current viewpoint by examining respondent fatigue that manifests in the form of nonresponse over multiple waves of interviews. Perhaps by incorporating multiple waves of data, a "test wise" effect such as those observed in past research may emerge (see Lehnen & Reiss, 1978a). That is, respondent fatigue could be a process that occurs over time, which does not appear until after a second interview. Only through continued empirical investigation can we better understand the nature and extent of respondent fatigue believed to manifest in victim surveys due to certain survey-design effects.
Perspective 3: Assessing Respondent Fatigue over Multiple Waves of Self-Report Victim Surveys

Figure 4. Key elements of the third perspective.
  Perspective 1: Uses contemporary NCVS data; examines respondent fatigue and survey-design effects; uses individuals as the unit of analysis.
  Perspective 2: Uses nonresponse as the measure of fatigue; focuses on first and second interviews only; uses individuals as the unit of analysis.
  Perspective 3: Uses nonresponse as the measure of fatigue; focuses on multiple waves of interviews; integrates theoretical concepts of household nonresponse.

The third perspective provides insight into respondent fatigue believed to be associated with contemporary self-report victim surveys assessed over several waves of interviews, using nonresponse as the operational measure of fatigue. This approach brings the issue of respondent fatigue full circle. It combines the strategy of examining respondent fatigue from a survey-design perspective with an arguably more appropriate operational measure, while integrating a formal theoretical perspective on nonresponse. Groves and Couper's (1998) conceptual framework for nonresponse in household interview surveys provides the foundation upon which the integration of the first two perspectives is built. Specifically, factors out of the researcher's control (i.e., the social environment factors and household attributes) that influence nonresponse, as well as those factors under the researcher's control (i.e., survey-design features), are used to explain variation in nonresponse across multiple waves of victim surveys.
Objectives

The objective of the final strategy is to flesh out the relationship between survey-design effects of contemporary self-report victim surveys and respondent fatigue from a more theoretically robust viewpoint. Like the other perspectives, the current study relies on answers to a series of research questions to attain this goal. First, do survey-design characteristics (i.e., the number of prior interviews, the number of prior reported victimizations, and survey mode(30)) influence the likelihood a respondent will participate in self-report victim surveys, independent of other factors? Second, do social environment factors (i.e., household income, home ownership, whether the respondent's home is a single- or multi-unit structure, whether or not the respondent operates a home business from their residence, and urbanicity) affect the likelihood a respondent will participate in self-report victim surveys, independent of other factors? Third, do household attributes such as the number of children or number of adults residing in a home affect the likelihood a respondent will participate in self-report victim surveys, independent of other factors? And finally, what is the relative influence of survey-design, social environment and household attributes on nonresponse during multiple waves of self-report victimization surveys when considered together? Stated formally, the current study tests the following 3 research hypotheses:

30. The survey-delivery method (i.e., face-to-face, telephone, or nonresponse) used during the respondent's interview immediately prior to the current interview.
H1: Respondents are more likely not to participate in current interviews if they participated in prior interviews, net of other relevant predictors of nonresponse.
H0: No relationship exists between the likelihood that respondents participate in current interviews and the number of prior interviews in which respondents participated, while controlling for other relevant predictors of nonresponse.

H2: Respondents are more likely not to participate in current interviews if they reported victimization during prior interviews, while controlling for other relevant predictors of nonresponse.
H0: No relationship exists between the likelihood that a respondent will participate during current interviews and the number of previously reported victimizations, while controlling for other relevant predictors of nonresponse.

H3: The likelihood that respondents will participate during current interviews is affected by the mode in which the survey immediately prior to the current survey is conducted, while controlling for other relevant predictors of nonresponse.
H0: Survey mode does not affect the likelihood that respondents will participate during current interviews, while controlling for other relevant predictors of nonresponse.

As with the previous studies, the analytic strategy used is the same. Analyses are conducted using a series of survey-weighted logistic regression models (StataCorp, 2003). The initial model explores the influence of survey-design factors on individual nonresponse. Specifically, the model considers the effects that prior interviews, the number of prior reported victimizations, and the survey mode of a respondent's most recent interview have on nonresponse. Two similar models follow. The first model considers the influence of social environment factors on nonresponse, independent of all other factors. The next model considers only household attribute predictors of nonresponse. Finally, a model that explores the influence of survey-design, social environment, and household attribute effects on nonresponse is presented. A description of the analytic results for each of the aforementioned models follows.
Information obtained from the final model is used to assess the above hypotheses. Before presenting the results of these models, however, a description of the measures used is offered.

Measures

As with the other perspectives, modifications were made to the original NCVS Longitudinal Data File.(31) First, variation in the number of prior interviews is required to assess the importance of survey-design features (i.e., repeated exposure to survey instruments). Selecting any single panel from the file would not suffice, because there would be no variation in the number of prior interviews among respondents selected. Conversely, using every panel from the file would result in repeated measures of the same respondents, which is also undesirable. Therefore, a simple random sample of 1/18 of all cases was chosen, resulting in a cross-section of the data comprised of various times-in-sample. This process produced a subset of data approximately equal to the amount of all interviews conducted during any given quarter (i.e., similar in size to a survey panel). Second, initial interviews (i.e., TIS1 interviews) were excluded, since the effect that the mode of the previous interview has on nonresponse cannot be assessed for them. Also, among noninterviews, only Type-Z noninterviews in which the respondent refused to be interviewed or was never available were included. Application of these selection criteria resulted in a subset of 10,338 person-level records for analysis. Each variable included in the models below is described in greater detail in the following sections.

31. See Chapter Three for detailed information concerning the NCVS Longitudinal Data File.
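The 1/18 simple random sample described above can be expressed in a single pandas call; the file name is hypothetical and the seed is arbitrary.

import pandas as pd

persons = pd.read_csv("ncvs_longitudinal_persons.csv")  # hypothetical extract

# A simple random 1/18 sample of person records yields a cross-section
# of respondents at various times-in-sample, roughly the size of one
# quarterly panel.
cross_section = persons.sample(frac=1 / 18, random_state=42)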
Dependent variable

Groves and Couper's (1998) theory of nonresponse in household interview surveys provides the conceptual framework for examining respondent fatigue from the current perspective (Figure 5). Thus, the presence or absence of an interview is used as the dependent variable. Specifically, respondent fatigue is measured using Type-Z noninterviews, which include 1) refusing to be interviewed outright, or 2) avoiding the interviewer by never being available to participate in the interview. The dependent variable is coded as 0 (interview) or 1 (noninterview). Most of the 10,338 respondents in the current investigation (94%) completed their current interview (see Table 10).

Independent variables

Groves and Couper (1998) argue that survey-design, social environment, and household attributes are determinant factors of survey participation. A series of independent variables are used in the current study to assess the relative influence of each of these concepts. For example, the number of prior interviews in which a respondent participated (prior interviews), the total number of victimizations reported during a respondent's prior interviews (prior victimizations), and the mode in which the survey most recent to the respondent's current interview was conducted (survey mode) are used to assess the predictive power of survey design on individual nonresponse. Prior interviews reflects the number of interviews in which a respondent participated prior to their current interview, and ranges from 1 to 6. It is captured using a set of 6 dichotomous variables, with 1 prior interview as the reference category. Prior victimizations, or the number of self-reported victimizations reported during interviews prior to the respondent's current interview, is captured through a set of 4 response categories: "0" indicates no victimizations reported during prior interviews, "1" indicates 1 victimization reported, "2" indicates 2 victimizations, and "3 or more" indicates 3 or more victimizations. The reference category is "0".
Figure 5. Groves and Couper's (1998) conceptual framework for survey cooperation.

  Social environment: economic conditions; survey-taking climate; neighborhood characteristics.
  Household(er): household structure; socio-demographic characteristics; psychological predisposition.
  Survey design: topic; mode of administration; respondent selection.
  Interviewer: socio-demographic characteristics; experience; expectations.

In the framework, these factors jointly shape the householder-interviewer interaction, which in turn produces the decision to cooperate or refuse.
Table 10. Descriptive statistics for the third perspective.

Variables                                M     SD      %    Min.  Max.
Dependent variable
  Current interview                                          0     1
    Nonresponse                                      6.7
    Completed interview                             93.6
Survey-design variables
  Prior interviews (dummy variables)                         1     6
    1 (reference)                                   21.8
    2                                               17.9
    3                                               17.5
    4                                               15.4
    5                                               14.8
    6                                               12.5
  Prior victimizations (dummy variables)                     0     3
    0 (reference)                                   82.1
    1                                               12.5
    2                                                3.6
    3 or more                                        1.8
  Survey mode(a) (dummy variables)                           0     2
    Non-interview                                    6.8
    Face-to-face                                    23.7
    Telephone                                       69.6
Social environment variables
  Household income (dummy variables)                         1     5
    Less than $20,000                               22.9
    $20,000 to $34,999                              21.4
    $35,000 to $49,999                              19.2
    $50,000 to $74,999                              18.9
    $75,000 and over                                17.5
  Home ownership                                             0     1
    Rents                                           20.1
    Owns                                            79.9
  Single-structure home                                      0     1
    No                                              16.8
    Yes                                             83.2
  Home business                                              0     1
    No                                              91.9
    Yes                                              8.1
  Urbanicity                                                 0     1
    Urban                                           25.6
    Rural                                           74.4
Table 10 (continued).

Variables                                M     SD      %    Min.  Max.
Household attribute variables
  Adults
    Household members 12 years and older 2.6   1.2           1    11
  Children
    Household members younger than 12    0.5   0.9           0     7
  Age                                   45.1  19.0          12    90
  Gender                                                     0     1
    Male                                            46.1
    Female                                          53.9
  Race/ethnicity (dummy variables)                           1     4
    White non-Hispanic (reference)                  76.1
    Black non-Hispanic                              10.1
    Other non-Hispanic                               3.7
    Hispanic, any race                              10.1
  Marital status (dummy variables)                           1     5
    Married (reference)                             57.2
    Never                                           24.9
    Widowed                                          7.9
    Divorced                                         8.1
    Separated                                        1.9
  Educational attainment                13.2   3.5           0    19

Note: Data file is 1996 to 1999 longitudinally linked National Crime Victimization Surveys. Statistics reflect weighted data. Unweighted n = 10,338.
a. Survey mode refers to the mode in which the survey immediately prior to the current interview opportunity was conducted. For example, if the current interview is the respondent's fourth, survey mode refers to the mode in which the respondent's third interview was conducted.

The majority of respondents (82%) did not report victimization prior to their current interview. The final variable used to measure the effects of survey-design features is survey mode. It is coded as 0 (telephone interview), 1 (face-to-face interview), and 2 (noninterview) and reflects the mode of interview experienced by the respondent during the time-in-sample immediately prior to the respondent's current interview.
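Because the statistics in Table 10 reflect weighted data, each mean and percentage incorporates the person-level survey weight. A weighted mean or percentage can be computed as in the short sketch below; the column names are placeholders rather than the file's actual variable names.

import numpy as np
import pandas as pd

df = pd.read_csv("perspective3_subset.csv")  # hypothetical analysis file

# Weighted mean age: respondents contribute in proportion to their weights.
weighted_mean_age = np.average(df["age"], weights=df["weight"])

# Weighted percentage female: weighted mean of a 0/1 indicator.
pct_female = 100 * np.average((df["gender"] == 1).astype(float),
                              weights=df["weight"])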
Most interviews conducted prior to the respondent's current interview (70%) were conducted over the telephone.

Social-environment influences on individual nonresponse are also included in the analyses because they have been shown to influence nonresponse (see Groves and Couper, 1998). For example, a respondent's household income (household income), whether a respondent rents or owns their home (home ownership), lives in a single- or multi-unit structure (single-structure), operates a home-based business (home business), and whether a respondent's home is located in an urban or rural area (urbanicity) are examined in order to assess the influence that social environment has on respondents' decisions to participate in self-report victim surveys. Household income is captured through a set of 5 dichotomous variables: Less than $20,000 (23%), $20,000-$34,999 (21%), $35,000-$49,999 (19%), $50,000-$74,999 (19%), and $75,000 and over (18%). For the multivariate models that follow, Less than $20,000 serves as the reference category. Home ownership is a dichotomous variable coded 0 (rents) or 1 (owns). Most of the respondents in the current sample indicated that they own or are in the process of buying their residence (80%). Single structure is also a dichotomous variable, where 0 reflects instances in which the respondent lives in a multi-unit structure and 1 reflects those cases in which the respondent resides in a single-structure home. Eighty-three percent of respondents live in a single-structure home. Home business is also a dichotomous variable coded 0 (no) or 1 (yes). This variable reflects whether a home business is reportedly operated from the residence. According to information collected during the current interview, about 1-in-10 sampled households operate a home-based business.
Finally, urbanicity is a social environment factor and reflects whether a respondent's home is located in an urban or rural area. Most respondents' homes are located in rural areas (74%).

In addition, Groves and Couper (1998) demonstrate the effects of household attributes on nonresponse; therefore, these factors are also incorporated in the models below. For example, the number of household members 12 years and older (adults) as well as the number of household members younger than 12 years of age (children) are examined in order to assess the relative effect each has on nonresponse. Adults is a continuous variable and ranges from 1 to 11. On average, there were between 2 and 3 adult household members reportedly residing in respondents' households at the time of their current interview. Children is also a continuous variable and ranges from 0 to 7. Each sampled household had about 1 member who was younger than 12 years of age at the time of the current interview.

Demographic factors are also considered and include age, gender, race and Hispanic origin, marital status and educational attainment. Age is a continuous variable ranging from 12 to 90. Respondents' average age was about 45 years at the time of the current interview. Gender is coded as 0 (male) or 1 (female); most respondents in the sample are female (54%). Race and Hispanic origin is captured through a set of 4 dichotomous variables: white non-Hispanic (76%), black non-Hispanic (10%), other non-Hispanic (4%), and Hispanic, any race (10%). For the multivariate models that follow, white non-Hispanic is the reference category.(32) Marital status is captured using a set of 5 dichotomous variables: married (57%), never married (25%), widowed (8%), divorced (8%) and separated (2%). Married serves as the reference category.

32. See footnote 22 on page 53.
92 divorced (8%) and separated (2%). Married serves as the refere nce category. Finally, educational attainment is a continuous variable measur ing the years of completed formal education and ranges from 0 (no formal edu cation) to 19 years. Years of education completed averages about 13 years of formal education completed for all respondents. Results Do survey-design characteristics affect nonr esponse in self-repor t victim surveys, independent of other factors? The initial survey-weighted logistic regression model is presented in Table 11. Findings show that absent other factors unrelated to survey design, the number of prior interviews ha s a negligible effect on nonresponse. Specifically, when respondents participate in 5 prior interviews, they are more likely not to participate in their current interview than when they have not participated in any prior interviews ( b = .37). Paradoxically, however, those with 6 prior interviews are somewhat less likely not to participate in their current interview th an those respondents with no prior interviews ( b = -.35; p < .10). No other substantive relationship between the number of prior interviews and nonrespons e is observed in the first model. Results examining the relationship betw een prior reported victimization and nonresponse provide slightly more support for the notion that respondent fatigue manifests in self-report victim surveys as nonresponse. That is, respondents who report a total of 2 victimizations ( b = .37, p < .10) or 3 or more victimizations ( b = .45, p < .10) during prior interviews are somewhat more likely not to participate during their current 32 See footnote 22 on page 53.
Table 11. Partially specified survey-weighted logistic regression using survey-design effects to predict nonresponse(a) over multiple waves of interviews.

Variables                              b       SE      Wald     Exp(b)
Survey-design variables
  Prior interviews (dummy variables)
    1 (reference)
    2                                   0.17   0.15      1.30    1.19
    3                                   0.01   0.15      0.01    1.01
    4                                  -0.07   0.16      0.20    0.93
    5                                   0.37   0.17      4.57    1.44
    6                                  -0.35   0.20      3.12**  0.71
  Prior victimizations (dummy variables)
    0 (reference)
    1                                   0.06   0.13      0.18    1.06
    2                                   0.37   0.21      3.30**  1.45
    3 or more                           0.45   0.25      3.22**  1.57
  Survey mode(b) (dummy variables)
    Non-interview (reference)
    Telephone                          -1.52   0.14    122.04    0.22
    Face-to-face                       -1.64   0.11    219.55    0.19
Constant                               -1.19   0.14     77.63    0.30
-2 Log-Likelihood                     2417.58
Nagelkerke R-squared                     0.02*

Note: Data file is 1996 to 1999 longitudinally linked National Crime Victimization Surveys.
a. Nonresponse is coded (0,1) where participating in an interview equals 0 and nonresponse equals 1.
b. Survey mode refers to the mode in which the survey immediately prior to the current interview opportunity was conducted. For example, if the current interview is the respondent's fourth, survey mode refers to the mode in which the respondent's third interview was conducted.
Unweighted n = 10,338. * p < .05. ** p < .10.

Again, these results could provide support for the second research hypothesis, if the relationship is maintained in later models. The seemingly most profound survey-design effect identified in the initial model is associated with survey mode. The manner in which the survey prior to the respondent's current survey is conducted is a strong predictor of whether a respondent's interview during the current wave will result in nonresponse.
Specifically, respondents whose previous time-in-sample interview is over the telephone (b = -1.52) or in person (b = -1.64) are less likely to have their current interview result in a nonresponse than respondents who do not participate in the interview during the previous wave. However, a test of the difference between the telephone and face-to-face coefficients produced by the model reveals no significant difference. The apparent influence of survey mode on nonresponse therefore has less to do with the type of interview in which a respondent participates prior to their current interview and more to do with whether or not the respondent participates during their previous interview.

The current perspective also seeks answers to the question, "Do social environment factors affect nonresponse in self-report victim surveys, independent of other factors?" Table 12 provides results from the second survey-weighted logistic regression model. Findings show that absent other factors not related to social environment, home ownership has a negative effect on nonresponse. That is, respondents who own their homes are less likely (b = -.26) not to participate than respondents who rent their homes. Results also show that the type of respondents' dwellings affects their decision to participate in self-report victim surveys. Respondents who reside in single-unit structures are more likely (b = .55) not to participate than respondents whose homes are located in a multi-unit structure. Finally, urbanicity is a determinant of nonresponse. Respondents whose homes are located in rural areas are more likely (b = .28) not to participate than respondents whose homes are in urban areas.
Table 12. Partially specified survey-weighted logistic regression using social environment factors to predict nonresponse(a) over multiple waves of interviews.

Variables                              b       SE      Wald     Exp(b)
Social environment variables
  Household income (dummy variables)
    Less than $20,000 (reference)
    $20,000 to $34,999                 -0.02   0.12      0.04    0.98
    $35,000 to $49,999                 -0.12   0.14      0.79    0.88
    $50,000 to $74,999                  0.03   0.12      0.05    1.03
    $75,000 and over                   -0.11   0.14      0.70    0.89
  Home ownership
    Rents (reference)
    Owns                               -0.26   0.14      3.55    0.77
  Single-structure home
    No (reference)
    Yes                                 0.55   0.16     11.95    1.73
  Home business
    No (reference)
    Yes                                -0.24   0.17      2.04    0.78
  Urbanicity
    Urban (reference)
    Rural                               0.28   0.11      6.40    1.32
Constant                               -3.04   0.16    356.06    0.05
-2 Log-Likelihood                     -2522.75
Nagelkerke R-squared                     0.00*

Note: Data file is 1996 to 1999 longitudinally linked National Crime Victimization Surveys.
a. Nonresponse is coded (0,1) where participating in an interview equals 0 and nonresponse equals 1. Unweighted n = 10,338. * p < .05

The third research question considers whether household attributes are predictors of nonresponse in self-report victim surveys, independent of other factors. Results show that many of the factors associated with the household exert significant effects on a decision to participate (Table 13). For example, there is a positive correlation between the number of adults residing in a sampled household and nonresponse (b = .29). The more adults in a household, the more likely a subject's interview will result in nonresponse. On the other hand, the more children that reside in a household, the less likely a subject's interview will result in nonresponse (b = -.11).
Table 13. Partially specified survey-weighted logistic regression using household attributes to predict nonresponse(a) over multiple waves of interviews.

Variables                                  b       SE     Wald     Exp(b)
Household(er) attribute variables
  Adults
    Household members 12 years and older    0.29   0.04    64.40    1.34
  Children
    Household members younger than 12      -0.11   0.05     4.89    0.90
  Age                                      -0.01   0.00    11.75    0.99
  Gender
    Male (reference)
    Female                                  0.36   0.09    16.40    1.44
  Race (dummy variables)
    White non-Hispanic (reference)                                  1.00
    Black non-Hispanic                      0.37   0.14     7.03    1.44
    Other non-Hispanic                     -0.13   0.22     0.34    0.88
    Hispanic, any race                      0.09   0.15     0.32    1.09
  Marital status (dummy variables)
    Married (reference)
    Never                                  -0.11   0.14     0.66    0.89
    Widowed                                -0.38   0.28     1.92    0.68
    Divorced                               -0.55   0.22     6.64    0.57
    Separated                              -0.25   0.31     0.63    0.78
  Educational attainment                    0.00   0.01     0.00    1.00
Constant                                   -3.04   0.32    90.25    0.05
-2 Log-Likelihood                         -2418.79
Nagelkerke R-squared                         0.02

Note: Data file is 1996 to 1999 longitudinally linked National Crime Victimization Surveys.
a. Nonresponse is coded (0,1) where participating in an interview equals 0 and nonresponse equals 1. Unweighted n = 10,338. * p < .05

Age also demonstrates a negative effect on nonresponse (b = -.01). Younger persons are more likely not to participate in self-report victim surveys than older respondents, absent other factors believed to influence nonresponse. Gender exerts a significant effect on nonresponse (b = .36), demonstrating that nonresponse is more likely among female than male respondents. Net of other individual demographic characteristics, black non-Hispanics (b = .37) are more likely than white non-Hispanics to refuse to participate in self-report victim surveys administered over multiple waves.
And findings presented in Table 13 demonstrate that divorced (b = -.55) respondents are less likely to refuse to participate than respondents who are reportedly married at the time of their interview.

Models presented in Tables 12 and 13 demonstrate the predictive power of social environment factors and household attributes on nonresponse measured in self-report victim surveys that are administered over multiple waves. If survey-design effects are suspected of producing respondent fatigue that manifests as nonresponse in contemporary longitudinal self-report victim surveys, then tests of survey-design effects should include these theoretically relevant variables in their models (see Groves & Couper, 1998). Therefore, these factors are incorporated in the models used to answer the third and final research question: "What is the relative influence of survey-design, social environment and household attributes on nonresponse, over multiple waves of interviews, when considered together?"

Table 14 presents output from a survey-weighted logistic regression model containing survey-design, social environment, and household attribute variables as indicators of individual nonresponse during multiple-wave self-report victim surveys. Again, while the overall model produces a significant proportional reduction in error, a minimal amount of variance in nonresponse is explained (Nagelkerke R-squared = .04).(33) Nevertheless, findings show that once theoretically relevant factors are considered, neither the number of prior interviews nor prior reported victimization impacts the likelihood of subsequent individual nonresponse. In short, these findings offer no support for either of the first two research hypotheses.

33. Again, a more comprehensive discussion of the models' explained variance is presented in the final chapter.

Table 14. Survey-weighted logistic regression predicting nonresponse(a) over multiple waves of interviews.

Variables                                            b        SE      Wald    Exp(b)
Survey-design variables
  Prior interviews (dummy variables)
    1 (reference)
    2                                               0.18     0.15     1.44     1.20
    3                                               0.09     0.15     0.33     1.09
    4                                               0.01     0.17     0.00     1.01
    5                                              -0.29     0.17     2.72     0.75
    6                                              -0.24     0.20     1.38     0.79
  Prior victimizations (dummy variables)
    0 (reference)
    1                                              -0.01     0.13     0.00     0.99
    2                                               0.28     0.21     1.71     1.32
    3 or more                                       0.41     0.27     2.36     1.51
  Survey mode (dummy variables)
    Non-interview (reference)
    Telephone                                      -1.24     0.15    71.60*    0.29
    Face-to-face                                   -1.41     0.12   144.38*    0.24
Social environment variables
  Household income (dummy variables)
    Less than $20,000 (reference)
    $20,000 to $34,999                             -0.07     0.13     0.35     0.93
    $35,000 to $49,999                             -0.30     0.15     3.78**   0.74
    $50,000 to $74,999                             -0.22     0.13     2.72     0.80
    $75,000 and over                               -0.38     0.16     5.68     0.68
  Home ownership
    Rents (reference)
    Owns                                           -0.13     0.15     0.71     0.88
  Single-structure home
    No (reference)
    Yes                                             0.39     0.17     5.42*    1.47
  Home business
    No (reference)
    Yes                                            -0.28     0.17     2.58     0.76
  Urbanicity
    Urban (reference)
    Rural                                           0.20     0.11     3.02**   1.22
Household attribute variables
  Adults (household members 12 years and older)     0.27     0.04    46.78*    1.31
  Children (household members younger than 12)     -0.11     0.05     4.83*    0.90

Table 14 (continued).

  Age                                              -0.01     0.00    10.41*    0.99
  Gender
    Male (reference)
    Female                                          0.31     0.09    11.21*    1.37
  Race/ethnicity (dummy variables)
    White non-Hispanic (reference)
    Black non-Hispanic                              0.17     0.14     1.41     1.19
    Other non-Hispanic                             -0.30     0.21     1.98     0.74
    Hispanic, any race                             -0.02     0.15     0.01     0.98
  Marital status (dummy variables)
    Married (reference)
    Never                                          -0.20     0.14     2.02     0.82
    Widowed                                        -0.40     0.28     2.14     0.67
    Divorced                                       -0.62     0.22     8.12*    0.54
    Separated                                      -0.36     0.34     1.15     0.70
  Educational attainment                            0.01     0.01     0.53     1.01
Constant                                           -2.04     0.36    32.65*    0.13
-2 Log-Likelihood                               -2318.66
Nagelkerke R-squared                                0.04*

Note: Data file is the 1996 to 1999 longitudinally linked National Crime Victimization Surveys. (a) Nonresponse is coded (0,1), where participating in an interview equals 0 and nonresponse equals 1. Unweighted n = 10,338. * p < .05. ** p < .10.

Participation in previous interviews, on the other hand, provides meaningful insight into whether a respondent's current interview will result in nonresponse. Net of other factors, fewer social environment variables are predictors of nonresponse when considered in the final model than when assessed independently of other factors. Specifically, there is a positive relationship between respondents who live in a single-unit structure (b = .39) and the likelihood that they will not participate in self-report victim surveys. Furthermore, there is a slightly positive relationship between urbanicity and nonresponse (b = .20; p < .10).

Respondents who live in rural areas are somewhat more likely not to participate than respondents residing in urban areas, net of other factors.

The impact of other household attributes on nonresponse is also observed in the final model. For example, the effect that the number of household members 12 years and older has on nonresponse is positive (b = .27), whereas the impact that the number of household members under 12 years has on nonresponse is negative (b = -.11). This means that households with more adults are more likely not to participate in interviews than households with fewer adults, and households with more children are less likely not to participate in interviews than households with fewer children.

Despite an absence of evidence supporting survey-design effects producing nonresponse, some demographic factors still predict nonresponse when considered in conjunction with household attribute variables and factors associated with survey design. Results from Table 14 show that both age and gender remain predictors of nonresponse, net of other theoretically relevant factors. As age increases, the likelihood that an interview will result in nonresponse decreases (b = -.01); younger respondents remain more likely to refuse to participate in self-report victim surveys than are older respondents. And females are still more likely not to participate during multiple waves of self-report victim surveys than are males (b = .31). Findings also suggest that divorced respondents remain less likely than currently married respondents to refuse to participate in self-report victim surveys (b = -.62). Collectively, important conclusions can be drawn from these results.
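A brief numerical illustration may make the link between the reported coefficients and the Exp(b) column concrete. The figures below use rounded coefficients from Table 14, so the results are approximate, and the "hypothetical respondent" is an invented case for illustration only.

    import math

    # A logit coefficient b corresponds to an odds ratio of exp(b).
    print(math.exp(0.31))    # ~1.36: higher odds of nonresponse for females
                             # (Table 14 reports 1.37 from the unrounded b)
    print(math.exp(-0.62))   # ~0.54: odds roughly halved for divorced respondents

    # A fitted logit converts to a probability via p = 1 / (1 + exp(-xb)).
    # Hypothetical respondent: intercept plus the female coefficient, with all
    # other predictors held at zero or at their reference categories.
    xb = -2.04 + 0.31
    print(1 / (1 + math.exp(-xb)))  # ~0.15 predicted probability of nonresponse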

Conclusions

The objective of the third and final perspective on respondent fatigue was to examine the effect of contemporary self-report victim survey design on nonresponse, controlling for theoretically significant factors that influence participation in household surveys. Based on results produced from the models above, we fail to reject the first null hypothesis in favor of the alternative. That is, no relationship exists between the likelihood that a respondent will participate in an interview and the number of prior interviews in which the respondent previously participated, while controlling for other relevant predictors of nonresponse. Furthermore, based on these results we fail to reject the second null hypothesis in favor of its alternative: No relationship exists between the likelihood that a respondent will participate during current interviews and the number of previously reported victimizations, while controlling for other relevant predictors of nonresponse. Both of these findings are important in that they provide no support for the notion that respondent fatigue manifests as nonresponse in contemporary self-report victim surveys.

The lack of support for the respondent fatigue argument is the key finding from this perspective. However, other important findings are observed that have implications for the victim-survey methodology. Results from the previous chapter suggested that survey mode influences individual nonresponse during the first two waves of surveys. However, findings from this study suggest that it is not how respondents' prior interviews are conducted that matters, but whether respondents participate in prior interviews. Understanding the relationship between past nonresponse and future nonresponse is important and can help survey administrators develop strategies to reduce survey nonresponse.

For example, Groves and Couper (1998) argue that if some information about the respondent, his or her social setting, or other household attributes can be obtained during initial contact, despite a noninterview, then follow-up contacts can be tailored in ways that increase the likelihood of participation in subsequent interview attempts. In these instances, they argue that letters sent to householders after an unsuccessful first contact would be more successful when the letter "acknowledged the householders' comments, expressed an understanding for their legitimacy, and then provided counterarguments tailored to them" (Groves & Couper, 1998, p. 309).

Finally, like victimization in general, some demographic characteristics such as age and gender are related to survey nonresponse. As noted above, if demographic characteristics are linked to both nonresponse and victimization, victimization estimates may be understated for certain subgroups. In these instances, the error associated with crime estimates is not attributable to specific survey design features. Rather, it is due to the fact that these subgroups are more likely to be victimized and less likely to participate in victim surveys. By identifying the effects of demographics on nonresponse, specific efforts can be made to retain these individuals in future data collection efforts. Longitudinal victim surveys can be tailored to address the specific reasons that certain subgroups that are more likely to be victimized have for not participating.

Although findings from the current study are informative, they fall short of being comprehensive. Results suggest the need for additional research on respondent fatigue. The current research borrowed heavily from household nonresponse theory as a theoretical guide. However, an important component identified by Groves and Couper (1998) could not be incorporated into the final model given specific data limitations.

Groves and Couper demonstrate the impact that interviewer characteristics have on nonresponse. Unfortunately, data from the NCVS Longitudinal Data File do not contain this information. Interviewer characteristics such as socio-demographic factors, experience, and expectations are strong influences on survey participation. The inability to include such factors in the current study was unavoidable. Future research into respondent fatigue associated with self-report victim surveys should strive to assess the nature and extent of the relationship between interviewer characteristics and nonresponse.

Each of the three perspectives presented herein provides important information about respondent fatigue as a potential source of nonsampling error in contemporary self-report victim surveys. However, the information from each is presented independent of the others. The final chapter provides a discussion of the findings produced from each perspective, collectively.

Discussion

For more than three decades, the National Crime Victimization Survey (NCVS) and its predecessor, the National Crime Survey (NCS), have been used to generate national estimates of crime victimization. While being developed, the self-report victim survey methodology benefited from a great deal of scientific scrutiny. For example, research was conducted that identified the best way to ask probing questions that reveal victimization; studies were conducted that helped determine the ideal length for a reference period; and research was undertaken to assess the validity of reported victimization (see Skogan, 1981). Efforts were also undertaken to investigate whether longer interviews, which resulted from respondents answering affirmatively to certain cue questions, resulted in a decrease in reported victimization during subsequent interviews. Initial results provided some support for the idea that certain survey-design features caused respondent fatigue (see Biderman et al., 1967; Lehnen & Reiss, 1978a, 1978b; see also Skogan, 1981).

Despite improvements in available data analytic software and significant modifications to the way in which national self-report victim-survey data are collected, initial findings of respondent fatigue believed to be associated with survey-design features of self-report victim surveys have not been revisited. The current study examined this issue from three perspectives. A discussion of the findings associated with each follows.

Respondent fatigue and survey-design effects

The initial study examined respondent fatigue by focusing on the relationship between survey-design features of self-report victim surveys and their effects on reported victimization. Results provided mixed support for the fatigue-bias argument. That is, respondents exposed to more than one prior interview were less likely to report victimization than respondents who were exposed to only one prior interview; however, the relationship between prior reported victimization and victimization reported during a current interview was less supportive of a fatigue-bias argument.

The mixed results might be partially explained by the data used in the analyses. Unbounded interviews were excluded from the data used in the initial study. Including unbounded interviews would have raised initial victimization estimates and called into question the conclusions reached about subsequent reported victimization. Respondents' first bounded interviews were used as the reference category to assess the relative effect of the number of prior interviews on the likelihood that a respondent would report victimization. However, a systematic shift in survey mode has taken place by the respondent's second interview (i.e., their first unbounded interview). This shift has important consequences that could have masked the effect that prior reported victimization has on respondent fatigue.

The survey mode of about 85% of the cases used in the initial study was the telephone (see Table 1). The disparity between the number of telephone and face-to-face interviews is due to NCVS protocol. Interviewers are trained to conduct every initial NCVS interview with the household respondent in person. During the initial interview, the household respondent is asked whether subsequent interviews (and interviews with other members of the household not available at the time the household respondent's interview is completed) can be completed over the telephone.

Most household respondents agree to the change in mode. After excluding unbounded interviews, findings from the first perspective show that respondents are less likely to report victimization if the interview is conducted in person. Therefore, NCVS protocol could be producing an overall underestimate of fatigue, since it creates a reduction in the type of interview that is associated with less reported victimization. Despite possibly underestimating a fatigue effect, findings reveal an important relationship between reported victimization during previous interviews and the likelihood that victimization is reported during a current interview, which goes against the grain of a fatigue-bias argument. This finding is meaningful and raises two important questions.

First, the relationship between victimization reported during prior interviews and victimization reported during current interviews demonstrates that crime is not distributed evenly across individuals (see Sampson & Lauritsen, 1994). Relatively few individuals account for most reported victimizations. During the initial development of a national survey to measure crime, different approaches were discussed (see National Research Council, 1976). Some researchers recommended a measure of propensity for victimization, while others argued for a measure of prevalence. Findings from the first perspective, combined with the decrease in victimization prevalence measured over the last decade, suggest that a new perspective on crime may be worthwhile. Current findings raise the question: Has the time come to supplement current measures of victimization prevalence with measures of victimization propensity?

Second, the initial investigation into respondent fatigue combines all types of victimization in the dependent variable.34 It is possible that a response effect associated with prior reported victimization might manifest for certain types of crime and not others. By considering all types of crime together, a fatigue effect that may manifest for a certain type of crime might be masked by other types that do not produce a similar effect. If more types of victimization produce a rapport effect than a fatigue effect when reported, for example, it could explain the relationship observed in the first study between prior reported victimizations and the likelihood that victimization would be reported in a current interview. The question then becomes: Are current findings associated with victimization reporting patterns, which fail to support a fatigue-bias argument, a byproduct of not considering different forms of victimization independent of one another? The answer to this question is beyond the scope of the current study, but future research should attempt to address it.

Again, when viewed collectively, results from the first perspective on respondent fatigue are somewhat conflicting. Survey-design effects such as the number of prior interviews and survey mode support the notion that respondent fatigue may manifest in contemporary self-report victim surveys; however, the effect of prior reported victimization is less persuasive. The analytic approach employed to investigate the relationship between survey-design effects and respondent fatigue, and the correspondingly negligible amount of explained variance produced by the models, might be contributing to the confusion. Both are addressed below in greater detail.

34 See footnote 4 on page 19.

Analytic methods employed during the initial perspective may explain some of the apparently inconsistent results that emerged. As noted above, crime is a rare phenomenon, a claim well illustrated by the frequency distribution of the dichotomous dependent variable used in the analyses. Logistic regression techniques for analyzing rare events data have recently been developed (King & Zeng, 2001). King and Zeng argue that normal logistic regression techniques produce significant underestimates of the probability of rare events, such as reported victimization, and in their research they demonstrate how substantial these underestimates can be in models that do not employ rare events logistic regression techniques. While survey-weighted logistic regression is available in STATA, survey-weighted rare events logistic regression is not. The extent to which survey-weighted rare events logistic regression would have improved the probability estimates produced by the models is therefore unclear. Until a rare events technique is developed that includes a component that controls for complex sampling methods, its full potential cannot be realized with these data. Nevertheless, the current analytic method (i.e., survey-weighted logistic regression) may not be the most appropriate method for these data and may be a contributing factor to the seemingly inconsistent findings produced in the first perspective on respondent fatigue.35

The limited amount of explained variance produced by the models may also be a source of confusion.

35 This issue applies to all the models used in this study, since all employ survey-weighted logistic regression and not rare events logistic regression.
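To give a flavor of the technique, one element of King and Zeng's (2001) approach is a prior correction applied to the intercept when the proportion of events in the analyzed sample differs from the true population rate. The sketch below shows only that correction; a full implementation also adjusts the coefficient estimates for small-sample bias, and all numeric inputs here are hypothetical.

    import math

    # Prior correction to the intercept (King & Zeng, 2001): the slope
    # estimates are left alone, but the intercept is shifted to reflect the
    # true event rate tau rather than the sample proportion of events ybar.
    def corrected_intercept(b0_hat, tau, ybar):
        return b0_hat - math.log(((1 - tau) / tau) * (ybar / (1 - ybar)))

    # Hypothetical figures: events make up 6% of the analyzed sample but only
    # 3% of the population.
    print(corrected_intercept(b0_hat=-2.5, tau=0.03, ybar=0.06))  # ~-3.22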

In multivariate linear regression, R-squared is used to quantify a model's goodness of fit and indicates "the proportion of variation in Y explained by all the independent variables" (Lewis-Beck, 1980, p. 53). Obviously, researchers strive to produce models that generate large R-squared values. While the model presented in Table 4 creates a significant proportional reduction in error, only 3% of variance in reported victimization is explained. The model's explained variance is estimated by Nagelkerke R-squared, which is an approximation of the R-squared value produced in linear regression (Nagelkerke, 1991). Its correspondingly low value for the model presented in Table 4 may be explained by the skewed distribution of the dependent variable.

A dichotomous dependent variable's variance is directly tied to its frequency distribution. Variance for a dichotomous dependent variable is at a maximum when one half of its observed values fall within one of the categories and the other half fall within the other category (see Cox & Snell, 1989; see also Nagelkerke, 1991). Conversely, variance for a dichotomous dependent variable decreases as the split of its values moves farther away from fifty-fifty. Table 1 reveals that respondents do not report victimization in approximately 94% of all current interviews. This means that the variance associated with the dichotomous dependent variable presented in Table 4 is extremely low, which would make explaining the variance more difficult than it would be had the distribution of cases been closer to a fifty-fifty split. So while the observed R-squared value associated with the model represented in Table 4 is much lower than desired, it may be a product of the nature of the dichotomous dependent variable's distribution.36 It may not necessarily reflect a poorly constructed model.

36 This issue also applies to all the subsequent models used after the first perspective, since the dependent variable used for each is heavily skewed.
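Both quantities are easy to compute directly. Cox and Snell's R-squared is 1 - (L0/L1)^(2/n), where L0 and L1 are the null and fitted likelihoods, and Nagelkerke's version divides it by its maximum attainable value, 1 - L0^(2/n); the variance of a dichotomous variable with mean p is p(1 - p). The log-likelihood values in the sketch below are illustrative figures chosen only to show the calculation, not values taken from the models above.

    import math

    # Nagelkerke R-squared rescales Cox & Snell's R-squared by its maximum.
    def nagelkerke_r2(ll_null, ll_model, n):
        cox_snell = 1 - math.exp((2 / n) * (ll_null - ll_model))
        max_cox_snell = 1 - math.exp((2 / n) * ll_null)
        return cox_snell / max_cox_snell

    print(nagelkerke_r2(ll_null=-2500.0, ll_model=-2418.79, n=10338))  # ~0.04

    # Variance of a dichotomous variable is p * (1 - p):
    print(0.50 * 0.50)  # 0.2500, the maximum, at a fifty-fifty split
    print(0.06 * 0.94)  # 0.0564 when only 6% of interviews report victimization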

Combined, the analytic technique employed (i.e., survey-weighted logistic regression) and the skewed distribution of the dependent variable might be factors that contribute to the tendency of some to view the findings produced from the first perspective with caution. Nevertheless, the initial investigation produced meaningful results and provided an appropriate platform from which to expand the respondent fatigue study. In an attempt to add to the knowledge produced from the first approach, a subsequent investigation into respondent fatigue and self-report victim surveys was undertaken.

Modifying the operational measure of respondent fatigue

The second perspective examined respondent fatigue in self-report victim surveys using a more conceptually appealing measure of fatigue: nonresponse. The analysis utilized only initial and subsequent waves of interviews. Unlike the findings produced in the initial approach, results failed to demonstrate support for the idea that a link between survey design and respondent fatigue exists once individual correlates of victimization are taken into account. However, results suggested that systematic nonresponse is associated with certain individual demographics.

Some of the links between nonresponse and individual characteristics can potentially bias victimization estimates downward for some populations. For example, minorities are more likely than non-Hispanic whites to refuse to participate during the second wave of self-report victim surveys.

Minorities are victimized at disproportionately higher rates than non-Hispanic whites. Combined, this could produce victimization rates that are underestimated for minorities. Similarly, after their initial exposure to a survey, men are more likely to refuse to participate than women, and younger respondents are more likely to refuse to participate than older respondents. Men are more likely to be victimized than are women, and age and victimization are inversely correlated. Again, if men and younger respondents refuse to participate in self-report victim surveys at rates that are systematically different from their counterparts, then estimates produced from victim surveys for each group could be downwardly biased. Modifications to current self-report victim survey methodology could improve overall victimization estimates, especially for some populations. Current methodology could be tailored in a way that addresses individual correlates of nonresponse and victimization.

For example, Hispanics are more likely than white non-Hispanics to refuse to participate during the second wave of interviews if the initial interview is conducted in person (see Table 9). A similar pattern of nonresponse between Hispanics and non-Hispanic whites does not emerge when the initial survey is conducted over the telephone. Research shows that Hispanics trust the police less than white non-Hispanics, and report some crimes to the police at lower levels than their white, non-Hispanic counterparts (Hart & Rennison, 2003; Ong & Jenks, 2004; Rennison, forthcoming; Skogan & Hartnett, 1997; Thomas & Burns, 2005). It is possible that Hispanics see official victim-survey interviewers as authoritarian figures associated with the criminal justice system, and during in-person interviews their distrust facilitates a decision not to participate.

Perhaps one approach to reducing nonresponse among Hispanics would be to conduct more initial interviews over the telephone. Additional training could also be provided to survey interviewers, focused on respondents who are characteristically more likely to refuse to participate.

Taking a proactive approach that targets groups more likely to refuse to participate could help to ultimately produce more accurate estimates of victimization, especially for those groups that are both more likely not to participate and more likely to be victims of crime. Certainly, any modification to an established self-report victim survey methodology like that associated with the NCVS would be costly; nevertheless, the second study demonstrates the important impact nonresponse has on the production of victimization estimates. It also provides support for considering changes to the current methodology. Finally, the second perspective raises an important question: Would the patterns of nonresponse observed hold true over multiple waves of surveys?

Assessing respondent fatigue over multiple waves of self-report victim surveys

Examining respondent fatigue over multiple waves of interviews provided additional insight into this potential source of nonsampling error. Survey-design effects were assessed to determine whether they influence respondents' decisions not to participate in multiple waves of victim surveys, while controlling for factors that contribute to household nonresponse (Groves & Couper, 1998). Overall, survey-design effects failed to produce nonresponse in contemporary longitudinal self-report victim surveys. Although findings from the final perspective did not support the notion that the prior number of interviews or prior reported victimizations predict nonresponse, they do point to ways that systematic nonresponse in self-report victim surveys can be reduced, thereby improving victimization estimates.

Results indicated that participants in self-report victim surveys tend to continue participating, whereas those who fail to participate tend to continue not participating. Researchers have focused on introductory comments made by interviewers and their effects on nonresponse as one area that could affect respondents' decisions to initially participate in surveys (see Groves & Lyberg, 1988). However, studies undertaken to examine the effects of introductory statements on nonresponse are inconclusive (Dillman et al., 1976; O'Neil, Groves & Cannell, 1979). Nevertheless, interviewers and survey administrators must do all they can to obtain an initial interview, given the pattern that emerged in the final perspective. Contemporary national victim-survey interviewers undergo extensive training, including being provided with scripted introductions for both in-person and telephone surveys. However, information about what is actually said during the survey's introduction, along with other information regarding the interaction between interviewer and respondent, is not collected. Until it is, assessments about the effects of introductory statements on initial survey nonresponse cannot be made.

Social environment and household attribute effects on individual nonresponse were also examined, and findings provide insight into ways to improve overall estimates of victimization produced by self-report surveys. Lauritsen and Schaum (2004) identify family structure as an important determinant of victimization. Victimization is less likely to be recorded in households comprised of a single woman than in households comprised of a single woman with children. Moreover, victimization is less likely to be recorded in households comprised of a married couple (see Lauritsen & Schaum, 2004).

Results from the final perspective reveal that respondents living in homes comprised of more adults are less likely to participate in self-report victim surveys, whereas those living in homes comprised of more children are more likely to participate. If victimization is correlated to the number of adults and children in a sampled household in one direction and nonresponse is correlated to similar household attributes in the opposite direction, then victimization estimates for these groups could be downwardly biased. Although nothing can be done to change the composition of sampled households, steps can be taken to improve the strategies for obtaining interviews among respondents living in homes comprised of several adults or of several children. Improving interviewer training is one possible solution.

Other correlates of nonresponse that are associated with household attributes are evident. For example, respondents' age and gender predict nonresponse. As noted during the discussion of findings produced in the second perspective, if men and younger respondents systematically refuse to participate in self-report victim surveys conducted over multiple waves, estimates produced from victim surveys will be downwardly biased. Attempts should be made to encourage participation among these subpopulations in multiple-wave victim surveys. Otherwise, the validity of victimization estimates like those produced by contemporary victim surveys, for certain subgroups of the population, is questionable.

Summary

Does nonsampling error, produced by respondent fatigue, manifest in contemporary self-report victim surveys? The answer to this seemingly straightforward question is, "It depends." As the findings of this study collectively demonstrate, it depends on how respondent fatigue is operationalized.

If respondent fatigue is defined in terms of response bias (i.e., reported victimization), then there is limited support for the argument that it does. On the other hand, if fatigue is defined in terms of nonresponse bias (i.e., non-participation), then the argument that it does is far less convincing. With regard to nonresponse, it also depends on the degree to which available data can support models sufficient to gauge fatigue. Due to data limitations, the current research is unable to assess the role that vital components of Groves and Couper's (1998) theory of household nonresponse play in producing nonresponse (see Figure 5). Future research must incorporate information regarding interviewers (i.e., interviewer experience, expectations, and demographics), as well as information concerning householder-interviewer interactions, into models predicting nonresponse if a more complete understanding of fatigue bias (which might manifest in terms of nonresponse) is to be realized. Until then, the full effect of respondent fatigue in contemporary self-report victim surveys cannot be fully understood.

References

Addington, L. (2005). Disentangling the effects of bounding and mobility on reports of criminal victimization. Journal of Quantitative Criminology, 21(3), 321-343.
Athey, K. R., Coleman, J. E., Reitman, A. P. & Tang, J. (1960). Two experiments showing the effect of interviewers' racial background on responses to questionnaires concerning racial issues. Journal of Applied Psychology, 44, 44-46.
Ayidiya, S. A. & McClendon, M. J. (1990). Response effects in mail surveys. Public Opinion Quarterly, 54, 229-247.
Bachman, R. (1994). Violence and theft in the workplace. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 148199).
Bachman, R. & Saltzman, L. E. (1995). Violence against women: Estimates from the redesigned survey. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 154348).
Bachman, R. & Taylor, B. M. (1994). The measurement of family violence and rape by the redesigned NCVS. Justice Quarterly, 11, 499-512.
Bailey, L., Moore, T. F. & Bailar, B. (1978). An interview variance study for the eight impact cities of the national crime surveys' cities sample. Journal of the American Statistical Association, 73, 16-23.
Balvanz, B. (1979). The effects of the National Survey of Crime Severity on the victimization rates determined from the National Crime Survey. Unpublished memorandum, U.S. Department of Commerce, Bureau of the Census, Washington, DC.
Bastian, L. D. & Taylor, B. M. (1991). School crime, 1991. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 131645).
Bastian, L. D. & Taylor, B. M. (1994). Young black male victims. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 147004).
Baumer, E., Horney, J., Felson, R. & Lauritsen, J. L. (2003). Neighborhood disadvantage and the nature of violence. Criminology, 41(1), 39-72.

Belson, W. A. (1968). The extent of stealing by London boys and some of its origins. Advancement of Science, 25, 171-184.
Benton, J. E. & Daly, J. L. (1991). A question order effect in a local government survey. Public Opinion Quarterly, 55, 640-642.
Berk, M. L., Mathiowetz, N. A., Ward, E. P. & Wite, A. A. (1987). The effect of prepaid and promised incentives: Results of a controlled experiment. Journal of Official Statistics, 3, 449-457.
Biderman, A. D. (1967). Surveys of population samples for estimating crime incidence. The Annals of the American Academy of Political and Social Sciences, 374, 16-33.
Biderman, A. D. (1970). Time distortions of victimization and mnemonic effects. Unpublished memorandum, Bureau of Social Science Research, Washington, DC.
Biderman, A. D. & Cantor, D. (1984). A longitudinal analysis of bounding, respondent conditioning and mobility as sources of panel bias in the National Crime Survey. Paper presented at the proceedings of the Survey Research Methods Section, American Statistical Association, Alexandria, VA.
Biderman, A. D., Cantor, D. & Reiss, A. J., Jr. (1982). A quasi-experimental analysis of personal victimization reporting by household respondents in the National Crime Survey. Paper presented at the proceedings of the Survey Research Methods Section, American Statistical Association, Alexandria, VA.
Biderman, A. D., Cantor, D. & Reiss, A. J., Jr. (1984). Procedural biases and conceptual incongruities in operationalizations of the distinction between household and personal victimization. Unpublished memorandum, Bureau of Social Science Research, Washington, DC.
Biderman, A. D., Johnson, L. A., McIntyre, J. & Weir, A. W. (1967). Field Survey I: Report on a pilot study in the District of Columbia on victimization and attitudes toward law enforcement. President's Commission on Law Enforcement and Administration of Justice. Washington, DC: U.S. Government Printing Office.
Biderman, A. D. & Lynch, J. P. (1981). Recency bias in data on self-reported victimization. Paper presented at the proceedings of the Social Statistics Section, American Statistical Association, Alexandria, VA.
Biderman, A. D. & Lynch, J. P. (1991). Understanding crime incidence statistics: Why the UCR diverges from the NCVS. New York: Springer-Verlag.

Biderman, A. D. & Reiss, A. J. (1967). On exploring the dark figure of crime. The Annals of the American Academy of Political and Social Sciences, 374, 1-15.
Blackmore, J. (1974). The relationship between self-reported delinquency and official convictions amongst adolescent boys. British Journal of Criminology, 14, 172-176.
Blau, P. M. (1964). Exchange and power in social life. NY: John Wiley and Sons.
Blumberg, H. H., Fuller, C. & Hare, A. P. (1974). Response rates in postal surveys. Public Opinion Quarterly, 38, 113-123.
Blumstein, A. (2000). Disaggregating the violence trends. In A. Blumstein & J. Wallman (Eds.), The crime drop in America. Cambridge, MA: University Press.
Blumstein, A. & Wallman, J. (2000). The recent rise and fall of American violence. In A. Blumstein & J. Wallman (Eds.), The crime drop in America. Cambridge, MA: University Press.
Booth, A., Johnson, D. R. & Choldin, H. M. (1977). Correlates of city crime rates: Victimization surveys versus official statistics. Social Problems, 25(2), 187-197.
Bradburn, N. M. (1983). Response effects. In P. H. Rossi, J. D. Wright & A. B. Anderson (Eds.), Handbook of survey research. Academic Press: New York, NY.
Bradburn, N. M., Sudman, S., Blair, E. & Stocking, C. B. (1978). Question threat and response bias. Public Opinion Quarterly, 42, 221-234.
Brame, R., Paternoster, R., Mazerolle, P. & Piquero, A. (1998). Testing for the equality of maximum-likelihood regression coefficients between two independent equations. Journal of Quantitative Criminology, 14(3), 245-261.
Braukmann, C. J., Kirigin, K. A. & Wolf, M. M. (1979). Social learning and social control perspectives in group home delinquency treatment research. Paper presented at the Annual Meeting of the American Society of Criminology, Philadelphia, PA.
Bureau of Justice Statistics (2002). National Crime Victimization Survey Longitudinal File, 1996-1999 [Computer file]. Conducted by U.S. Department of Commerce, Bureau of the Census.
Bushery, J. M. (1978). NCS noninterview rates by time-in-sample. Unpublished memorandum, U.S. Department of Commerce, Bureau of the Census, Washington, DC.

Bushery, J. M. (1981). Results of the NCS reference period research experiment. Unpublished memorandum, U.S. Department of Commerce, Bureau of the Census, Washington, DC.
Butz, W. P. (1985). Data confidentiality and public perceptions: The case of the European census. Paper presented at the meeting of the American Statistical Association, Washington, DC.
Campbell, B. (1981). Race-of-interviewer effects among southern adolescents. Public Opinion Quarterly, 45, 231-244.
Cantor, D. & Lynch, J. P. (2000). Self-report surveys as measures of crime and criminal victimization. In D. Duffee (Series ed.), Criminal justice 2000: Vol. 4. Measurement and analysis of crime and justice (pp. 85-138). Washington, DC: Government Printing Office.
Cantor, D. & Lynch, J. P. (2005). Exploring the effects of changes in design on the analytical uses of the NCVS data. Journal of Quantitative Criminology, 21(3), 293-320.
Carmines, E. G. & Zeller, R. A. (1979). Reliability and validity assessment. Beverly Hills, CA: Sage.
Carpenter, E. H. (1974-1975). Personalizing mail surveys: A replication and reassessment. Public Opinion Quarterly, 38, 614-620.
Catania, J., Gibson, D., Chitwood, D. & Coates, T. (1990). Methodological problems in AIDS behavioral research: Influences on measurement error and participation bias in studies of sexual behavior. Psychological Bulletin, 108, 339-362.
Catania, J., Gibson, D., Marin, B., Coates, T. & Greenblatt, R. (1990). Response bias in assessing sexual behaviors relevant to HIV transmission. Evaluation and Program Planning, 13, 19-29.
Catalano, S. M. (2005). Criminal victimization, 2004. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 210674).
Catalano, S. M. (2004). Criminal victimization, 2003. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 205455).
Chilton, R. & Jarvis, J. (1999). Victims and offenders in two cities' statistics programs: A comparison of the National Incident-Based Reporting System (NIBRS) and the National Crime Victimization Survey (NCVS). Journal of Quantitative Criminology, 15(2), 193-205.

Chromy, J. & Horvitz, D. (1978). The use of monetary incentives in national assessment household surveys. Journal of the American Statistical Association, 73, 473-478.
Church, A. H. (1993). Estimating the effect of incentives on mail survey response rates: A meta-analysis. Public Opinion Quarterly, 57, 62-79.
Cohen, L. E. & Cantor, D. (1981). Residential burglary in the United States: Life-style and demographic factors associated with the probability of victimization. Journal of Research in Crime and Delinquency, 18(1), 113-127.
Cohen, L. E. & Felson, M. (1979). Social change and crime rate trends: A routine activity approach. American Sociological Review, 44, 588-608.
Combs, L. C. & Freedman, R. (1964). Use of telephone interviews in a longitudinal fertility study. Public Opinion Quarterly, 28, 112-117.
Cotter, P. R., Cohen, J. & Coulter, P. B. (1982). Race-of-interviewer effects in telephone interviews. Public Opinion Quarterly, 46, 278-284.
Couch, A. & Keniston, K. (1960). Yeasayers and naysayers: Agreeing response set as a personality variable. Journal of Abnormal and Social Psychology, 60, 151-174.
Couper, M. P. & Groves, R. M. (1991). The role of the interviewer in survey participation. Paper presented at the annual conference of the American Association for Public Opinion Research, Phoenix, AZ.
Cowan, C. D., Murphy, L. R. & Wiener, J. (1979). Effects of supplemental questions on victimization estimates from the National Crime Survey. In R. G. Lehnen & W. G. Skogan (Eds.), The National Crime Survey: Working papers. Volume II: Methodological studies. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 90307).
Craven, D. (1996). Female victims of violent crime. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 162602).
Craven, D. (1997). Sex differences in violent victimization, 1994. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 164508).
Crowne, D. P. & Marlow, D. (1964). The approval motive: Studies in evaluative dependence. NY: Wiley.
Cox, D. R. & Snell, E. J. (1989). The analysis of binary data (2nd ed.). London: Chapman and Hall.

Davis, D. W. (1997). Nonrandom measurement error and race of interviewer effects among African Americans. Public Opinion Quarterly, 61, 183-207.
de Leeuw, E. & de Heer, W. (2002). Survey nonresponse in design, data collection, and analysis. In R. M. Groves, D. A. Dillman, J. L. Eltinge & R. J. A. Little (Eds.), Survey nonresponse. New York, NY: Wiley.
Decker, S. H. (1980). Criminalization, victimization and structural correlates of twenty-six American cities. Century Twenty-One Publishing: Saratoga, CA.
DeVoe, J. F., Peter, K., Kaufman, P., Ruddy, S. A., Miller, A. K., Planty, M., et al. (2003). Indicators of school crime and safety, 2003. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 201257).
Dillman, D. A. (1978). Mail and telephone surveys: The total design method. NY: Wiley Interscience.
Dillman, D. A. (2000). Mail and internet surveys: The tailored design method. NY: John Wiley & Sons.
Dillman, D. A., Eltinge, J. L., Groves, R. M. & Little, R. J. A. (2002). Survey nonresponse in design, data collection, and analysis. In R. M. Groves, D. A. Dillman, J. L. Eltinge & R. J. A. Little (Eds.), Survey nonresponse. New York, NY: Wiley.
Dillman, D. A., Gallegos, J. G. & Frey, J. H. (1976). Reducing refusal rates for telephone interviews. Public Opinion Quarterly, 40, 66-78.
Dillman, D. A., Singer, E., Clark, J. R. & Treat, J. B. (1996). Effects of benefits appeals, mandatory appeals, and variations in statements of confidentiality on completion rates for census questionnaires. Public Opinion Quarterly, 60, 376-389.
Dodge, R. W. (1970). Victim recall pretest. Unpublished memorandum, U.S. Department of Commerce, Bureau of the Census, Washington, DC.
Dodge, R. W. (1975). Series victimization: What is to be done? In R. G. Lehnen & W. G. Skogan (Eds.), The National Crime Survey: Working papers. Volume II: Methodological studies. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 90307).
Dodge, R. W. (1977a). A preliminary inquiry into series victimizations. In R. G. Lehnen & W. G. Skogan (Eds.), The National Crime Survey: Working papers. Volume II: Methodological studies. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 90307).

Dodge, R. W. (1977b). Comparison of victimizations as reported on the screen questions with their final classification: 1976. In R. G. Lehnen & W. G. Skogan (Eds.), The National Crime Survey: Working papers. Volume II: Methodological studies. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 90307).
Dodge, R. W. & Lentzner, H. R. (1978). Patterns of personal series incidents in the National Crime Survey. In R. G. Lehnen & W. G. Skogan (Eds.), The National Crime Survey: Working papers. Volume II: Methodological studies. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 90307).
Dodge, R. W. & Turner, A. G. (1971). Methodological foundations for establishing a national survey of victimization. In R. G. Lehnen & W. G. Skogan (Eds.), The National Crime Survey: Working papers. Volume I: Current and historical perspectives. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 75374).
Dohrenwend, B. S. (1965). Some effects of open and closed questions on respondents' answers. Human Organization, 24, 175-184.
Dohrenwend, B. S., Colombotos, J. & Dohrenwend, J. (1968-1969). Social distance and interviewer effects. Public Opinion Quarterly, 32, 410-422.
Dugan, L. (1999). The effect of criminal victimization on a household's moving decision. Criminology, 37(4), 901-930.
Dugan, L., Nagin, D. & Rosenfeld, R. (1999). Explaining the decline in intimate partner homicide: The effects of changing domesticity, women's status, and domestic violence resources. Homicide Studies, 3(3), 187-214.
Dugan, L., Nagin, D. & Rosenfeld, R. (2003). Exposure reduction or retaliation? The effects of domestic violence resources on intimate partner homicide. Law and Society, 27(1), 169-198.
Duhart, D. T. (2000). Urban, suburban and rural victimization, 1993-1998. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 182031).
Duhart, D. T. (2001). Violence in the workplace, 1993-99. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 190076).
Farrington, D. P. (1973). Self-reports of deviant behavior: Predictive and stable? Journal of Criminal Law and Criminology, 64, 99-110.

Federal Bureau of Investigation (2004). Crime in the United States, 2003. Washington, DC: Government Printing Office.
Finkel, S. E., Guterbock, T. M. & Borg, M. J. (1991). Race-of-interviewer effects in a preelection poll: Virginia 1989. Public Opinion Quarterly, 55, 313-330.
Finkelhor, D., Asdigian, N. & Dziuba-Leatherman, J. (1995). Victimization prevention programs for children: A follow-up. American Journal of Public Health, 85, 1684-1689.
Fisher, B. S., Cullen, F. T. & Turner, M. G. (2000). The sexual victimization of college women. Washington, DC: U.S. National Institute of Justice: Government Printing Office. (NCJ 182369).
Freeman, J. & Butler, E. W. (1976). Some sources of interview variance in surveys. Public Opinion Quarterly, 40, 79-91.
Gibbs, J. J. (1979). Crimes against persons in urban, suburban, and rural areas: A comparative analysis of victimization rates. Washington, DC: National Criminal Justice Information & Statistics Service: Government Printing Office. (NCJ 53551).
Godwin, R. K. (1979). The consequences of large monetary incentives in mail surveys of elites. Public Opinion Quarterly, 52, 378-387.
Gottfredson, M. R. & Hindelang, M. J. (1977). A consideration of memory decay and telescoping biases in victimization surveys. Journal of Criminal Justice, 5, 202-216.
Goyder, J. C. (1987). The silent majority: Nonrespondents on sample surveys. Boulder, CO: Westview Press.
Greenfeld, L. A. & Smith, S. K. (1999). American Indians and crime. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 173386).
Greenfeld, L. A., Rand, M. R., Craven, D., Klaus, P., Perkins, C. A., Ringel, C., et al. (1998). Violence by intimates: Analysis of data on crimes by current or former spouses, boyfriends, and girlfriends. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 167237).
Groves, R. M. (1977). A comparison of national telephone and personal interview surveys: Some response and nonresponse differences. Paper presented at the proceedings of the Survey Research Methods Section, American Statistical Association, Alexandria, VA.

Groves, R. M., Cialdini, R. B. & Couper, M. P. (1992). Understanding the decision to participate in a survey. Public Opinion Quarterly, 56, 475-495.
Groves, R. M. & Couper, M. P. (1992). Correlates of nonresponse in personal visit surveys. Paper presented at the proceedings of the Survey Research Methods Section, American Statistical Association, Alexandria, VA.
Groves, R. M. & Couper, M. P. (1993). Multivariate analysis of nonresponse in personal visit surveys. Paper presented at the proceedings of the Survey Research Methods Section, American Statistical Association, Alexandria, VA.
Groves, R. M. & Couper, M. P. (1998). Nonresponse in household interview surveys. New York, NY: Wiley.
Groves, R. M., Dillman, D. A., Eltinge, J. L. & Little, R. J. A. (2002). Survey nonresponse. New York, NY: Wiley.
Groves, R. M. & Kahn, R. L. (1979). Surveys by telephone. New York: Academic Press.
Groves, R. M. & Lyberg, L. E. (1988). An overview of nonresponse issues in telephone surveys. In R. M. Groves, P. P. Biemer, L. E. Lyberg, J. T. Massey, W. L. Nicholls & J. Waksberg (Eds.), Telephone survey methodology. New York, NY: Wiley.
Groves, R. M. & Mathiowetz, N. A. (1984). Computer assisted telephone interviewing: Effects on interviewers and respondents. Public Opinion Quarterly, 48, 356-369.
Guerry, A. M., Whitt, H. P. & Reinking, V. W. (2002). A translation of Andre-Michel Guerry's Essay on the moral statistics of France: A sociological report to the French Academy of Science. Lewiston, NY: Edwin Mellen Press.
Hanson, R. H. & Marks, E. S. (1958). Influence of the interviewer on the accuracy of survey results. Journal of the American Statistical Association, 53, 635-655.
Harris-Kojetin, B. A. & Tucker, C. (1998). Longitudinal nonresponse in the Current Population Survey (CPS). ZUMA Nachrichten Spezial, 4, 263-272.
Harrison, P. M. & Beck, A. J. (2005). Prison and jail inmates at midyear 2004. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 208801).
Harrison, P. M. & Karberg, J. C. (2004). Prison and jail inmates at midyear 2003. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 203947).

Hardt, R. H. & Petersen-Hardt, S. (1977). On determining the quality of the delinquency self-report method. Journal of Research in Crime and Delinquency, 14, 247-261.
Hart, T. C. (1998). Causes and consequences of juvenile crime and violence: Public attitudes and question-order effect. American Journal of Criminal Justice, 23(1), 129-143.
Hart, T. C. (2003). Violent victimization of college students, 1995-2000. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 196143).
Hart, T. C. & Reaves, B. A. (1999). Felony defendants in large urban counties, 1996. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 176981).
Hart, T. C. & Rennison, C. M. (2003). Reporting crime to the police, 1992-2001. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 195710).
Hatchett, S. & Schuman, H. (1975-1976). White respondents and race of interviewer effects. Public Opinion Quarterly, 39, 523-528.
Hathaway, R. S., Monachesi, E. D. & Young, L. A. (1960). Delinquency rates and personality. Journal of Criminal Law, Criminology, and Police Science, 50, 433-440.
Hayes, D. P. (1964). Item order and Guttman scales. American Journal of Sociology, 70, 51-58.
Henson, R., Roth, A. & Cannell, C. F. (1974). Personal vs. telephone interviews on reporting of psychiatric symptomatology. Ann Arbor: University of Michigan.
Hindelang, M. J. (1978). Race and involvement in crimes. American Sociological Review, 43, 93-109.
Hindelang, M. J., Gottfredson, M. R. & Garofalo, J. (1978). Victims of personal crime: An empirical foundation for a theory of personal victimization. Cambridge, MA: Ballinger.
Hindelang, M. J., Hirschi, T. & Weis, J. G. (1981). Measuring delinquency. Beverly Hills, CA: Sage.
Hochstim, J. R. (1967). A critical comparison of three strategies of collecting data from households. Journal of the American Statistical Association, 62, 976-989.

Homans, G. C. (1961). Social behavior: Its elementary forms. NY: Harcourt, Brace and World.
Hosmer, D. W. & Lemeshow, S. (2000). Applied logistic regression (2nd ed.). New York, NY: Wiley.
Hubble, D. L. & Wilder, B. E. (1988). Preliminary results from the National Crime Survey (NCS) CATI experiment. Paper presented at the proceedings of the Survey Research Methods Section, American Statistical Association, Alexandria, VA.
Huizinga, D. & Elliott, D. S. (1986). Reassessing the reliability and validity of self-report delinquency measures. Journal of Quantitative Criminology, 2(4), 293-327.
Hyman, H. A. (1954). Interviewing in social research. Chicago: University of Chicago Press.
James, J. M. & Bolstein, R. (1990). The effect of monetary incentives and follow-up mailings on the response rate and response quality in mail surveys. Public Opinion Quarterly, 54, 346-361.
James, J. M. & Bolstein, R. (1992). Large monetary incentives and their effect on mail survey response rates. Public Opinion Quarterly, 56, 443-453.
Johnson, T. P. (1988). The social environment and health. Unpublished doctoral dissertation, University of Kentucky, Lexington.
Jonsson, C. O. (1957). Questionnaires and interviews: Experimental studies concerning concurrent validity on well-motivated subjects. Stockholm: The Swedish Council for Personnel Administration.
Kalish, C. B. (1974). Crimes and victims: A report on the Dayton-San Jose pilot survey of victimization. Washington, DC: Law Enforcement Assistance Administration: Government Printing Office. (NCJ 13314).
Katz, D. (1942). Do interviewers bias poll results? Public Opinion Quarterly, 6, 248-268.
Kindermann, C., Lynch, J. P. & Cantor, D. (1997). Effects of the redesign on victimization estimates. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 164381).
King, G. & Zeng, L. (2001). Logistic regression in rare events data. Political Analysis, 9(2), 137-163.
Kish, L. (1962). Studies of interviewer variance for attitudinal variables. Journal of the American Statistical Association, 57, 92-115.

Klaus, P. (1999). Crimes against persons age 65 or older, 1992-97. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 176352).
Klaus, P. (2002). Crime and the nation's households, 2002. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 201797).
Klaus, P. & Rennison, C. M. (2002). Age patterns in violent victimization, 1976-2000. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 190104).
Knudsen, D. D., Pope, H. & Irish, D. P. (1967). Response differences to questions on sexual standards: An interview-questionnaire comparison. Public Opinion Quarterly, 31, 290-297.
Kulik, J. A., Stein, K. B. & Sarbin, T. R. (1968). Disclosure of delinquent behavior under conditions of anonymity and non-anonymity. Journal of Consulting and Clinical Psychology, 32, 506-509.
LaFree, G. & Drass, K. A. (1993). Counting crime booms among nations: Evidence for homicide victimization rates, 1956 to 1988. Criminology, 40(4), 769-799.
Landon, E. L., Jr. (1971). Order bias, the ideal ratings and the semantic differential. Journal of Marketing Research, 8, 375-378.
Laub, J. H. & Lauritsen, J. L. (1993). Violent criminal behavior over the life course: A review of the longitudinal and comparative research. Violence and Victims, 8(3), 235-252.
Lauritsen, J. L. (2001). Social ecology of violent victimization: Individual and contextual effects in the NCVS. Journal of Quantitative Criminology, 17(1), 3-32.
Lauritsen, J. L. (2003). How families and communities influence youth victimization. Washington, DC: U.S. National Institute of Justice: Government Printing Office. (NCJ 201629).
Lauritsen, J. L. & Schaum, R. J. (2004). The social ecology of violence against women. Criminology, 42(2), 323-357.
Lauritsen, J. L. & White, N. A. (2001). Putting violence in its place: The influence of race, ethnicity, gender, and place on the risk for violence. Criminology and Public Policy, 1(1), 37-60.
Lauritsen, J. L. & Quinet, K. F. D. (1995). Repeat victimization among adolescents and young adults. Journal of Quantitative Criminology, 11(2), 143-166.

Lehnen, R. G. & Reiss, A. J., Jr. (1978a). Response effects in the National Crime Survey. Victimology, 3(1-2), 110-122.
Lehnen, R. G. & Reiss, A. J., Jr. (1978b). Some response effects of the National Crime Survey. Paper presented at the proceedings of the Survey Research Methods Section, American Statistical Association, Alexandria, VA.
Lehnen, R. G. & Skogan, W. G. (Eds.) (1984). The National Crime Survey: Working papers. Volume II: Methodological studies. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 90307).
Lepkowski, J. M. & Couper, M. P. (2002). Nonresponse in the second wave of longitudinal household surveys. In R. M. Groves, D. A. Dillman, J. L. Eltinge & R. J. A. Little (Eds.), Survey nonresponse. New York, NY: Wiley.
Lewis-Beck, M. S. (1980). Applied regression: An introduction. Beverly Hills, CA: Sage.
Linsky, A. S. (1975). Stimulating responses to mailed questionnaires: A review. Public Opinion Quarterly, 39, 82-102.
Lockerbie, B. & Borrelli, S. A. (1990). Question wording and public support for contra aid, 1983-1986. Public Opinion Quarterly, 54, 195-208.
Lynch, J. P. (2001). Trends in juvenile violent offending from 1980 to 1998: The NCVS perspective. Washington, DC: U.S. Office of Juvenile Justice and Delinquency Prevention: Government Printing Office. (NCJ 191052).
Lynch, J. P. & Cantor, D. (1992). Ecological and behavioral influences on property victimization at home: Implications for opportunity theory. Journal of Research in Crime and Delinquency, 29(3), 335-362.
Lynch, J. P., Berbaum, M. L. & Planty, M. (1998). Investigating repeated victimization with the NCVS: Final report. Washington, DC: U.S. National Institute of Justice: Government Printing Office. (NCJ 193415).
Madans, J. H., Kleinman, J. C., Cox, C. S., Barbano, H. E., Feldman, J. J., Cohen, B., et al. (1986). 10 years after NHANES I: Report of initial follow-up, 1982-84. Public Health Reports, 101, 465-473.
Maltz, M. D. (1999). Bridging gaps in police crime data. Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 176365).
McFarland, S. G. (1981). Effects of question order on survey responses. Public Opinion Quarterly, 45, 208-215.

PAGE 138

129 Mize, J. S., Fleece, E. L. & Roos, C. (1984). Incentives for increasing return rates: Magnitude levels, response bias and format. Public Opinion Quarterly, 48 794799. Mooney, W. H., Pollack, B. R. & Corsa Jr., L. (1968). Use of te lephone interview to study human reproduction. Public Health Reports, 83 1049-1060. Murphy, L. R. & Cowan, C. D. (1976). Effects of bounding on telescoping in the National Crime Survey. In R. G. Lehnen & W. G. Skogan, (Eds.), The National Crime Survey: Working papers. Volume II: Methodological studies Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 90307). Murphy, L. R. & Dodge, R. W. (1970). Report on the household survey of victims of crime: The second pret est, Baltimore, MD Unpublished memorandum, U.S. Department of Commerce, Bureau of the Census, Washington, DC. Nagelkerke, N. J. D. (1991). A note on a general definition of the coefficient of determination. Biometrika, 78 (3), 691-692. Narayan, S. & Krosnick, J. A. (1996). Edu cation moderates some response effects in attitude measurement. Public Opinion Quarterly, 60 58-88. National Research Council. (1976). Surveying crime B. K. Eidson-Penick & M. E. B. Owens. Panel for the Evaluation of Cr ime Surveys. Washington, DC: National Academy of Sciences. National Research Council. (2003). Measurement issues in criminal justice research: Workshop summary. J. V. Pepper and C. V. Petrie. Committee on Law and Justice and Committee on National Statistics Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press. Nederhof, A. J. (1993). The effect s of mail incentiv es: two studies. Public Opinion Quarterly, 47 103-111. ONeil, M. J., Groves, R. M. & Cannell, C. F. (1979). Telephone interview introductions and refusal rates: Experiments in increasing respondent cooperation. Proceedings of the Section on Survey Re search Methods, American Statistical Association. Oksenberg, L., Coleman, L. & Cannell, C. F. (1986). Interviewers voices and refusal rates in telephone surveys. Public Opinion Quarterly, 50 97-111. Ong, M. & Jenks, D. A. (2004). Hispanic Perceptions of community policing: Is community policing working in the city? Journal of Ethnicity in Criminal Justice 2 53-66.

PAGE 139

130 Orne, M. T. (1969). Demand characteristi cs and the concept of quasi-controls In R. Rosenthal & R. L. Resnow (Eds.), Artifact in behavior research New York: Academic Press. Paez, A. L. & Dodge, R. W. (1982). Criminal victimization in the U.S, 1979-80 changes, 1973-80 trends Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 80838). Perkins, C. A. (2003). Weapon use and violent crime, 1993-2001 Washington, DC: U.S. Bureau of Justice Statistics: Gove rnment Printing Office. (NCJ 194820). Persely, C. (1996). The National Crime Victimization survey redesign: Measuring the impact of new methods Paper presented at the proceedings of the Survey Research Methods Section, American Statis tical Association, Alexandria, VA. Presidents Commission on Law Enforcement a nd Administration of Justice. (1967). The challenge of crime in a free society Washington, DC: U.S. Government Printing Office. Rainville, G. & Reaves, B. A. (2003). Felony defendants in large urban counties, 2000 Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 202021). Rand, M. R., Lynch, J. P. & Cantor, D. (1997). Criminal victimization, 1973-1995 Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 163069). Rand, M. R. & Rennison, C. M. (2002). True crime stories? Acc ounting for differences in our national crime indicators. Chance, 15 (1), 47-51. Rand, M. R. & Rennison, C. M. (2004). How mu ch violence against wo men is there? In B. Fisher (Ed.), Violence against women and family violence: Developments in research, practice, and policy Washington, DC: Government Printing Office. Rand, M. R. & Rennison, C. M. (2005). Bigger Is Not Necessarily Better: An Analysis of Violence Against Women Estimates Fr om the National Crime Victimization Survey and the National Violence Against Women Survey. Journal of Quantitative Criminology, 21 (3), 267-291. Rantala, R. R. (2004). Cybercrime against businesses: P ilot test results, 2001 Computer Security Survey Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 200639). Rasinski, K. A. (1989). The effect of ques tion wording on public support for government spending. Public Opinion Quarterly, 53 388-394.

PAGE 140

131 Reaves, B. A. (2001). Felony defendants in large urban counties, 1998 Washington, DC: U.S. Bureau of Justice Statistics : Government Printing Office. (NCJ 187232). Reaves, B. A. & Hart, T. C. (2000). Law enforcement management and administrative statistics, 1999: Data fo r individual state and local agencies with 100 or more officers Washington, DC: U.S. Bureau of Jus tice Statistics: Government Printing Office. (NCJ 184481). Reaves, B. A. & Hickman M. J. (2004). Law enforcement management and administrative statistics, 2000: Data for individual state and local agencies with 100 or more officers Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 203350). Reiss, A. J. (1977a). Analytical studies of victimiza tion by crime using National Crime Survey panel data: Final report Washington, DC: U.S. Law Enforcement Assistance Administration: Govern ment Printing Office. (NCJ 49663). Reiss, A. J. (1977b). Summary of series and nonseries incident reporting, 1972-1975. In R. G. Lehnen & W. G. Skogan, (Eds.), The National Crime Survey: Working papers. Volume II: Methodological studies Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 90307). Reiss, A. J. (1982). Panel study of victimizati on by crime: Final report Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 88148). Rennison, C. M. (unpublished manuscript). Re porting Violence Agains t Hispanics to the Police. Rennison, C. M. (2001a). Criminal victimization, 2001: Changes 200-2001 with trends 1993-2001 Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 194610). Rennison, C. M. (2001b). Violent victimizati on and race, 1993-98 Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 176354). Rennison, C. M. (2002). Hispanic victims of violent crime, 1993-2000 Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 191208). Rennison, C. M. (2003). Intimate partner violence, 1992-2001 Washington, DC: U.S. Bureau of Justice Statistics: Gove rnment Printing Office. (NCJ 197838). Rennison, C. M. & Planty, M. (2003). Nonl ethal intimate partner violence: Examining race, gender, and income patterns. Violence and Victims, 18 (4), 433-443.

PAGE 141

132 Rennison, C. M. & Rand, M. R. (2003a). Criminal victimization, 2002 Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 199994). Rennison, C. M. & Rand, M. R. (2003b). Nonl ethal intimate partner violence against women: A comparison of three age cohorts. Violence Against Women, 9 (12), 1417-1428. Rennison, C. M. & Welchans, S. (2000). Intimate partner violence Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 178247). Rojeck, D. G. (1983). Social status and deli nquency: Do self-reports and official reports match? In G. P. Waldo (Ed.). Measurement issues in criminal justice Beverly Hills: Sage. Rover, L. (1965). The great response bias myth. Psychological Bulletins, 63, 129-156. Sampson, R. J. & Lauritsen, J. L. (1990). De viant lifestyles, proxim ity to crime, and the offender-victim link in personal violence. Journal of Research in Crime and Delinquency, 27 (2), 110-139. Sampson, R. J. & Lauritsen, J. L. (1994) Violent victimization and offending: Individual-, situational-, a nd community-level risk factors. In Albert J. Reiss & Jeffrey A. Roth (Eds). Understanding and Preventing Violence: Social Influences on Violence vol. 3. Washington, DC: National Research Council: National Academy Press. Schaffer, N. (1980). Evalua ting race-of-interviewer effects in a national survey. Sociological Methods and Research, 8, 400-419. Schuman, H. & Converse, J. (1971). The e ffects of black and white interviewers on black respondents in 1968. Public Opinion Quarterly, 35 44-68. Schuman, H. & Presser, S. (1978). Open v. cl osed questions in attitude surveys. Paper delivered at the annual meeting of the Am erican Association of Public Opinion Research. Roanoke, VA. Schuman, H. & Presser, S. (1996). Questions and Answers in Attitude Surveys: Experiments on Question Form, Wording, and Context Thousand Oaks, CA: Sage. Schneider, A. L. (1977). The Portland forwar d records check of crime victims: Final report. Eugene, OR: Institute for Policy Analysis. Segall, M. (1959). The effects of attitude and experience of contr oversial statements. Journal of Abnormal and Social Psychology, 58 61-68.

PAGE 142

133 Singer, E. (1993). Informed consent and su rvey response: A summary of the empirical literature. Journal of Official Statistics, 9, 361-365 Singer, E., Hippler, H. & Schw arz, N. (1992). Confidential ity assurances on surveys: Reassurances or threat. International Journal of P ublic Opinion Research, 4 256-268. Singer, E., Mathiowetz, M. & Couper, M. (1993 ). Privacy and confidentiality as factors in survey participation: The case of the 1990 Census. Public Opinion Quarterly, 57, 465-482. Singer, E., Von Thurn, D. R. & Miller, E. R. (1995). Confidentia lity assurances and response: A quantitative review of the experimental literature. Public Opinion Quarterly, 59 66-77. Skogan, W. G. (1981). Issues in the measurement of victimization Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 74682). Skogan, W.G. & Hartnett, S. M. (1997). Community policing, Chicago style New York: Oxford University Press. Sliwa, G. E. (1977). Analysis of nonresponse rates for various household surveys Unpublished memorandum, U.S. Departme nt of Commerce, Bureau of the Census, Washington, DC. Stata Statistical Software (2003). Rel ease 8.0. StataCorp, College Station,TX. Stock, J. S. and Hochstim, J. R. (1951). A method of measuring inte rview variability. Public Opinion Quarterly, 51 322-34. Sudman, S. & Bradburn, N. M. (1974). Response effects in surveys: A review and synthesis Chicago: Aldine. Taylor, B. M. (1989). New directions for the National Crime Survey Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 115571). Taylor, B. M. & Rand, M. R. (1995). The National Crime Victimization Survey Redesign: New Understandings of Vic timization Dynamics and Measurement Paper presented at the proceedings of the Survey Research Methods Section, American Statistical Association, Alexandria, VA. Tedin, K. L. & Hofstetter, C. R. (1982). The effect of cost and im portance factors on the return rate for single and multiple mailings. Public Opinion Quarterly, 46 ,122128.

PAGE 143

134 Thibaut, J. W. & Kelley, H. H. (1959). The social psychology of groups NY: John Wiley and Sons. Thomas, M. O. & Burns, P. F. (2005). Re pairing the divide: An investigation of community policing and citizen attitude toward the police by race and ethnicity. Journal of Ethnicity in Criminal Justice 3 71-90. Thornberry, T. P. & Krohn, M. D. (2003). Compar ison of self-report and official data for measuring crime. In National Academy of Sciences (Ed.), Measurement problems in criminal justi ce research: Workshop summary. National Academies Press, Washington DC. Thornberry, O. & Scott, H. D. (1973 November). Methodology of a health interview survey for a population of one million Paper presented at the 101st Annual meeting of the American Publish Health Association, San Francisco, CA. Tourangeau, R. & McNeeley, M. (2003). Measuring crime and crime victimization: Methodological issues. In Na tional Research Council (Ed.), Measurement problems in criminal justice research, National Academies Press, Washington, DC. Tucker, C. (1983). Interviewer effects in telephone surveys. Public Opinion Quarterly, 47 84-95. Turner, A. G. (1972). San Jose methods test of known crime victims Washington, DC: U.S. Law Enforcement Assistance Admini stration: Government Printing Office. (NCJ 6869). Turner, A. G. (1976a). Report on 12and 13-ye ar-old interviewing experiment. In R. G. Lehnen & W. G. Skogan, (Eds.). The National Crime Survey: Working papers. Volume II: Methodological studies Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 90307). Turner, A. G. (1976b). The effect of memo ry bias on the design of the National Crime Survey. In R. G. Lehnen & W. G. Skogan, (Eds.). The National Crime Survey: Working papers. Volume II: Methodological studies Washington, DC: U.S. Bureau of Justice Statistics: Gove rnment Printing Office. (NCJ 90307). Turner, A. G. (1977). An experiment to compare three interview procedures in the National Crime Survey. In R. G. Lehnen & W. G. Skogan, (Eds.). The National Crime Survey: Working papers. Volume II: Methodological studies Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 90307).

PAGE 144

135 Turner, C. Lessler, J. & Devore, J. (1992). Effects of mode administration and wording on reporting drug use. In Turner, C. F., Lessler, J. & Gfroerer, J. (Eds), Survey measurement of drug use: Methodological studies Rockville, MD: National Institute on Drug Abuse. U.S. Department of Justice. (1989). Redesign of the National Crime Survey Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 111457). U.S. Department of Justice. (1994). National Crime Victimiz ation Survey redesign: Technical background Washington, DC: U.S. Bur eau of Justice Statistics: Government Printing Office. (NCJ 151172). U.S. Department of Justice. (2003a). Criminal victimization in the United States, 2002: Statistical tables Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office: (NCJ 200561). U.S. Department of Justice. (2003b). The nations two crime measures Washington, DC: U.S. Bureau of Justice Statistics : Government Printing Office. (NCJ 122705). Warchol, G. (1998). Workplace violence, 1992-96 Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 168634). Weisberg, H. F. (2005). The total survey error approach: A guide to the new science of survey research University of Chicago Press: Chicago, IL. Williams, J. A., Jr. (1964). Interviewer-res pondents interaction: a study of bias in the information interview. Sociometry, 27, 338-52. Willimack, D. K., Schuman, H., Pennell, B. & Lepkowski, J. .M. (1995). Effects of a prepaid nonmonetary incentive on response ra tes and response quality in a faceto-face survey. Public Opinion Quarterly, 59, 78-92. Woltman, H. (1975). Recall bias and telescoping in the NCS Unpublished memorandum, U.S. Department of Co mmerce, Bureau of the Census, Washington, DC. Woltman H. F. & Bushery J. (1977a). Resu lts of a study to determine the optimum times to retain a panel in sample. In R. G. Lehnen & W. G. Skogan, (Eds.). The National Crime Survey: Working papers. Volume II: Methodological studies Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 90307).

PAGE 145

136 Woltman H. F. & Bushery J. (1977b). Re sults of the NCS maximum personal visit maximum telephone interview experiment. In R. G. Lehnen & W. G. Skogan, (Eds.). The National Crime Survey: Working pa pers. Volume II: Methodological studies Washington, DC: U.S. Bureau of Jus tice Statistics: Government Printing Office. (NCJ 90307). Woltman, H. F., Bushery, J. & Carstensen, L. (1975). Recall bias and telescoping in the National Crime Survey. In R. G. Lehnen & W. G. Skogan, (Eds.). The National Crime Survey: Working papers. Volume II: Methodological studies Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 90307). Woltman, H. F. & Cadek, G. (1977). Are memo ry biases in the National Crime Survey associated with the characteristics of the criminal incident? In R. G. Lehnen & W. G. Skogan, (Eds.). The National Crime Survey: Working papers. Volume II: Methodological studies. Wash ington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 90307). Ybarra, L. & Lohr, S. L. (2000). Effects of attrition in the National Crime Victimization Survey Paper presented at the proceedings of the Survey Research Methods Section, American Statistical Association, Alexandria, VA. Ybarra, L. & Lohr, S. L. (2002). Estimates of repeat victimization using the National Crime Victimization Survey. Journal of Quantitative Criminology, 18(1), 1-21. Yost, L. R. & Dodge, R. W. (1970). House hold survey of victims of crime: Second pretestBaltimore, Maryland. In R. G. Lehnen & W. G. Skogan, (Eds.). The National Crime Survey: Working papers. Volume II: Methodological studies Washington, DC: U.S. Bureau of Justice Statistics: Government Printing Office. (NCJ 90307).

PAGE 146

Appendices
Appendix A: NCVS-1 Basic Screen Questionnaire

[Facsimile of the NCVS-1 Basic Screen Questionnaire, reproduced as scanned pages in the original document.]
Appendix B: NCVS-2 Crime Incident Report

[Facsimile of the NCVS-2 Crime Incident Report, reproduced as scanned pages in the original document.]
Appendix C: NCVS-551 Rotation Chart

[Form NCVS-551 (3-10-98), U.S. Department of Commerce, Bureau of the Census: NCVS rotation chart for samples J19, J20, and J21, January 1998 through December 2001. The month-by-month grid of rotation group and panel designations is not reproduced here.]
About the Author

Timothy Hart received a Bachelor's degree in Criminal Justice from the University of Florida in 1992 and an M.A. in Criminal Justice from the University of Memphis in 1997. Upon graduation, he was awarded a two-year Presidential Management Fellowship with the federal government. He continued in civil service for four years following the fellowship, working for the Bureau of Justice Statistics as well as the Drug Enforcement Administration. Mr. Hart entered the Ph.D. program at the University of South Florida in 2003.

While in the Ph.D. program, Mr. Hart actively developed his research interests in survey research, applied statistics, and geographic information systems (GIS). He co-authored an article published in the Journal of Quantitative Criminology, designed and administered two community surveys for the Hillsborough County Sheriff's Office, and was awarded a grant from the American Statistical Association to study response effects in the National Crime Victimization Survey.