USF Libraries
USF Digital Collections

Assessment of organizational readiness to change and an intervention program


Material Information

Title:
Assessment of organizational readiness to change and an intervention program
Physical Description:
Book
Language:
English
Creator:
Richards, Kimberly H
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla.
Publication Date:

Subjects

Subjects / Keywords:
organization development
organizational change
readiness to change
Dissertations, Academic -- Psychology -- Doctoral -- USF   ( lcsh )
Genre:
government publication (state, provincial, territorial, dependent)   ( marcgt )
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Summary:
ABSTRACT: The purpose of the study was twofold: to create and assess factors affecting organizational preparation for change and to assist the USF College of Medicine's administrators in developing and implementing an initiative in order to comply with regulations of the accreditation board. Randomly selected program directors participated in three training modules between September and November 2002. The training was targeted toward the development and implementation of learning objectives for medical residents. A panel evaluated the learning objectives developed by both trained and control directors to see whether the training resulted in the development of superior objectives. Additionally, program directors, residents and faculty were surveyed to determine if there was any impact of changes in learning objectives. More specifically, the three groups were surveyed before and after the development of the learning objectives on perceptions of organizational readiness to change and satisfaction with the current resident evaluation system. Respondents included 20 program directors, 56 residents in training and approximately 52 faculty members in the various programs at the USF medical school. Three sets of analyses were conducted. The first of the analyses concerned the immediate outcome of the training. This analysis was based on an expert panel's judgments of the quality of learning objectives generated by the program directors. The second and third analyses concerned more distal outcomes of the training, and focused on (a) perceptions of organization readiness to change and attitudes about resident evaluation, and (b) perceptions of whether any change actually occurred. For both readiness to change and perceptions of resident evaluation, the design was a 2X2X2X3 mixed ANOVA. A single factor (trials, pre and post intervention) was within participants. The two main factors of interest for the study were between participants; the first between factor was the training program (experimental vs. control group); the second between factor was time pressure (facing more time pressure vs. facing less). The last independent variable, position, was included in the analyses to reduce error from the individual's position within the organization (i.e., program director, faculty, resident). The dependent variables included attitudes concerning resident evaluation procedures and organization readiness for change. For the third analysis, perceptions of whether any changes actually occurred served as the dependent variable. Because such perceptions could only be taken meaningfully at posttest, the design was a 2X2 between-participants analysis in which the independent variables were training (trained vs. control) and time pressure (more vs. less). Results indicated that there was no difference in the quality of learning objectives between trained and control groups and no difference in the changes that were reported by residents, faculty and program directors. The training intervention did not have the intended effect, as attitudes toward resident evaluations and perceptions of readiness to change did not improve as a function of the treatment. Time pressure did have an effect on perceptions of readiness to change, but in the opposite direction from what was hypothesized; programs under less pressure had more positive perceptions of readiness to change. There was a change from time 1 to time 2 based on position; residents' perceptions of readiness to change improved over the course of the study while faculty perceptions became more negative and program directors remained the same.
Thesis:
Thesis (Ph.D.)--University of South Florida, 2004.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Kimberly H. Richards.
General Note:
Includes vita.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 86 pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001461867
oclc - 54937876
notis - AJQ2279
usfldc doi - E14-SFE0000224
usfldc handle - e14.224
System ID:
SFS0024920:00001




Full Text
PAGE 1

Assessment Of Organizational Readiness To Change And An Intervention Program

by

Kimberly H. Richards

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy
Department of Psychology
College of Arts and Sciences
University of South Florida

Major Professor: Michael T. Brannick, Ph.D.
Walter C. Borman, Ph.D.
Geoff P. Fabri, M.D.
Carnot E. Nelson, Ph.D.
Douglas Rohrer, Ph.D.
Toru Shimizu, Ph.D.

Date of Approval: March 17, 2004

Keywords: organizational change, organization development, readiness to change

Copyright 2004 Kimberly H. Richards

PAGE 2

DEDICATION Throughout the six years that I have devoted to accomplishing this goal, there has been one person who has traveled this journey with me, each step of the way. He felt the difficulties and pain and experienced the joy as intimately as I have. He sacrificed his own goals and needs to not only support me but sometimes drag me through this process; all without complaint, resentment or a harsh word. When the times got tough, he was there to listen, to encourage, to push me forward. When the times were good, he stood back in the shadows to let me shine; happy knowing that I was becoming the person, the professional that he always knew I was. I have achieved what I set out to, not because of my perseverance or ambition but because my friend, my partner unselfishly devoted all of his strength and love to ensuring that I would. For these reasons, this work, with all the energy, time and passion that was put into it, is dedicated to my best friend, Brian Richards, who became my husband during this journey. Without your undivided attention, constant support, understanding and unconditional love, I would not have been able to make this dream a reality. Now, we have all the time in the world to enjoy this. This is yours as much as it is mine because we walked the road together. Thank you. I love you.

PAGE 3

ACKNOWLEDGEMENTS The current study was undertaken with the tremendous support and dedication of many members of the University of South Florida’s College of Medicine. Many people contributed their time, energy, thoughts and hard work to make this project a successful one. I would specifically like to acknowledge the incredible amount of time and support that Dr. Fabri and Eric Anderson have provided for more than a year on this project; without their steady assistance and heavy involvement, I would not have been able to finish my dissertation. Others at the College of Medicine who deserve recognition are: Beth Hyde-Hood, Anne Coneely, Stacy Silverstein, Elaine Adams, program directors, faculty and residents; Drs. Goldman, Nelson, and Flareau. Thank you for your efforts. Of course, I would like to acknowledge the contributions of my committee and in particular, my advisor, Mike Brannick. Mike wore many hats including guide, editor, consultant and motivator; he was able to provide the right mix of guidance, autonomy and support to help me reach my goals even though I was literally almost 1,000 miles away. Finally, this project could not have been completed in the timeframe it was without the support and cooperation I received from my supervisors and teammates at work. Thank you for supporting both my personal and professional goals.

PAGE 4

i
Table of Contents
List of Tables iii
List of Figures iv
Abstract v
Chapter One Introduction 1
Definition 1
Brief History 2
Models of the change process 3
Lewin’s three-stage model 3
Action research model 4
Purpose and description of study 5
Factors affecting planned change 7
Organizational readiness to change 7
Intervention and Evaluation 11
Intervention 11
Interventions in Organization Development 12
Evaluation of intervention 13
Chapter Two 16
Participants 16
Procedure 16
Intervention 17
Learning Objectives 18
Surveys 19
Chapter Three 22
Learning Objectives 22
Surveys 23
Tests of Hypotheses 25
Readiness to change 25
Attitudes toward resident evaluations 26
Subsequent Analyses 28

PAGE 5

ii
Chapter Four 31
Summary and Interpretation of Findings 31
Learning Objectives 32
Changes to resident evaluations 33
Attitudes and perceptions of readiness to change 36
Implications for Graduate Medical Education 38
Study Limitations 40
Future Research 41
Summary and Conclusions 42
References 57
Appendices 60
Appendix A: Assignments to Treatment Condition and Time Pressure 61
Appendix B: Agenda for Modules 63
Appendix C: Workshop Evaluation 64
Appendix D: Survey of Non-Attendees 65
Appendix E: Learning Objectives Rating Form 66
Appendix F: Resident Evaluation Pre-Survey 69
Appendix G: Resident Evaluation Post-Survey Items 72
Appendix H: Correlation between Items on Pre-Survey 73
Appendix I: Mean Global and Composite Ratings by Program 75
Appendix J: Means and Standard Deviations for Pre-Survey Items 76
Appendix K: Means and Standard Deviations for Post-Survey Items 77
About the Author End Page

PAGE 6

iii
List of Tables
Table 1 Mean ratings for global and composite variable by treatment condition 45
Table 2 Factor pattern matrix between the attitude items and readiness to change items 46
Table 3 Interrater agreement on attitude and perception of readiness scales 47
Table 4 Mean ratings for pre and post composite scores on readiness to change and attitudes toward resident evaluation methods 48
Table 5 Mean readiness scores on pre and post surveys as a function of treatment condition and time pressure 49
Table 6 Mean attitudes and readiness scores by position and time trial 50
Table 7 Mean attitudes on pre and post surveys as a function of treatment condition and time pressure 51
Table 8 Ratings of the extent to which changes have been made in resident evaluation methods 52

PAGE 7

iv
List of Figures
Figure 1. Illustration of pre-post nature of questionnaire administration 53
Figure 2. Effects of interaction between training and time pressure on readiness to change 54
Figure 3. Effects of interaction between training and time pressure on changes made to resident evaluation methods 55

PAGE 8

v
Assessment of Organizational Readiness to Change and an Intervention Program
Kimberly H. Richards
ABSTRACT
The purpose of the study was twofold: to create and assess factors affecting organizational preparation for change and to assist the USF College of Medicine’s administrators in developing and implementing an initiative in order to comply with regulations of the accreditation board. Randomly selected program directors participated in three training modules between September and November 2002. The training was targeted toward the development and implementation of learning objectives for medical residents. A panel evaluated the learning objectives developed by both trained and control directors to see whether the training resulted in the development of superior objectives. Additionally, program directors, residents and faculty were surveyed to determine if there was any impact of changes in learning objectives. More specifically, the three groups were surveyed before and after the development of the learning objectives on perceptions of organizational readiness to change and satisfaction with the current resident evaluation system. Respondents included 20 program directors, 56 residents in training and approximately 52 faculty members in the various programs at the USF medical school. Three sets of analyses were conducted. The first of the analyses concerned the immediate outcome of the training. This analysis was based on an expert panel’s judgments of the quality of learning objectives generated by the program directors. The second and third analyses concerned more distal outcomes of the training, and focused on (a) perceptions of organization readiness to change and attitudes about resident evaluation, and (b) perceptions of whether any change actually occurred.

PAGE 9

vi
For both readiness to change and perceptions of resident evaluation, the design was a 2X2X2X3 mixed ANOVA. A single factor (trials, pre and post intervention) was within participants. The two main factors of interest for the study were between participants; the first between factor was the training program (experimental vs. control group); the second between factor was time pressure (facing more time pressure vs. facing less). The last independent variable, position, was included in the analyses to reduce error from the individual’s position within the organization (i.e., program director, faculty, resident). The dependent variables included attitudes concerning resident evaluation procedures and organization readiness for change. For the third analysis, perceptions of whether any changes actually occurred served as the dependent variable. Because such perceptions could only be taken meaningfully at posttest, the design was a 2X2 between-participants analysis in which the independent variables were training (trained vs. control) and time pressure (more vs. less). Results indicated that there was no difference in the quality of learning objectives between trained and control groups and no difference in the changes that were reported by residents, faculty and program directors. The training intervention did not have the intended effect, as attitudes toward resident evaluations and perceptions of readiness to change did not improve as a function of the treatment. Time pressure did have an effect on perceptions of readiness to change, but in the opposite direction from what was hypothesized; programs under less pressure had more positive perceptions of readiness to change. There was a change from time 1 to time 2 based on position; residents’ perceptions of readiness to change improved over the course of the study while faculty perceptions became more negative and program directors remained the same.
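To make the analytic design concrete, the sketch below illustrates one way such a mixed design could be set up in software; it is not the analysis from the study, and every column name and value in it is hypothetical. A linear mixed-effects model with a random intercept per respondent is used here as an analog of the 2X2X2X3 mixed ANOVA, with trial (pre vs. post) varying within participants and training, time pressure, and position varying between participants.

    # Minimal sketch (not the original analysis): a mixed-effects analog of the
    # mixed ANOVA described above. All column names and simulated values are
    # hypothetical placeholders.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 128  # respondents with data at both time points

    participants = pd.DataFrame({
        "pid": range(n),
        "training": rng.choice(["trained", "control"], n),
        "pressure": rng.choice(["more", "less"], n),
        "position": rng.choice(["director", "faculty", "resident"], n),
    })

    # One row per respondent per trial (long format), with a simulated
    # readiness-to-change composite as the dependent variable.
    long = (
        participants.loc[participants.index.repeat(2)]
        .assign(trial=["pre", "post"] * n)
        .reset_index(drop=True)
    )
    long["readiness"] = rng.normal(3.5, 0.6, len(long))

    # Random intercept per respondent handles the repeated (pre/post) factor;
    # fixed effects cover the between-participants factors and their
    # interactions with trial.
    model = smf.mixedlm(
        "readiness ~ trial * training * pressure + trial * position",
        data=long,
        groups=long["pid"],
    )
    print(model.fit().summary())

Under the same assumptions, the perceived-change outcome, which was available only at posttest, would reduce to an ordinary two-way between-participants model, for example smf.ols("perceived_change ~ training * pressure", data=post_only).fit() on a hypothetical posttest-only table.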

PAGE 10

1 Chapter One Introduction Definition Organization development is a field that is continuously evolving but still amorphous; its boundaries and parameters are ill-defined (Waclawski & Church, 2002). The field itself is not based entirely on theory but has evolved largely through practice. Organization development refers to both a process and a collection of activities aimed at improving organizational functioning, as well as individual well-being (McKenna, 1994). Although many different definitions of organization development have been offered, there is agreement on basic principles as well as certain characteristics that are typical of organization development. These include that the process is based on behavioral science principles, involves planned change, is grounded in values, uses feedback and action research, and has the goal of improving organizational effectiveness. Cummings and Worley (1997) defined organization development as “a systemwide application of behavioral science knowledge to the planned development and reinforcement of organizational strategies, structures, and processes for improving an organization’s effectiveness” (p. 2). Organization development (OD) implies that change will take place, because change from the status quo is necessary to improve functioning and effectiveness. Burke (1982), however, draws a distinction between organization development and organization change. In order for the two to mean the same thing, Burke (1982) requires that the intervention must lead to a fundamental shift in the organization culture or the nature of the organization. Cummings and Worley (1997) clarified this notion by suggesting that organization change is usually in response to external pressure or an event; it is broader in scope than OD because it refers to any kind of change within the organization. The OD process and interventions can be used to manage change but are targeted at bringing about planned change such that knowledge

PAGE 11

2 and skills are transferred to organizational members. Members then use the new knowledge and skills to build capabilities to reach organizational goals and solve problems. For the purposes of this study, the organizational development activities undertaken were aimed at bringing about improved effectiveness and represented the beginning of a fundamental shift in culture for the organization in question. Brief History Rothwell, Sullivan and McLean (1995) noted the field has experienced a gradual evolution that can be viewed from two different perspectives: philosophical and methodological. The theoretical underpinnings can be traced to the Human Resources school of organization theory, which itself evolved in reaction to the classical and neoclassical schools of thought. In classical theories, employees were viewed as parts of a system, and jobs were broken down into tasks in order to increase efficiency. This view, however, ignored the social dynamics influencing employee behavior, which the Hawthorne Studies showed to have a substantial impact on performance (Robbins, 1996). The Hawthorne studies concluded that employees responded with increased productivity despite changes in work environment; after an extended period of observation, researchers realized that workers responded more to how they were treated than to the environment. The Human Resources movement emphasized the idea that humanism and applied social science were important tenets in any organization theory. Other scholars also have acknowledged the Hawthorne studies as a major influence in the development of organization development and change (Burke, 1982; Cummings & Worley, 1997). The methodological perspective emphasizes the influence of applied social science experiments in the evolution of the field of organization development and change. Many OD techniques were introduced from these early experiments, including survey research and feedback, which highlighted the use of information collected from organization members to be used as a basis of problem solving and action planning, lab training (a precursor to team building), which focused attention on dynamics of group interactions, and sociotechnical systems theory, which emphasized the social subsystems in organizations (Cummings & Worley, 1997; Burke, 1982; Rothwell et al., 1995).

PAGE 12

3 Taken together, these two perspectives comprise the theoretical and methodological underpinnings of the organization development and change field. Now that the reader is familiar with the definition and history of Organization Development and change, subsequent sections present information relevant to the current study. In the next section, models of the change process are described, as these form the basis for the process of change used in the study. Next, the purpose and descriptive details of the study are presented in order to provide a basis for understanding the importance of the factors affecting planned change, enumerated in the following section. Finally, the intervention is described as well as the process of evaluating OD efforts. Models of the Change Process A fundamental goal of organization development is to bring about planned change that improves organization effectiveness; therefore, a majority of approaches and interventions rely on theories of planned change (Waclawski & Church, 2002; Cummings & Worley, 1997). Two models served as the foundation for all others that have been developed and used in the OD process. These are Lewin’s three-stage model and the Action research model. Lewin’s Three-Stage Model Lewin’s model is based on three broad steps: unfreezing, change, refreezing (Waclawski & Church, 2002; Cummings & Worley, 1997; Burke, 1984). The premise of this three-stage process is that underlying organizational forces are at work within a system that keep member behaviors stable. In order to change or develop an organization, those forces must be modified; however, there are two opposing sets of forces that must be considered. The first set are those striving to maintain the status quo and the second are those advocating change; to change, an organization must increase or decrease one or the other set of forces. In other words, the balance of these forces must be upset in some fashion. According to Lewin, the most effective strategy for successful change is to lessen those forces maintaining the status quo, rather than strengthening those forces advocating for change (Rothwell et al., 1995). During the unfreezing stage, the objective is to discover and disseminate information that highlights the discrepancy between current behaviors within the

PAGE 13

4 organization and the desired future state. The purpose in doing this is to create dissatisfaction with the status quo, in order to create readiness for change within the organization and promote the motivation to change member behaviors. In the change stage, the change agent and other key members of the organization are working to develop new behaviors and attitudes or to modify old ones to bring the organization closer to its desired state. And finally, during the refreezing stage, support mechanisms are established for the new behaviors and attitudes so change will be long-lasting. Unfortunately, this stage is rarely completed or there is little time for it to take effect before the organization is forced to change again (Waclawski & Church, 2002). This three-stage process is purposely broad and is meant as a general guideline, as it does not focus on specific OD steps or interventions used to implement organization development and change. Action Research Model The Action Research model was also developed, in part, by Lewin, as a cyclical process that uses information uncovered from research within the organization as a basis for action. The emphasis in this model is on data gathering and diagnoses as a means to guide and continuously refine future steps or implementation of change. The model implies a data gathering process that occurs over an extended period of time, collecting and using information regarding similar variables. This multiple measurement process was used in the current study. This model is the basis for most current approaches to OD or planned change (Cummings & Worley, 1997). Additionally, there are contemporary adaptations to these models that include more emphasis on organizational members, not just the change agent or top management, learning about the organization development and change process in an effort to facilitate the process and outcome. There is also more emphasis on developing internal change agents who are organization members. These internal change agents learn or gain competence in change processes to act as a resource to continually change the organization, even after the consultant has withdrawn from a particular project (Cummings & Worley, 1997). Unfortunately, the process is not as ordered or delineated

PAGE 14

5 as the above models would suggest and the nature and efficacy of planned change varies widely between organizations and among differing circumstances. Principles of both of these models were used in guiding the study and in providing information to the organization that assisted members in planning for next steps in the planned change process. First, the study concentrated on the unfreezing stage in Lewin’s three-stage model, in an effort to prepare the organization and its members for change. This effort was attempted, in part, by providing information and guidance to key members of the organization, with the intent for them to act as internal change agents and leaders of the process of planned change. Additionally, in using the principles of the Action Research Model, information was gathered during the unfreezing process, and reported back to the organization’s administration and decision-makers, in an effort to provide the necessary knowledge to continue this change initiative, even after the completion of this particular study. Purpose and Description of Study Given the dual purposes of the study, both a pragmatic effort designed to assist a functioning organization and a scientific investigation undertaken to advance knowledge and understanding of the field, it is necessary to describe the purpose of the study at this point and to explain the circumstances involved. The study was conducted in the USF College of Medicine, in cooperation with key administrators and led by Dr. Fabri, Associate Dean, Graduate Medical Education. Currently, the medical profession is undergoing major changes in educational and training philosophies. More specifically, it is in the process of changing focus from a structure- and process-based educational and training system to a competency- and outcomes-based format (Carraccio et al., 2002). The American Medical Association (AMA), the Association of American Medical Colleges (AAMC), and the Accreditation Council for Graduate Medical Education (ACGME) are driving two projects, one to develop a standardized core curriculum for all residents and another, the Outcomes Project, to evaluate the efficacy of training procedures. The project began in 2001 and is to be fully implemented by 2006; accreditation of all medical schools will be dependent on the development and implementation of a core curriculum and defining and measuring

PAGE 15

6 learning objectives using evaluation tools during resident training. The University of South Florida initiated this effort with a workshop for all program directors to learn basic tools in developing and defining learning objectives and evaluation standards. The workshop took place on June 4, 2002, presented by Dr. Fabri, Associate Dean, Graduate Medical Education, and John Clements, Statistical Research Coordinator. During this workshop, all program directors were charged with the goal of developing learning objectives and new evaluation methods for residents, as a first step in preparation for compliance with new accreditation standards. Program Directors were asked to complete these products in January 2003. The change initiative in the USF College of Medicine was imposed by a force external to the organization and thus, the initiative has faced extreme difficulty in being accepted and implemented by organizational members. Certain graduate medical programs at the USF College of Medicine faced more pressure to implement changes mandated by the ACGME, as 24 of 46 programs were due for site visits within two years of the beginning of this study (between 7/1/02 and 6/30/04), while the remaining 22 programs did not face as much time pressure, given their site visits fell between 7/1/04 and 6/30/07. Thus, for the success of the long-term change project, it was critical to begin the unfreezing process, as described by Lewin, and to create organizational readiness to change. Additionally, it was necessary to provide a preliminary intervention to help prepare for the longer term project, through the learning of change management principles, but also to reach the short term goals for this first step in the process (create measurable and acceptable learning objectives to guide resident evaluations). Although Lewin identified the unfreezing process in planned change as critical to the success of development and change initiatives, few studies have addressed the process of creating organizational readiness to change and the subsequent effects on the efficacy of interventions. The premise of the study, therefore, was to investigate the creation of organizational readiness for change to prepare the organization and its members for a long-term OD and change effort. Additionally, there were two compatible goals. The first was to investigate attitudes of program directors, residents and faculty of the USF Medical School toward learning objectives and resident evaluations. And the

PAGE 16

7 second objective was to discover the efficacy of the intervention modules in achieving the objectives stated above. The intervention (administered in three modules over a time period of approximately 90 days) focused not only on providing guidance in managing organizational change but primarily in developing learning objectives and a more effective evaluation procedure for residents to help meet the standards of the ACGME for accreditation purposes. The intended effect of the modules, therefore, was not only to assist program directors in creating measurable learning objectives and to improve the current resident evaluation system but also to create organizational readiness to change through increased satisfaction among users (including faculty and residents) with, and improved attitudes toward, the resident evaluation system. With the above background information, the following section explores factors that affect the efficacy of planned change efforts in more detail and that provided the foundation for this study. Factors Affecting Planned Change OD is a long-term process that requires all employees to change their attitudes and behaviors regarding current work processes and/or systems. Unfortunately, this prospect is difficult to achieve, at best. The failure of change programs to achieve their intended results is often attributed to employees’ resistance to change (Bovey & Hede, 2001), while many others acknowledge that positive employee attitudes toward change are critical to achieving organization goals (Eby et al., 2000; Weber & Weber, 2000). Organizational Readiness to Change One such attitude or perception that can impact organizational change activities is organizational readiness to change. Organizational readiness can be defined as organizational members’ perceptions, beliefs, attitudes and expectations of the extent to which the organization is ready for and capable of introducing and implementing changes in order to improve performance (Armenakis, Harris, & Mossholder, 1993; Pond, Armenakis & Green, 1984; Weber & Weber, 2000). Organizational readiness to change is similar to Lewin’s unfreezing concept (Armenakis et al., 1993; Eby et al., 2000), which is a “process by which organizational members’ beliefs and attitudes about a pending change are altered so that members perceive the change as both necessary and likely to be successful” (Armenakis et al., 1993, p. 422). As Lewin pointed out, unfreezing is

PAGE 17

8 necessary to prepare the organization and its members for the change initiative; therefore, creating organizational readiness for change is a critical initial step in the change process (Rashford & Coghlan, 1994). Mirvis (1983) stated that predictors of the adoption of change programs include leader and staff perceptions of problems at the beginning of change and their attitudes toward it. Member buy-in to the change process is critical as attitudes toward change impact the efficacy of OD interventions (Eby et al., 2000; Pond et al., 1984). The reason is that resistance to change is a common reaction and this resistance must be overcome in order to achieve any change (Eby et al., 2000). The lack of organizational readiness can be a precursor to resistance to change and, therefore, it is critical to understand employee perceptions of organizational readiness both to comprehend and prevent resistance to change and also as a step in successful implementation of change (Armenakis et al., 1993; Church, Margiloff, & Coruzzi, 1995; Eby et al., 2000; Weber & Weber, 2000). Unfortunately, many researchers do not assess the perceptions, attitudes and expectations of organizational members concerning organizational readiness to change prior to planning and implementing change efforts (Pond et al., 1983; Weber & Weber, 2000). Additionally, other studies on organizational readiness lacked certain desirable design features such as a control group (Weber & Weber, 2000) or longitudinal data to determine how these perceptions may change over time (Eby et al., 2000; Weber & Weber, 2000). This study incorporated those design features and thus, provided important new information concerning the role of organizational readiness in change efforts. The creation of organizational readiness to change, then, is a necessary component of the unfreezing process. Only after organizational readiness to change is established will the change program have an opportunity to be successful. The question is how to create this readiness to change. Rashford and Coghlan (1994) stated that effective unfreezing requires three elements: disconfirmation of the present state of organizational functioning (or dissatisfaction with the status quo), arousal of anxiety to levels that are sufficient to motivate people toward new behavior, and the provision of support and direction to help members change attitudes and behavior.

PAGE 18

9 Others emphasize the first element, a need to create cognitive dissonance between the present and desired state of the organization, or dissatisfaction with the way things are done in the early stages of change so that organizational members are not satisfied with what they know, which will cause discomfort and provide a motivation to learn new behaviors or approaches (Anderson, 2000; Spiker & Lesser, 1995). Spiker & Lesser (1995) suggested that employees must understand the need for change within the organization and the consequences for continuing to do business in the normal way before dissatisfaction will arise. Otherwise, organizational members will have no incentive for expending energy and risking personal loss in changing the way they accomplish their work. Certainly, dissatisfaction with the status quo is one step toward preparing the organization for change, however, there are specific prescriptions for creating organizational readiness for change. First, Armenakis et al. (1993) suggested the primary mechanism for creating readiness is creating dissatisfaction with the status quo. The change agents must sell a message to members that illustrates the discrepancy between the present and desired states of the organization as well as bolster the collective efficacy for change. In fact, a study conducted by Coch & French (1948) showed that proactive, frank discussions about the need for readiness to change were necessary to help change the attitudes and behaviors of organizational members (Armenakis, et al., 1993). All of these strategies to create readiness for change point to a program that presents members with information regarding how the current functioning of the organization is not achieving its maximum, the logic and rationale for the change, why and how this change will result in improved functioning, discussions as to the benefits to groups and individuals within the organization (and consequently, how their personal risks will be rewarded) and the need for being prepared to deal with the changes. In fact, there is empirical support that shows after employees have been trained and shown how the change effort will impact them, they demonstrated more understanding and support for the change effort (Weber & Weber, 2000). In turn, this increases the chances that subsequent change efforts will be successful because it leads to improved self-efficacy among members, according to Bandura, and thus make expectancy of success at later

PAGE 19

10 stages more likely, therefore, leading to readiness to change (Pond, et al., 1984; Latham, 2001). Further, when efforts were made to clarify specific goals regarding the change efforts, employees exhibited more positive attitudes, which led Weber & Weber (2000) to conclude that goal clarity may be one key to creating organizational readiness to change. The experimental modules, in this study, included discussions of how the current functioning of the residency program was not achieving its potential, a logic and rationale for implementing learning objectives and corresponding changes to the resident evaluations, and reasons for and ways that the change would result in improved functioning. In turn, this should have affected the perceptions of residents and faculty (Weber & Weber, 2000). Given the above: H1: The perceptions of organizational readiness to change will improve in programs whose directors participate in the training modules. And H2: Attitudes toward resident evaluations will improve in the programs whose directors participate in the modules The reason is that program directors who participated in the modules were likely to produce superior learning objectives and to make corresponding changes to resident evaluation systems based on the information learned within the modules. If this was the case, changes in the resident evaluation systems should result in the above difference in attitudes. In addition to the above characteristics, experiencing a sense of urgency is also useful to creating organizational readiness and would have differing effects on change conditions, as well as the nature of the readiness program (Armenakis, et al., 1993; Spiker & Lesser, 1995). Given that there were two groups within the experimental condition, one group facing urgent time pressure to implement changes and the other facing little time pressure to implement changes, it followed that: H3: Perceptions of organizational readiness to change will experience greater improvement in programs that face more time pressure as compared to perceptions in program that face less time pressure.

PAGE 20

11 H4: Attitudes toward evaluation systems will experience greater improvement in programs that face more time pressure as compared to programs that face less time pressure. H5: There will be an interaction between training and time pressure on readiness to change. Training is expected to produce disproportionately large readiness ratings in the high time pressure group. And H6: There will be an interaction between training and time pressure on perceptions of resident evaluation. Training is expected to result in disproportionately satisfied ratings in the high time pressure group. Intervention and Evaluation Intervention The USF College of Medicine, and the entire medical education field, began steps toward a paradigm shift from a process-based educational model to a process- and outcomes-focused model. This process, however, was slated to require approximately four years to complete, which is not unusual in many OD efforts of this magnitude. Although the pressures for change were external to the organization, mandated through the ACGME’s accreditation requirements, the specific change efforts were led by key organizational members within USF’s College of Medicine. The efforts that took place over the past 18 months, to develop and implement learning objectives for each rotation required of medical residents and to define the core competencies all residents must master, represented only one phase in the overall change process. The focus of the study, therefore, was only on the beginning phase and not the entire process of the change initiative, which is the domain of future research. The program proposed for this phase of the process was only one in a series of continuing efforts designed to, collectively, result in the organizational culture and paradigm shift to a new educational model. The purpose of the change program was not to achieve the fundamental change called for to be effective in 2006 but rather to begin the change process by preparing the organization and its members for change and by assisting key members with the technical tasks required to begin the change efforts. This was but one

PAGE 21

12 program among many, collectively considered an intervention, which, according to Cummings and Worley, is a “sequence of activities, actions and events intended to help an organization improve its performance and effectiveness” (p. 141). Interventions in Organization Development Although there is little specific knowledge or research concerning how to design specific interventions within organizations, there are three general criteria of effective interventions (Cummings & Worley, 1997), which were used as a guideline in designing and implementing the program in this study. Those criteria include: the extent to which the intervention is relevant to organizational members and fits the needs of the organization, the degree to which the intervention is based on knowledge that is expected to lead to specific outcomes, and the extent to which the program transfers knowledge on how to manage change to organizational members. As stated previously, the program in this study had two objectives: to provide technical assistance in writing and implementing reasonable, understandable and measurable learning objectives and to provide program directors with knowledge in how to manage change within their departments. The design of this program, specifically the three modules within it, incorporated elements of the three criteria for effective interventions stated above and therefore, it was hypothesized that: H7: Program directors who participate in the modules will produce better quality learning objectives than those who do not participate, as measured on the criteria established by the USF College of Medicine. H8: Changes are more likely to have been made in the resident evaluation system in those programs whose directors participated in the modules than in programs whose directors did not participate in the modules. Given that 24 of 46 programs faced significant time pressure to create and use these learning objectives, to satisfy accreditation requirements, H9: Program directors facing more time pressure are more likely to implement change in resident evaluations. And

PAGE 22

13 H10: There will be an interaction between training and time pressure on perceptions of changes in resident evaluation. Evaluation of Intervention In order to test these hypotheses, it was necessary to conduct an evaluation of the change program. One of the critical components hypothesized to determine the effectiveness of the program was the attitudes of residents and faculty toward the evaluation systems, along with their attitudes regarding organizational readiness to change. Several scholars note that complete assessments of programs should include an evaluation of organizational members’ attitudes toward the program and intended results, as attitudes, perceptions and beliefs can influence behavior (Cammann, Fichman, Jenkins, & Klesh, 1983; Lawler, Nadler & Mirvis, 1983; Mirvis, 1983). Further, shared beliefs among organizational members and social norms can weaken or support intentions toward action and these attitudes offer leaders the opportunity to know if the program will be adopted or not (Lawler et al., 1983; Mirvis, 1983). Although it was only one piece of a larger process, it was still necessary to conduct an evaluation of the effectiveness of this program, which goes beyond measuring the attitudes of organizational members. Evaluation of a program is a planned event that gathers information to analyze and provide feedback to those responsible for the change and to organizational members about the effect of the progress on the change effort (Beckhard & Harris, 1977; Cummings & Worley, 1997). Program evaluation is an activity that is rarely conducted in OD efforts and this assertion is supported by research (Hanson & Lubin, 1995; Martineau & Preskill, 2002). The reasons for this are numerous, including a lack of time, effort and resources but it is also due to several difficulties, including a need for complex designs, difficulty in finding measurable items that reflect the appropriate goals and a difficulty in knowing or predicting when results will surface (Hanson & Lubin, 1995). This study attempted to provide a systematic evaluation of an early-stage intervention program.

PAGE 23

14 Evaluation was essential to this particular change effort because this program was only one of several steps that must be taken in order to cause the fundamental change discussed earlier. Evaluating interventions not only provides information that tells of the value of the program but indicates what is and is not working, thereby indicating the need for modification as the change effort progresses (Cummings & Worley, 1997; Martineau & Preskill, 2002). Beckhard & Harris (1977) and Cummings & Worley (1997) stated two purposes of evaluation; the first is a total system performance review to assess the overall impact of a change program and the second purpose is to monitor the effects of specific interventions in order to guide the implementation process. The former focuses on outcomes of the change effort and comparing those to the goals of the change effort and desired conditions that were established prior to the initiation of the effort. The latter refers to the assessment of specific actions and whether these have produced the outcome intended. In this study, the focus is on the second purpose of evaluation, as the entire change process will not be completed for several years after the completion of this study. The projected length of the entire change process precluded gathering information about the effect of the change on performance; however, as stated above, the purpose of the evaluation for this study was primarily formative. Interventions are intended to effect changes that result in specific outcomes or goals; however, progress toward these goals is indicated through achievement of intermediate goals (Mirvis, 1983). One intermediate goal that was measured in this study was the unfreezing of the organization in preparation for the change stage. Several researchers agree that the most effective evaluation plans are designed in phases to collect data at multiple points in time and at short intervals to provide updates and feedback of the progression of change efforts (Cummings & Worley, 1997; Martineau & Preskill, 2002). By doing so, this provides formative feedback that is critical to guide further action and plan for the next step in implementation of change programs. In this study, it was crucial to know if the change program created organizational readiness for change before the execution of any further interventions, or these were likely to fail. In addition, summative feedback was provided using the data

PAGE 24

15 from the survey concerning resident and faculty satisfaction with the evaluation systems of residents. A comparison of attitudes of residents and faculty between those who were under the management of program directors who were trained or not trained should help to determine the efficacy of the intervention described within. As a review, program directors who participated in the modules were hypothesized to produce superior quality learning objectives and to be more likely to implement changes to resident evaluation procedures, thereby resulting in more positive attitudes of residents and faculty toward those procedures. In conclusion, the study focused on the topic of Organization Development and change in the medical education field. Specifically, it was designed to fill a gap in the literature about factors that affect the change process, namely organizational readiness to change. The USF College of Medicine was in the beginning stages of change and thus, the first priority was to unfreeze the organization in preparation for the next phase in the change cycle. The training program described within was intended to accomplish that objective through the teaching of principles of managing change to key organizational members who were the de facto leaders of the change. At the same time, another objective was to provide technical assistance necessary to begin the change process. The following section describes the method by which this happened.
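As a rough illustration of the trained-versus-control comparison just described (again, not code or data from the study; the table and column names are invented), mean attitude scores at each survey wave can be summarized by training condition, with the pre-to-post difference per condition approximating the training-by-trial effect of interest.

    # Hypothetical sketch: mean attitude toward resident evaluation at each
    # survey wave, split by whether a respondent's program director attended
    # the training modules. Values below are invented placeholders.
    import pandas as pd

    surveys = pd.DataFrame({
        "respondent": [1, 1, 2, 2, 3, 3, 4, 4],
        "training":   ["trained", "trained", "control", "control",
                       "trained", "trained", "control", "control"],
        "wave":       ["pre", "post"] * 4,
        "attitude":   [3.1, 3.6, 3.2, 3.1, 2.9, 3.4, 3.0, 3.0],
    })

    # Mean attitude by condition and wave; the per-condition pre-to-post
    # change approximates the formative comparison described above.
    means = surveys.pivot_table(index="training", columns="wave", values="attitude")
    means["change"] = means["post"] - means["pre"]
    print(means)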

PAGE 25

16 Chapter Two Method Participants The population of potential participants consisted of 44 program directors overseeing 46 programs in the University of South Florida College of Medicine; approximately 500 residents in training and approximately 450 faculty members in various programs at the USF College of Medicine. All program directors and faculty members had medical degrees but varied in their areas of specialty. The length of time in residence for students varied from one to six years. Although all of the directors, faculty and residents were invited to participate, a total of 35 program directors completed the first survey; although 22 completed the second, only 20 completed surveys at time 1 and time 2; 119 faculty members completed the first survey; although 63 completed the second, only 52 completed surveys at both time 1 and time 2; 120 residents completed the first survey; although 74 completed the second, only 56 completed surveys at time 1 and time 2. Twenty-six residents, who had completed a pre-survey finished their residency and left the USF College of Medicine before they completed a post-survey. There were 128 total participants who indicated their attitudes at both time points during the study; 15% were program directors, 41% were faculty and 44% were residents; 32% female and 67% male. Procedure The study was a pre-posttest survey design in which attitudes toward resident evaluation methods and perceptions of readiness to change were assessed over a period of one academic year. Each program was scheduled for a site visit to review accreditation status over a period of five years beginning in July of 2002; 24 of 46 programs were due for site visits within two years of the start of the 2002 academic year (7/1/02); the remaining 22 were due for review between 7/1/04 and 6/30/07. The time pressure to implement learning objectives and changes to resident evaluation methods for

PAGE 26

17 accreditation purposes was hypothesized to affect the effectiveness of the training program and the attitudes of residents and faculty toward any changes (or lack of them) taking place. Therefore, a blocked design was used in which half of programs that were due for review within two years were randomly assigned to the experimental condition and the other half were assigned to the control condition. The same procedure was used for programs due for review after two years. After blocking for time pressure, programs were randomly assigned to condition; therefore, 12 programs in the experimental condition (50%) faced significant time pressure regarding accreditation, while the other half in the experimental condition did not. Of the 24 programs in the control group, 12 (50%) faced significant time pressure regarding accreditation requirements. Please see Appendix A for a listing of programs and their assigned conditions for training and time pressure. Intervention The intervention consisted of three modules or workshops, presented once a month beginning in September 2002 and running through November 2002. Each module addressed the development of learning objectives for resident training as well as incorporated discussions of the necessity for change to graduate medical instruction as mandated by the accreditation requirements of the ACGME. See Appendix B for an agenda of each module. Of the 22 programs assigned to the experimental condition, program directors or representatives from 14 programs attended the first module, 14 programs were represented at the second module and 8 at the last module; therefore, a total of 17 of the 22 programs assigned to experimental condition actually participated in at least one of the workshops and therefore, participated in the experimental treatment group. The purpose of the first module was to review the goals set out at a workshop in June 2002, led by Dr. Fabri, to introduce, review and discuss the ACGME requirements and changes for accreditation. As this module set the tone for the others, one main purpose was to secure buy-in from the program directors by focusing their attention on the benefits of developing learning objectives. Further, the importance of learning objectives for adult education was discussed; additionally three qualities of objectives


Additionally, three qualities of objectives that make them effective were highlighted. During each module, program directors discussed and debated the necessity of the changes to the graduate medical education programs. Originally, time was allotted for program directors to practice writing learning objectives during the training modules; however, in order to allow more time for the discussions mentioned above, practice time was eliminated from all three modules. The second module illustrated the ways that objectives are linked to and used for performance evaluation. Specifically, the role of goals and feedback was explored, which set the stage for introducing various evaluation tools and behaviorally based self-assessments in the third and final module. Program directors were also introduced to the criteria against which the learning objectives would be measured by the expert panel. Finally, the focus of the third module was on implementation of changes to resident evaluation based on learning objectives. Material was presented on how to link measurable objectives to specific evaluation tools. In addition, strategies and tips for managing resistance to change were presented and discussed. Many of the hypotheses were based on the logic that the training modules would produce a difference in the quality of learning objectives. In an effort to determine whether the program directors believed the training modules had provided valuable information, an evaluation survey was sent to module participants (see Appendix C). Additionally, a brief survey was sent to all who were invited but did not attend the training modules (see Appendix D). Unfortunately, no evaluations were returned during the course of the study.

Learning Objectives

All program directors were asked to develop rotation- and level-specific objectives to meet the ACGME accreditation standards. The USF College of Medicine requested that these be completed by January 2003 for all programs, regardless of scheduled ACGME site visits, as this was a new requirement for accreditation status. The College of Medicine began collecting these objectives in January 2003; however, the majority were not completed and turned in until April 2003, and for three programs (Anesthesiology-Pain Management, Cardiovascular Disease, and Geriatric Medicine) the objectives were not completed during the course of the study. Two of these programs were in the experimental condition, and two were in the more time pressure condition.


In four programs (OB/GYN-Oncology, Otolaryngology-Head and Neck, Surgery-Hand, and Surgery-Vascular), objectives for subspecialties were embedded within the major program. For example, Hand is a subspecialty of Surgery, and the Surgery-Hand objectives were embedded within the Surgery objectives. Because the two programs share a program director, the objectives for Internal Medicine-Pediatrics and Pediatrics were combined into one document. These programs or subspecialties were not included in the analyses because separate ratings could not be determined for the subspecialties. After the majority of objectives were completed and received, the objectives were rated according to predetermined criteria. See Appendix E for the rating forms and definitions of the criteria. Each of the criteria was discussed in detail with the program directors who attended the workshops. A panel of five subject matter experts rated each of the major programs, 14 in total. Due to resource and time constraints, random pairs of the five members rated the subspecialties. Each panel member was paired with each of the others at least two times. Four of the five panel members were practicing physicians and members of the faculty at the USF College of Medicine. One member worked for the USF College of Medicine and is an expert in educational principles. In order to aid the rating panel in understanding the criteria and to familiarize them with the scales, the researcher conducted a one-hour training session with four of the five members. The fifth member received the materials and training from a colleague.

Surveys

In accordance with the design of the study, initial surveys were administered between September 2002 and March 2003; post-training surveys were completed between May and August 2003. The initial wave of surveys reflected respondents' attitudes toward resident evaluation methods prior to the completion of learning objectives; therefore, they represented attitudes prior to any opportunity for programs to make changes to resident methods of evaluation. The second wave of surveys reflected respondents' attitudes after program directors had completed learning objectives and had the opportunity to implement any changes to resident evaluation methods or rotations.


For example, program directors may have created the learning objectives and then provided them to residents at the beginning of the rotation. Or, program directors may have designed a more customized evaluation procedure to determine whether residents demonstrated the necessary mastery components to be considered proficient in the target area of the rotation. Participants were asked to indicate their agreement with 23 items representing attitudes toward resident evaluation methods and 5 items concerning perceptions of organizational readiness to change. The follow-up survey contained the same material as the initial survey, with the addition of four items regarding any changes that had been implemented concerning evaluation methods within the six months prior to completing the second wave survey. Please see Appendixes F and G for copies of the pre and post surveys. Program directors who attended training workshops completed the surveys at the workshop prior to the beginning of the first module; those in the control group or who did not attend the workshop completed surveys via either paper or electronic methods during the same time period. Faculty and resident surveys were collected via paper methods beginning in September 2002 and ending in March 2003. See Figure 1 for a summary of when and what each group of people (i.e., program directors, faculty, and residents) completed as part of the study. Data collection in the first wave continued longer than expected due to low response rates; however, this did not jeopardize the design of the study, as all pre-surveys were collected before the learning objectives were turned in by program directors. Program directors were not able to provide learning objectives to residents or implement changes based on objectives prior to actually writing those objectives; therefore, the time period for collecting data was longer than expected but appropriate to the design. Due to the extended nature of the data collection at time 1, the timeframe for data collection at time 2 was also shifted. Surveys for time 2 were collected primarily through electronic means, although paper surveys were also distributed and some were returned in this fashion. All program directors, faculty, and residents received multiple invitations to complete the second wave of surveys, regardless of whether or not they completed the first survey. Data collection began in May 2003 and continued through August 2003 in order to gather enough second surveys to complete the analyses.
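As an illustration of the blocked assignment described above under Procedure, the following is a minimal sketch of how the randomization could be carried out; it is not the procedure actually used, and the program lists and seeds are placeholders.

```python
import random

def assign_block(programs, seed=None):
    """Randomly split one time-pressure block into experimental and control halves."""
    rng = random.Random(seed)
    shuffled = list(programs)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"experimental": shuffled[:half], "control": shuffled[half:]}

# Placeholder program lists; in the study, 24 programs were due for review by
# 6/30/04 (more time pressure) and 22 were due between 7/1/04 and 6/30/07.
more_pressure = ["Program A", "Program B", "Program C", "Program D"]
less_pressure = ["Program E", "Program F", "Program G", "Program H"]

assignments = {
    "more_pressure": assign_block(more_pressure, seed=1),
    "less_pressure": assign_block(less_pressure, seed=2),
}
```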


Chapter Three

Results

Learning Objectives

Because of the large number of learning objectives per program, it was not possible to rate each individual objective; therefore, learning objectives were rated as a collective set for each program or subspecialty. A panel of five experts rated the learning objectives on five criteria for each of the 14 general programs. First, panel members rated the extent to which each objective included three components: a performance, conditions, and a criterion. The presence of these three components was critical to establishing the extent to which the learning objectives resulted in performance standards that were measurable. Subsequently, the three ratings were combined into a composite criterion labeled 'measurable'. Next, panel members rated the objectives on the extent to which they (a) were understandable to residents, (b) were reasonable to expect given the level of proficiency of the resident, and (c) related to residents' subsequent abilities to practice medicine. A global rating addressed the extent to which the set of learning objectives for a particular program or subspecialty provided a base for appropriate evaluation of residents. A composite variable was the average of all other ratings described previously. Reliability estimates (Cronbach's alpha, in which judges were considered items and sets of objectives were considered targets, or ratees) were calculated for each scale: alpha = .67 for measurable, .24 for understandable, .55 for reasonable, -.14 for related to subsequent ability to practice, .69 for the global rating, and .76 for the composite ratings, for 5 judges across 14 programs. For the 31 subspecialties, panel members were randomly paired with each other. Each person was paired with all other raters at least two times; therefore, each pair rated at least two and not more than three of the remaining subspecialties.
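The judge-level reliabilities reported above treat the five judges as "items" and the 14 programs as targets. The following is a minimal sketch of that computation, assuming a hypothetical 14 x 5 matrix of ratings for one criterion.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha with columns as 'items' (here, the five judges)
    and rows as targets (here, the 14 general programs)."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                          # number of judges
    item_vars = ratings.var(axis=0, ddof=1)       # variance of each judge's ratings
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the summed ratings
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: one row per program, one column per judge, holding each
# judge's global rating for that program.
# alpha_global = cronbach_alpha(global_ratings)
```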


Because each pair rated together only two or three times, intraclass correlations for the pairs of raters were estimated from the data on the 14 general programs, in which all five judges rated all programs. The estimates of the intraclass correlation coefficients, ICC(2,2), were .26 for measurable, .07 for understandable, .25 for reasonable, -.07 for related to subsequent ability to practice, .34 for global, and .28 for composite ratings. Appendix I details the mean global and composite ratings for each program. For the one hypothesis concerning learning objectives, the dependent variables consisted of (a) the global measure of the learning objectives for each program and (b) the composite variable that consisted of all ratings on the learning objectives. The hypothesis that the training would have a positive effect on the quality of learning objectives was not supported. For the global dependent variable, there was not a main effect for training using the intended treatment groups, F(1, 36) = 1.16, MSE = .967, nor using the actual treatment groups, F(1, 36) = .023, MSE = 1.00. For the composite dependent variable, there was not a main effect for training using the intended treatment groups, F(1, 36) = 1.05, MSE = .318, nor using the actual treatment groups, F(1, 36) = .078, MSE = .327; there was no statistical difference in the quality of learning objectives between the training and control groups. Means for each group are displayed in Table 1.

Surveys

The survey was an original scale constructed to measure two main constructs: attitudes toward resident evaluation methods and perceptions of organizational readiness to change. The items concerning perceptions of readiness to change were developed through a search of the literature on the topic. Of the few studies addressing readiness to change, none actually published the scale used to assess those perceptions, and other studies were geared toward a different audience (e.g., excessive drinkers), so those scales were not appropriate for this study. Given this challenge, the researcher devised an original five-item scale appropriate for this particular audience.


Fox et al. (1988) provided a description of the scale they used, which was consistent with others on the topic; therefore, the scale used in this study was modeled from that description. Two subject matter experts reviewed the content prior to administration to assure coverage of the relevant domain of attitudes. Each expert reviewed the content separately, and both concurred that the survey was thorough in its coverage of attitudes toward resident evaluation. Additionally, they sorted each item into one of the two scales and agreed on 24 of the 28 items. To establish construct validity, an exploratory factor analysis was computed on the first round of surveys (N = 226) to determine the underlying factor structure of the data; the maximum likelihood extraction method followed by an oblimin rotation with Kaiser normalization was used. The results were consistent with the hypothesized structure of the data: the factor analysis showed two separate factors, organizational readiness to change and attitudes toward evaluation procedures. The correlation between the two factors was .48. After examining the correlation matrix (see Appendix H), item 15 was removed because it did not have a significant relationship with the other items and could be interpreted ambiguously. Item 11 also was not significantly correlated with many other items; however, it was important to the study conceptually and so was retained in the analysis. The preceding analysis supported the existence of two factors, and thus the survey was divided into two scales; see Table 2 for the pattern matrix. The reliability of the 22-item attitude scale was alpha = .94 when computed using all pre-surveys, .95 when using only the data of those who completed both pre and post surveys, and .95 when using only completed post surveys. The reliability of the 5-item scale concerning perceptions of organizational readiness to change was .82 when computed using all pre-surveys, .84 when using only the data of those who completed both pre and post surveys, and .85 when using only completed post surveys. To determine whether the constructs were based on individual differences or on differences due to position within the university (program director, faculty, or resident), rwg was computed for the whole group and for each of the positions described above in several different ways.
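For reference, this interrater agreement index can be computed directly from a group's item responses. The sketch below implements the multi-item rwg(J) of James, Demaree, and Wolf (1984) for a 4-point response format; the input array is a hypothetical respondents-by-items matrix for one group (e.g., all residents).

```python
import numpy as np

def rwg_j(item_data, n_options=4):
    """Multi-item interrater agreement r_wg(J).

    item_data: respondents x items array for one group.
    n_options: number of response options; the no-agreement (uniform) null
    distribution has variance (A**2 - 1) / 12.
    """
    item_data = np.asarray(item_data, dtype=float)
    J = item_data.shape[1]                            # number of items in the scale
    sigma_eu = (n_options ** 2 - 1) / 12.0            # expected variance under uniform responding
    mean_var = item_data.var(axis=0, ddof=1).mean()   # mean observed item variance
    agreement = J * (1 - mean_var / sigma_eu)
    return agreement / (agreement + mean_var / sigma_eu)
```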


Interrater agreement on the attitude construct was above .90 for all groups; see Table 3. Descriptive information for the items, using the data of those participants who responded to both the pre and the post survey (N = 128), is provided in Appendixes J and K. For further analyses, two composite variables were created for each respondent. The composite attitude variable was created by averaging the 22 items that comprised the attitude scale, and the same formula was used to compute the composite readiness score for each individual. Descriptive information for each composite variable is reported in Table 4.

Tests of Hypotheses

Hypotheses concerned three general areas: (a) learning objectives, (b) perceptions of readiness to change and attitudes toward resident evaluation, and (c) whether any change occurred in resident evaluation methods. All hypotheses were tested both according to the intended treatment group (how each program was assigned to condition) and according to the actual treatment group. Some program directors in the experimental condition did not attend any training modules and, for the second set of analyses, were therefore grouped into the control condition. For the six hypotheses concerning perceptions of readiness to change and attitudes toward resident evaluation methods, only data from those respondents who completed both the pre and post surveys were used. Three hypotheses concerned the main effects of, and interaction between, time pressure and treatment condition on perceptions of organizational readiness to change. The position factor, the individual's position within the organization (i.e., program director, faculty, or resident), was included in the analyses in order to reduce error.

Readiness to Change

When analyzed using the intended treatment group data, hypothesis 1 was not supported: there was not a main effect for trials, F(1, 113) = .015, MSE = .09, and there was no interaction between treatment condition and trials, F(1, 113) = .825, MSE = .09. There was no difference in perceptions of readiness to change from time 1 to time 2, regardless of whether respondents were in the intended training or control group. See Table 5 for group means.
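A minimal sketch of the analysis behind these tests is shown below, assuming long-format survey data with hypothetical column names. pingouin's mixed_anova handles one within-subjects and one between-subjects factor at a time, so the sketch reproduces only the trials x training portion of the design; the full analysis also crossed time pressure and position as between-participants factors.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long format: one row per respondent per trial, with columns
# 'id', 'trial' (pre vs. post), 'training' (intended treatment group), and
# 'readiness' (the 5-item composite).
df = pd.read_csv("readiness_long.csv")

aov = pg.mixed_anova(data=df, dv="readiness", within="trial",
                     subject="id", between="training")
print(aov[["Source", "F", "p-unc"]])
```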


Hypothesis 3 was not supported. Although there was a main effect for time pressure, F(1, 113) = 3.90, MSE = .25, p = .05, it was in the opposite direction from what was hypothesized: individuals in programs facing less time pressure (M = 3.00, SE = .05) had more positive perceptions of organizational readiness to change than those facing more time pressure (M = 2.86, SE = .05). Finally, Hypothesis 5 was not supported. There was an interaction between treatment group and time pressure, F(1, 113) = 3.94, MSE = .248, p = .05; however, it was not in the hypothesized direction. Perceptions of readiness to change were most positive among control group members who faced less time pressure (M = 3.05, SE = .07), as compared with control group members facing more time pressure (M = 2.77, SE = .07); the training group means were the same regardless of time pressure (M = 2.95, SE = .07). See Figure 2. Although not hypothesized, there was an interaction between trial and position, F(2, 113) = 3.59, MSE = .09, p = .031: changes in perceptions of readiness to change from time 1 to time 2 depended on position. Program directors did not change significantly, but faculty perceptions became significantly less positive while resident perceptions became significantly more positive. See Table 6 for means of both attitudes and readiness perceptions by position and trial. When the same analyses were performed using the actual treatment groups instead of the intended treatment groups, no significant effects were found. There was not a main effect for trials, F(1, 114) = .004, MSE = .09, and no interaction between treatment condition and trials, F(1, 114) = .000, MSE = .09. There was not a main effect for time pressure, F(1, 114) = .88, MSE = .264, and no interaction between actual treatment condition and time pressure, F(1, 114) = .47, MSE = .264. Please see Table 5.

Attitudes toward Resident Evaluations

The following three hypotheses concerned the effects of training group, trials (pre and post surveys), and time pressure on attitudes toward resident evaluation methods.


The analyses were first performed with the intended treatment groups and then with the actual treatment groups, and the position factor was again included to reduce error. Hypothesis 2 stated that attitudes toward resident evaluations would improve in programs whose directors participated in the training modules. It was not supported: there was not a main effect for trials (pre, post), F(1, 116) = .012, MSE = .144; attitudes toward resident evaluation systems did not change from time 1 to time 2 (M = 2.70, SD = .47 and M = 2.71, SD = .62). There was no interaction between trials and training group, F(1, 116) = .084, MSE = .144 (for the training group at pre and post, M = 2.74, SD = .41 and M = 2.74, SD = .61; for the control group at pre and post, M = 2.68, SD = .53 and M = 2.68, SD = .63), and there was no difference between treatment groups in their attitudes toward resident evaluation methods, F(1, 116) = .00, MSE = .456; see Table 7. Hypothesis 4 was not supported, as there was not a main effect for time pressure, F(1, 116) = 2.13, MSE = .456 (M = 2.81, SE = .07 for less time pressure; M = 2.67, SE = .07 for more time pressure), nor an interaction between trials and time pressure, F(1, 116) = .008, MSE = .144. Hypothesis 6 was not supported, as there was no interaction between training and time pressure, F(1, 116) = 1.31, MSE = .456; see Table 7. The hypotheses were also tested using the actual treatment groups; however, none of the hypotheses were supported using these groups. There was not a main effect for training, F(1, 117) = .07, MSE = .47, nor a main effect for trials, F(1, 117) = .015, MSE = .142 (see Table 4), nor an interaction between training and trials, F(1, 117) = .08, MSE = .142. Concerning hypothesis 4, there was not a main effect for time pressure, F(1, 117) = 1.45, MSE = .47 (M = 2.99, SE = .06 for less time pressure; M = 2.87, SE = .07 for more time pressure), nor an interaction between time pressure and trials, F(1, 117) = .034, MSE = .142. Finally, hypothesis 6 was not supported, as there was not an interaction between training and time pressure, F(1, 117) = .01, MSE = .47; see Table 7. Finally, three hypotheses concerned whether or not any changes were actually made to resident evaluation methods. One item on the post survey asked the extent to which any changes had been made in the previous six months; this item was used as the dependent variable for the following hypothesis tests.
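Because perceptions of change were measured only at posttest, these last hypotheses reduce to a 2 x 2 between-participants analysis. A sketch of such a test using statsmodels, with hypothetical file and column names, is shown below.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical columns: 'change' is the single post-survey item on the extent
# of change in the prior six months; 'training' (trained vs. control) and
# 'pressure' (more vs. less time pressure) are the between-participants factors.
post = pd.read_csv("post_survey.csv")

model = ols("change ~ C(training) * C(pressure)", data=post).fit()
print(anova_lm(model, typ=2))   # main effects and the training x pressure interaction
```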


Hypothesis 8 was not supported, in that there was no difference between training and control programs in the changes reported to have been made to resident evaluation methods; please see Table 8. This was true using both the intended treatment groups, F(1, 121) = 1.86, MSE = .900, and the actual treatment groups, F(1, 121) = .030, MSE = .879. Hypothesis 9 was not supported, in that there was not a main effect for time pressure on changes made to resident evaluation systems, F(1, 121) = .358, MSE = .900. Finally, Hypothesis 10 was supported, in that there was an interaction effect between treatment condition and time pressure on changes to resident evaluation methods using the actual treatment groups, F(1, 121) = 4.78, MSE = .879, p < .05; see Table 8 and Figure 3. Using the intended treatment groups, F(1, 146) = .088, MSE = .892, however, there was not an interaction.

Subsequent Analyses

After analyzing the above results, the researcher concluded that further analyses were warranted to determine whether there were any effects that might be present but not demonstrated through the above tests. First, the researcher analyzed the data of only those 70 respondents who indicated that some type of change had taken place in the resident evaluation system of their respective programs. Specifically, the effects of trials (pre and post), training condition, and time pressure were analyzed; position was included as an independent variable to reduce error, as in the original analyses described above. For perceptions of readiness, there were no significant main effects, indicating that there was no difference from time 1 to time 2 in perceptions of readiness among those who had indicated a change, F(1, 59) = .118, MSE = .09 (for pre and post readiness, M = 2.96, SD = .34 and M = 2.95, SD = .42). Additionally, there was no difference between the training and control groups, F(1, 59) = .077, MSE = .198 (M = 2.99, SE = .05 for the control group; M = 3.00, SE = .08 for the training group), nor any main effect for time pressure, F(1, 59) = .321, MSE = .198 (M = 3.04, SE = .06 for less time pressure; M = 2.93, SE = .07 for more time pressure). Finally, there was no interaction between training and time pressure conditions, F(1, 59) = .872, MSE = .198 (for training groups under more and less pressure, respectively, M = 2.96, SE = .13 and M = 3.02, SE = .10; for control groups under more and less pressure, respectively, M = 2.92, SE = .06 and M = 3.05, SE = .08).


For attitudes toward resident evaluation methods, the same analyses were run using the data of the 70 respondents who indicated that some change had taken place. There were no significant main effects, indicating that there was no difference from time 1 to time 2 in attitudes toward evaluation methods, F(1, 60) = 1.823, MSE = .139 (for pre and post attitudes, M = 2.74, SD = .41 and M = 2.79, SD = .55). Additionally, there was no difference between the training and control groups, F(1, 60) = 2.60, MSE = .321 (M = 2.74, SE = .06 for the control group; M = 2.91, SE = .10 for the training group), nor any main effect for time pressure, F(1, 60) = .12, MSE = .321 (M = 2.90, SE = .08 for less time pressure; M = 2.72, SE = .08 for more time pressure). Finally, there was no interaction between training and time pressure conditions, F(1, 60) = .001, MSE = .321 (for training groups under more and less pressure, respectively, M = 2.74, SE = .17 and M = 3.03, SE = .12; for control groups under more and less pressure, respectively, M = 2.72, SE = .08 and M = 2.76, SE = .10). Also, in the interest of exploring all the information that would help inform the research and the USF College of Medicine, correlations were run between the global and composite learning objective ratings for each program and the composite attitude and readiness scores at time 2 for each individual. Results indicated no significant correlations except between the mean global and composite ratings of the learning objectives (r = .88, p = .00). Not only were the correlations between the two measures of learning objective quality and attitudes (r = -.126, p = .31 with global; r = -.075, p = .55 with composite) and readiness (r = -.087, p = .49 with global; r = -.093, p = .45 with composite) non-significant, the relationships were almost zero. From an Organization Development perspective, this is an interesting relationship to have explored, because most of the hypotheses were based on the reasoning that the quality of the learning objectives would lead to a greater likelihood of making changes, which in turn would influence attitudes toward resident evaluations and perceptions of readiness to change.


Finally, because the number of respondents completing both pre and post surveys was substantially smaller than the number who completed pre-surveys only, the researcher decided to determine whether a difference in attitudes and perceptions of readiness existed between these two groups. Results of the ANOVA demonstrated that no significant differences existed between those who had completed only the pre-survey and those who had completed both the pre and post surveys. For attitudes toward resident evaluation methods at time 1, F(1, 223) = 1.43, MSE = .195 (M = 2.68, SD = .41 for pre-only respondents; M = 2.74, SD = .47 for pre and post respondents). For perceptions of readiness to change at time 1, F(1, 259) = 1.57, MSE = .194 (M = 2.83, SD = .47 for pre-only respondents; M = 2.88, SD = .40 for pre and post respondents).
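For completeness, the correlational check reported earlier in this chapter could be computed by merging the panel's program-level ratings onto individual respondents, as sketched below. The file and column names are hypothetical.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical inputs: program-level panel ratings and individual time-2 composites.
programs = pd.read_csv("program_ratings.csv")   # columns: program, global_rating, composite_rating
people = pd.read_csv("time2_composites.csv")    # columns: program, attitude_t2, readiness_t2

merged = people.merge(programs, on="program", how="inner")
for quality in ("global_rating", "composite_rating"):
    for outcome in ("attitude_t2", "readiness_t2"):
        r, p = pearsonr(merged[quality], merged[outcome])
        print(f"{quality} vs. {outcome}: r = {r:.2f}, p = {p:.2f}")
```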


Chapter Four

Discussion

Organizational development and change are becoming increasingly critical to the successful adaptation of organizations to their environments. Unfortunately, resistance to change is a significant barrier to ensuring that such changes accomplish their intended goals. Lewin suggested that the architects of change often attempt to impose new procedures before the organization and its members are ready for that change. Thus, his theories stated that the forces upholding the status quo must first be lessened before wide-scale change can be introduced, implemented, and accomplished. This study attempted to foster readiness to change in members of the University of South Florida's College of Medicine in anticipation of a change mandated by the Accreditation Council for Graduate Medical Education.

Summary and Interpretation of Findings

The study focused on two main factors that may have influenced both perceptions of readiness to change and attitudes toward resident evaluation methods. First, the training was aimed at providing program directors with knowledge and skill in developing measurable and specific learning objectives for each resident rotation. In developing the learning objectives, each program would define specific and measurable goals that residents needed to accomplish in order to demonstrate mastery and competence in a particular rotation. By delineating such goals, objective assessments of relevant accomplishments could be developed that would be specific and unique to each rotation at each level. The development and implementation of such learning objectives and new resident evaluation methods was the change that was expected to help alter both attitudes and perceptions of readiness to change. Additionally, the ACGME mandated that learning objectives be written and recorded for each rotation prior to each program's next site visit to determine accreditation.


Therefore, a few programs experienced more time pressure to develop the learning objectives. The increased sense of urgency in those programs was expected to result in more motivation to make changes.

Learning Objectives

First, we will explore the results for the learning objectives. There was no statistical difference in the mean quality of the learning objectives between programs that received training in writing objectives and those that did not receive such training. The lack of difference between the groups is a first indication that the intervention was not effective. Reasons that may explain the lack of difference between the groups are described next. The quality of learning objectives is indicative of the ease with which the objectives could be translated into objective and customized evaluation methods for each rotation. The quality also indicates the amount of time and effort that the program directors put into creating them, as well as their recognition that learning objectives are important and useful to achieving the programs' educational goals. One possible explanation of this result is that the program directors who attended the training workshops were not convinced of the need for or benefit of having specific learning objectives. In considering the organizational structure, it is also likely, in at least some cases, that the program directors did not write the learning objectives. Instead, faculty or other College of Medicine members may have written them without the benefit of the knowledge and training provided in the three workshops. An alternative explanation is that the workshops were not effective in teaching the program directors to create learning objectives that were easily translatable into evaluation methods. Due to time constraints, there were no practice sessions for actually writing learning objectives during the workshops, as originally designed. Perhaps the quality of learning objectives would have been greater in the trained group if they had had more opportunity to practice writing the objectives and to receive individualized feedback and assistance. In scrutinizing the judges' evaluations of the learning objectives, however, the reliability of the data is suspect and therefore could call the results into question.


One of the five judges was consistently more severe in his ratings of the objectives, and there was little variability in two of the items that comprised the dependent variables. Such characteristics of the judges and items could have contributed to the lack of reliability. In an attempt to increase the reliability of the judges' evaluations, additional judges were considered. The Spearman-Brown formula was used to determine the number of judges needed to reach a more acceptable reliability coefficient of .75; the calculations revealed that an additional 13 judges would be required. Unfortunately, this option was not feasible because there were not enough additional raters in the College of Medicine with the required medical expertise who would not also have some type of bias. Consequently, two subject matter experts reviewed each program's objectives a second time. During this review, they looked for qualitative differences in the quality of the objectives between the trained and control programs, specifically for evidence that the programs in the trained group had objectives that were more measurable, specific, and appropriate as a base for objective resident evaluations. This qualitative review revealed that there did not seem to be a pattern of higher quality objectives among the programs that participated in the training. With that information, we were able to proceed with the analyses, confident that the lack of significant differences in the quality of the objectives was not solely a result of the lack of statistical reliability of the judges' evaluations. Therefore, although contrary to the intended effects of the study, some confidence should be placed in the conclusion that the training had little effect, if any, on the quality of the learning objectives produced.

Changes to Resident Evaluations

The hypotheses suggested that those in the trained group would produce better quality learning objectives and be more likely to actually make changes to the rotations (i.e., provide residents with learning objectives before the rotation began). The reasoning was that better quality learning objectives were defined as those that were measurable, reasonable, and understandable to the audience, and the trained group was given instruction concerning how to write quality objectives, as described above.


They were also given instruction on how measurable learning objectives could be used as the foundation for more specific but objective evaluations of resident knowledge and skill at the end of each rotation. The trained group, therefore, should have produced better quality learning objectives by way of the training received; and because those objectives would have been measurable, specific, and understandable, instructors would have provided them to residents at the beginning of the rotation. In addition to this change, directors could have gone a step further and instituted evaluation methods that measured the standards set forth in the learning objectives. The results, however, showed that there was no statistical difference between trained and control groups in the changes that were reported to have been made in programs. A chi-square test was non-significant; however, the results were in the predicted direction. In examining the frequencies, 55% of those in the control group (52 out of 94) indicated that some type of change in the rotation had occurred, which was less than the expected count of 53.4, while 61% of those in the trained group (19 out of 31) indicated some change, which was higher than the expected count of 17.6. The trend, then, is for the actual trained group to have made more changes in rotations. Perhaps there was not enough time for more changes to have been made prior to the second survey period, and if surveyed again, the results might show that the experimental group made more actual changes in the rotations. Another explanation is that the effect may have been a small one, and the study may have lacked the power to detect it at a significant level. In examining attitudes toward evaluation methods and perceptions of readiness to change from time 1 to time 2, attitudes were basically neutral, although edging toward positive, and remained that way through the duration of the study. Perceptions of readiness to change were not significantly or meaningfully different from time 1 to time 2 but were just barely on the positive side. More specifically, on the 4-point scale used, the average attitude score was on the positive side of neutral (M = 2.70 and 2.71, respectively), even though a neutral option was not given and a '3' represented 'agree' with the item. For the readiness to change items, the same 4-point scale was used without a neutral option; both the pre and post mean readiness composites (M = 2.90 and 2.88, respectively) were very close to a '3', or 'agree', on the scale.


As a group, the respondents indicated that they believed the USF College of Medicine was capable of and willing to make the necessary changes to improve resident education; however, the feelings were not strongly positive. Although as a whole there was not a difference over time in attitudes or perceptions, there was a significant difference over time by position (i.e., program director, faculty, or resident) in perceptions of readiness to change. Residents became significantly more positive in their perceptions, faculty became significantly less positive, and program directors did not change in their perceptions over time. Resident perceptions improved from being closer to neutral to very close to positive. During the same time, faculty perceptions started just at positive but decreased to be closer to neutral. Interestingly, faculty and residents crossed, such that resident perceptions at time 1 matched faculty perceptions at time 2 and vice versa. Although significant, there does not seem to be a plausible explanation for why perceptions of readiness would change by position in the manner that they did with only the passage of time. The same pattern of time by position interaction was not evident when considering the effects of the training or time pressure; therefore, although changes did take place in perceptions of readiness to change by position, they did not change as an effect of the intervention. Given that a 4-point scale was used, perceptions of readiness could only improve one point, which would mean that most or all respondents would have to feel strongly that the USF College of Medicine was ready, willing, and able to make organizational changes. Therefore, the baseline perceptions of readiness to change were high enough to make it difficult to show significant changes from time 1 to time 2 in the study. Given that 57% of respondents indicated that a change in resident evaluations had occurred, it would seem logical that a corresponding improvement in attitudes or perceptions of readiness would also have occurred. The time period between the actual change and the time participants were surveyed may not have been sufficient for the change to affect attitudes significantly. Another reason that perceptions of readiness to change did not increase may have been that the items were targeted toward the whole College of Medicine.


Because residents and faculty tend to specialize and spend the majority of their time interacting with others in the same program, it may have been more appropriate to target these items toward the specific program.

Attitudes and Perceptions of Readiness to Change

Unfortunately, as was discovered in the analyses, the intervention did not have the intended effect. The intervention employed to foster readiness to change and improve attitudes toward resident evaluation methods was a series of workshops targeted at program directors. The intervention had two main purposes: (a) to discuss and highlight the aspects of the current resident evaluation methods that were not aligned with the educational goals of the ACGME and (b) to provide instruction and guidance in creating objective and measurable learning objectives. If program directors recognized how the current evaluation methods were not meeting the educational goals of the College of Medicine, then they would be more inclined to share this with their faculty and residents, as well as be more inclined to make changes to the evaluation methods in their programs. From this, faculty and residents would see these changes in methods and therefore have a more positive perception of the organization's willingness and ability to make positive changes, which would improve perceptions of readiness to change and attitudes toward evaluation methods. As described earlier, there was no significant difference in the quality of the learning objectives and no statistical difference in the changes that were made to programs between trained and control groups. Logically, it follows that no differences in attitudes or perceptions of readiness were observed between the trained and control groups. After careful consideration of the training, there are a few reasons that may explain why it did not appear to have the intended effect. First, sample size was a challenge in all aspects of this study and therefore, as with other results, there may not have been enough power to detect an effect if it were a small one. In conjunction with that, the magnitude of the changes made in programs may not have been enough to cause a difference in attitudes. In fact, when looking at the data for those programs that were reported to have made changes, the changes were reported to be slight most of the time, rather than extensive or even moderate.


On the other hand, the training may not have involved the right people or fostered a sufficient amount of involvement. The program directors were the focus of the study as the change agents; however, the quality or frequency of communication between program directors, faculty, and residents may have been insufficient for the directors to exert enough influence on attitudes to cause the changes expected. Perhaps the workshops should have included both faculty and residents, as members of the organization, to discuss and highlight the aspects of current resident evaluation methods that were not optimal. This may have led to more acceptance of the need for some change and a greater willingness to accept the changes mandated by the ACGME. More importantly, however, program directors were not necessarily convinced of the need to make changes to the educational program for residents when the study began. Due to time constraints and limited accessibility to certain members of the organization, the workshops did not allow a sufficient amount of time to fully explore the reasons to make a change to the educational method. Instead, this change was forced upon the College of Medicine by the accrediting agency, an outside force, which already made the program directors more susceptible to resisting the change. During the course of the discussions in the workshops, it became clear that a substantial amount of resistance already existed among the program directors, many of whom were not persuaded that there was a need to make a change. Given this information, perhaps it was too far into the change process to try to create readiness to change. Although the training alone does not appear to have been effective in influencing attitudes and perceptions of readiness to change, time pressure did have some effect. For those programs facing less time pressure, perceptions of readiness to change were significantly more positive than for those facing more time pressure, but only when analyzing the result using the intended treatment groups. Time pressure did not have an effect on attitudes toward resident evaluation methods. This effect is the opposite of what was expected, specifically that increased time pressure would engender a sense of urgency that would be translated into a change in attitudes and perceptions. Perhaps the group facing less pressure did not feel forced to change and therefore was more willing to consider the benefits of changing.


More interesting, though, was that the interaction between training and time pressure had an effect such that the control group under less time pressure had more positive perceptions of readiness to change; again, this was found when using the intended treatment group rather than the actual treatment group. The actual treatment group included only those programs whose directors actually attended the workshops; these were the directors who were trained in writing learning objectives and who had discussed the need for change. Conversely, the intended treatment group included all programs originally assigned to training or no training; in analyses using intended groups, the trained group included programs whose directors did not participate in the modules and therefore never received the information provided at the workshops. The fact that the effect was not present when using the actual treatment group, however, suggests that the effect was not necessarily influenced by the training and may have been due to differences in individual programs.

Implications for Graduate Medical Education

This section is included to provide some practical recommendations for other graduate medical education administrators. Consequently, this section is based on a qualitative review of the circumstances, information, and experiences throughout the study. As all of graduate medical education continues to undergo a paradigm shift in the nature of resident evaluation, the suggestions in this section could prove useful. First, if instituted in the same manner, a program like this one will likely not be successful in creating the desired change, specifically a change to resident education methods and evaluations. There are a few recommendations that may help make such programs more successful. The ultimate barrier was that many of the program directors did not believe that a change to the educational model was necessary. The fact that the change was mandated by an outside agency without the buy-in of faculty and program directors made this resistance even stronger. In order to build any commitment to the change, workshops should focus on the need for the change in the educational model. Each session should focus on the circumstances and facts that have led to the paradigm shift.


At the same time, these discussions should include opportunities for participants to explore their own opposition to such change. After these concerns are explored, they should be addressed and turned into solutions. To prevent the sessions from becoming unproductive, charge the participants with devising a plan together to implement the change by the end of the three- to four-month period. Also be certain to ask participants to share good reasons for the change, even if they are resisting; this will facilitate their consideration of the benefits of change. The sessions should include both program directors and faculty from all programs; however, to keep the sessions manageable, there should be only a few programs per session. Additionally, the sessions should be approximately 90 minutes in length to allow for more thorough discussion and should occur more frequently, approximately twice per month over three to four months. Including faculty in these sessions will help facilitate their buy-in and facilitate communication of changes to residents. After the series of workshops to devise a plan for implementing change to resident evaluation methods, another series of workshops should address modifying the learning objectives to be measurable. During these workshops, the participants should also prepare the objectives to be shared with residents at the beginning of each rotation. Ideally, both program directors and faculty will be included in these sessions as well; because both groups are extremely busy, the responsibility should be shared among a group of people in any one program. After the program directors and faculty develop the learning objectives for each program, continuously encourage them to present these to residents at the beginning of each rotation. Instead of surveying only one time after the introduction of learning objectives, continue to survey residents and faculty after each "semester." If the surveys begin to show improvement in attitudes, publicize these results to everyone in the College of Medicine so they know that there have been some successes as a result of the change. In connection with this, it is recommended that administrators identify at least a few metrics to measure the success of each program, or utilize existing ones. Continue to track these metrics and, over the course of surveying, attempt to establish a relationship, through statistics, between the changes made and the outcomes measured.


This may also provide evidence of the success of the change and help to convince those who may still be resisting. Effecting a culture change, as is currently the challenge in graduate medical education, is a process that requires time (often several years), persistence, and continuous reinforcement. If administrators follow the suggestions contained within, their campaign to implement a similar change will likely be more successful than this experiment has been up to this point.

Study Limitations

There were a number of limitations in this research that may account, at least in part, for the lack of significant findings. Because this was a field study, the researcher did not have explicit control over all factors and circumstances that could have affected the results. As program directors have demanding responsibilities and schedules, there was limited time and access to this group. Ideally, the workshops would have been both more extensive in each session and more frequent. Increased time and exposure would have allowed greater exploration of both the need for change and opportunities for practicing the skill of creating learning objectives. In turn, the limited exposure may have decreased the chances that program directors would act as change agents and thus influence the faculty and residents. Further, it would have been ideal to meet with a representative cross-section of both faculty and residents to discuss the need for and benefits of change, as this may have provided more momentum and support for any effort program directors put into the objectives. Additionally, the faculty and residents might have acted as additional resources in writing learning objectives. Another reason that the learning objectives were not differentiated in terms of quality may be, in part, the insufficient time that program directors have to devote to such an effort in light of other responsibilities that may seem more critical, particularly in the short term. Another critical limitation was the small number of programs and directors with which we had to work: only 22 programs were in the experimental condition, and only 17 of those program directors actually participated in at least one of the workshops.


Additionally, the researcher encountered extreme difficulty collecting data from faculty and residents at both times during the study. At time 2, many program directors, faculty, and residents did not complete the second survey. After further examination, though, there were no significant differences in perceptions of readiness or attitudes between those who only completed the survey at time 1 and those who completed the survey at both time 1 and time 2. In combination, this could have led to a limited ability, due to limited power, to detect what might have been small effects, particularly in the actual treatment group. However, this does not account for the significant effects that were found for time pressure and for the interaction between training and time pressure on actual changes made to programs. Related to the above, because data collection was extended after the end of the academic year, 26 residents who had completed the first survey finished their residency prior to having the opportunity to complete the second survey. Finally, the lack of acceptable reliability estimates on the learning objectives made it difficult to draw sound conclusions regarding the hypotheses. As mentioned, however, two experts who reviewed the learning objectives again determined that a qualitative difference was not apparent in the quality of the learning objectives between the experimental and control groups. For this study, we were therefore able to place some confidence in the analyses. These limitations led to ideas for further research in this area.

Future Research

Although this study did not produce the intended results, the creation of readiness to change is still an important area to pursue for further research. Future research should continue to incorporate several of the design elements used in the current study, including a control group, the longitudinal design, an appropriate readiness scale, and surveying all types of members of the organization. However, to improve the effectiveness of the intervention, future studies should incorporate the following suggestions. First, the sessions should occur more frequently and focus initially on an in-depth exploration of the reasons that the current system is not meeting the goals of the organization.


In addition, participants should be asked to describe new methods or procedures that will address the concerns that are uncovered about the current method. Afterward, participants should have the opportunity to practice, within the sessions, whatever new skill they are asked to learn. If that is not feasible, then researchers should consider implementing a pilot program with a small unit in order to highlight the effectiveness and utility of the benefits that could be reaped by making the suggested changes. Coupled with the above, members from various positions or levels within the organization should be included in the intervention in order to incorporate more input, use more communication channels, and have more change agents. Finally, in order to maximize participation and get a clearer picture of the effect of the intervention, considerable attention should be focused up front on providing appropriate incentives. Also, multiple forms of data collection should be utilized simultaneously to increase response rates.

Summary and Conclusions

In summary, the purpose of the study was to "unfreeze" the organization, that is, to create organizational readiness to change in preparation for a major cultural shift in graduate medical education, and to examine the subsequent effect on attitudes toward resident evaluation methods. The intervention did not have the intended effect, and we have explored many factors that may account for this result. The most plausible explanation is related to Lewin's three-stage model, specifically the unfreezing stage, and is described below. According to Lewin's model and others, any changes in attitude or perceptions of readiness were dependent on how the program directors in the trained condition acted as change agents and communicated with both faculty and residents. If they were not proponents of change, actively pointing out the need for and benefits of change to faculty and residents, then little difference in attitudes could be expected. In addition, even if the new learning objectives and more objective evaluation methods were implemented, if the faculty and residents had little input concerning these changes or did not fully understand the reason for such changes, their attitudes may not have changed in the intended direction.


As additional analyses illustrated the lack of a direct relationship between the quality of learning objectives and attitudes toward resident evaluations and perceptions of readiness, this conclusion seemed even more plausible. This critical connection was also demonstrated when looking at the attitudes and perceptions of just those who reported changes to resident rotations: even though these respondents indicated change in their programs, their attitudes toward resident evaluation methods and perceptions of organizational readiness to change did not differ significantly from time 1, which also supports the conclusion above. Unfortunately, the extent to which the program directors acted as true change agents was the critical aspect that the researcher could not control. The need for change was imposed by an outside force, the ACGME, without the acceptance of key members within the organization. In order to help prepare the organization for change, it was necessary to help members within the organization understand and accept the reasons a change was necessary. To create readiness to change, many members of the organization would have to be convinced of the need for change as well as persuaded that the chosen change was the appropriate one. First, while program directors were included in the workshops, other members who had influence were not included, such as faculty, residents, and administrators. The key in this study was to have the directors act as change agents, and it became apparent in the workshops that at least some of the directors were not convinced that any change was needed. Next, the workshops were brief and there were only three, so there was little time or opportunity to explore the need for change and to build acceptance around the benefit of the change and a unified course of action to move the organization in that direction. In addition, not all program directors who were invited actually attended, and even fewer attended all three sessions. Even if program directors were convinced of the need for change by the end of the workshops, in order to influence perceptions of readiness to change in faculty and residents, the directors would need to impart their ideas to them. As was indicated anecdotally, few, if any, spent much time discussing the need for change or their plans for changes to resident evaluations with a large number of other organizational members. Therefore, the ideas and the readiness to change sentiment did not filter down to residents or faculty.


Further, because the workshops were split between two topics, there was little opportunity to practice the skill of developing measurable learning objectives. This lack of practice and the lack of acceptance of the need for this change to graduate medical education procedures combined to prevent program directors from taking the time necessary to develop truly measurable objectives when any objectives would satisfy the accreditation requirements. Instead of commitment to change, there seemed to be only compliance with accreditation requirements. Although the intervention did not succeed in creating readiness to change or substantially improving attitudes toward evaluation methods among members of the USF College of Medicine, the study was able to contribute to the collective knowledge on this topic. First, the study reinforced the idea that change agents who are organizational members play a crucial role in the "unfreezing" process. These change agents must first explore and accept the need for change prior to assuming the role. Together, the change agents and other key members of the organization must determine a "shared" plan or vision of what the new desired outcome or functioning of the system is to be and cultivate this vision among all other organizational members. The study also reinforced the idea that change imposed by an outside agency or organization seems to foster increased resistance. With both internal change agents and any external change agents, a large investment of time and energy is necessary up front, in more frequent meetings, to explore the ideas and vision discussed above. Additionally, while it is important to have the support of those considered to be top management, it also appears important to recruit representatives throughout the organization to be change agents. In this study, the unfreezing process was entangled with an actual change, although this was not originally intended. The creation of learning objectives was a change in and of itself that first required the support of all program directors as a first step down the right path to organizational change. The study, therefore, taught us that it is important to focus solely on the creation of readiness to change: gaining acceptance around the idea that the traditional way of doing things is not working as well and that there is a new vision for organizational functioning. Additionally, readiness to change seems to be a fluid concept that is difficult to influence and to measure, and it is necessary to keep surveying it over a long period of time.

Additionally, readiness to change seems to be a fluid concept that is difficult to influence and to measure, and one that needs to be surveyed repeatedly over a long period of time; perhaps two administrations over the course of one year are not enough. For the purposes of this study, readiness to change was assessed at the organization level, the USF College of Medicine. Perhaps it should have been aimed more locally, in this case at business units, departments, or programs. The specific organizational structure and culture must be taken into consideration when determining what level readiness-to-change items should target. This idea is also supported by the finding that readiness to change appears to be an individual-differences variable rather than one influenced solely by position within the organization. Finally, there are few studies dedicated to this topic, despite its practical and increasing importance in the world of business today. Investigation of the readiness-to-change construct is critically important in helping organizations increase their chances for a successful cultural change in response to numerous business challenges.

A major contribution of this study was the development of scales measuring both attitudes toward resident evaluations and perceptions of organizational readiness to change. Prior to this study, there were few, if any, published readiness scales to help researchers and practitioners properly measure this construct. Given the high reliability of each scale and the factor analysis demonstrating the two constructs, both scales can be used in further research or in more practical applications. Additionally, there are few studies that involve the medical community, and this study made a contribution to the field by applying I/O principles and scientific methods to the study of organization development in a medical setting. As the attitude scale toward resident evaluation methods demonstrated construct validity and acceptable reliability, graduate medical education programs across the country can use it to assist in their own efforts toward organization development and change. The lessons learned and the acknowledged limitations of this study provide critical information for future research in this field and with this particular audience.
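For readers who wish to reproduce this kind of scale analysis with their own survey data, the sketch below illustrates how internal-consistency reliability (Cronbach's alpha) and a two-factor exploratory factor analysis with an oblique rotation might be computed. It is an illustrative reconstruction only, not the analysis code used in this study; the file name, the column names (pre1 through pre28), the item-to-scale assignment (taken from the pattern reported in Table 2), and the use of the Python factor_analyzer package are all assumptions.

    # Illustrative sketch only: not the analysis code used in this study.
    # Assumes a CSV of pre-survey responses with 4-point items in columns pre1 ... pre28;
    # the item-to-scale assignment follows the pattern reported in Table 2.
    import pandas as pd
    from factor_analyzer import FactorAnalyzer  # assumed third-party package (pip install factor_analyzer)

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of scale total)."""
        complete = items.dropna()
        k = complete.shape[1]
        item_variances = complete.var(axis=0, ddof=1)
        total_variance = complete.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    survey = pd.read_csv("pre_survey.csv")  # hypothetical file name

    attitude_items = [f"pre{i}" for i in range(1, 24) if i != 15]  # attitude-toward-evaluation items
    readiness_items = [f"pre{i}" for i in range(24, 29)]           # readiness-to-change items

    print("Attitude scale alpha:  ", round(cronbach_alpha(survey[attitude_items]), 2))
    print("Readiness scale alpha: ", round(cronbach_alpha(survey[readiness_items]), 2))

    # Two-factor exploratory factor analysis with an oblique (oblimin) rotation,
    # which allows the factors to correlate, analogous to the pattern matrix in Table 2.
    efa = FactorAnalyzer(n_factors=2, rotation="oblimin")
    efa.fit(survey[attitude_items + readiness_items].dropna())
    loadings = pd.DataFrame(efa.loadings_,
                            index=attitude_items + readiness_items,
                            columns=["Factor 1", "Factor 2"])
    print(loadings.round(2))

A clean two-factor pattern, with the attitude items loading on one factor and the readiness items on the other, would support scoring the two composites separately, as was done for the analyses reported above.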

Table 1
Mean Ratings for Global and Composite Variables by Treatment Condition

                          Global              Composite
Condition                 M       SD          M       SD
Intended Treatment
  Training                2.64    1.00        2.47    .65
  Control                 2.99    .97         2.66    .47
Actual Treatment
  Training                2.86    .98         2.61    .61
  Control                 2.81    1.01        2.55    .55

Table 2
Factor Pattern Matrix between the Attitude Items and Readiness to Change Items

Item     Attitude Factor    Readiness Factor
Pre1     0.73               0.06
Pre2     0.55               0.06
Pre3     0.76               0.03
Pre4     0.71               0.12
Pre5     0.73               0.11
Pre6     0.67               0.12
Pre7     0.68               0.13
Pre8     0.67               0.15
Pre9     0.79               0.00
Pre10    0.77               0.00
Pre11    0.16               -0.03
Pre12    0.37               -0.15
Pre13    0.72               0.09
Pre14    0.70               0.05
Pre16    0.52               0.08
Pre17    0.72               -0.11
Pre18    0.78               -0.05
Pre19    0.79               -0.02
Pre20    0.62               0.17
Pre21    0.61               0.13
Pre22    0.27               0.01
Pre23    0.77               0.10
Pre24    0.03               0.80
Pre25    0.14               0.75
Pre26    0.06               0.36
Pre27    0.00               0.72
Pre28    0.10               0.76

Note: The correlation between the two factors is .48.

Table 3
Interrater Agreement on Attitude and Perception of Readiness Scales

                             All Pre     All Post    Pre and Post Surveys
Group                        Surveys     Surveys     Pre         Post
Attitude Scale Items
  Whole Group                .98         .97         .97         .96
  Program Directors          .98         .98         .97         .98
  Faculty                    .97         .97         .97         .97
  Residents                  .98         .98         .98         .98
Readiness to Change Scale Items
  Whole Group                .93         .93         .94         .93
  Program Directors          .93         .96         .93         .96
  Faculty                    .94         .92         .94         .90
  Residents                  .94         .94         .95         .94

Table 4
Mean Ratings for Pre and Post Composite Scores on Readiness to Change and Attitudes Toward Resident Evaluation Methods

                          Pre                       Post
                          M      SD     n           M      SD     n
Attitude Composite        2.71   .47    128         2.70   .61    128
Readiness Composite       2.90   .40    125         2.88   .46    125

Table 5
Mean Readiness Scores on Pre and Post Surveys as a Function of Treatment Condition and Time Pressure

                                           Pre-Survey             Post-Survey
Condition            Time Pressure         M      SD     n        M      SD     n
Intended Treatment
  Training           More Pressure         2.94   .36    34       2.94   .43    34
  Training           Less Pressure         2.94   .34    28       2.94   .35    28
  Control            More Pressure         2.74   .41    38       2.72   .44    38
  Control            Less Pressure         3.06   .45    25       3.03   .52    25
Actual Treatment
  Training           More Pressure         2.83   .26    14       2.97   .38    14
  Training           Less Pressure         3.02   .31    17       2.94   .34    17
  Control            More Pressure         2.83   .42    58       2.79   .45    58
  Control            Less Pressure         2.98   .43    36       3.01   .48    36

Table 6
Mean Attitudes and Readiness Scores by Position and Time Trial

                           Pre-Survey             Post-Survey
Position                   M      SD     n        M      SD     n
Attitudes
  Program Directors        2.81   .42    20       2.78   .35    20
  Faculty                  2.76   .48    52       2.77   .79    52
  Residents                2.63   .48    56       2.62   .49    56
Perceptions of Readiness
  Program Directors        2.95   .35    20       3.00   .30    20
  Faculty                  2.96   .43    52       2.82   .52    52
  Residents                2.83   .38    53       2.92   .41    53

Table 7
Mean Attitudes on Pre and Post Surveys as a Function of Treatment Condition and Time Pressure

                                           Pre-Survey             Post-Survey
Condition            Time Pressure         M      SD     n        M      SD      n
Intended Treatment
  Training           More Pressure         2.75   .42    34       2.72   .48     34
  Training           Less Pressure         2.72   .40    28       3.01   1.78    28
  Control            More Pressure         2.56   .50    39       2.52   .50     39
  Control            Less Pressure         2.85   .53    27       2.89   .72     27
Actual Treatment
  Training           More Pressure         2.67   .38    14       2.67   .59     14
  Training           Less Pressure         2.83   .40    17       2.83   .91     17
  Control            More Pressure         2.64   .49    59       2.61   .47     59
  Control            Less Pressure         2.77   .50    38       2.82   .66     38

Table 8
Ratings of the Extent to Which Changes Have Been Made in Resident Evaluation Methods

                                           Mean Change Rating
Condition            Time Pressure         M        Std. Error
Intended Treatment
  Training           More Pressure         2.03     .16
  Training           Less Pressure         2.00     .18
  Control            More Pressure         1.87     .15
  Control            Less Pressure         1.69     .19
Actual Treatment
  Training           More Pressure         1.57     .25
  Training           Less Pressure         2.12     .23
  Control            More Pressure         2.03     .12
  Control            Less Pressure         1.72     .16

Figure 1. Illustration of pre-post nature of questionnaire administration

PRE-TEST QUESTIONNAIRES
Who: All Program Directors
When: August 2002 - January 2003
What: Attitudes toward evaluation systems, learning objectives, and readiness for change

Who: All faculty and residents
When: October 2002 - March 2003
What: Attitudes toward evaluation and learning objectives; readiness for change

POST-TEST QUESTIONNAIRES
Who: All Program Directors
When: February 2003 - April 2003
What: Attitudes toward evaluation systems, learning objectives, and any changes implemented

Who: All faculty and residents
When: April 2003 - September 2003
What: Same as in October 2002, with the addition of items regarding whether any changes took place and attitudes toward those changes

Figure 2. Effects of Interaction between Training and Time Pressure on Readiness to Change
[Figure: line graph plotting readiness to change (scale 1-4) against time pressure (Less, More) for the Control and Trained groups.]

Figure 3. Effects of Interaction between Training and Time Pressure on Changes Made to Resident Evaluation Methods
[Figure: line graph plotting changes to resident evaluations (scale 1-4) against time pressure (Less, More) for the Control and Trained groups.]

References

Anderson, M. (2000). Strategic change: Fast cycle OD. Canada: South-Western College Publishing.

Armenakis, A. A., Harris, S. G., & Mossholder, K. W. (1993). Creating organizational readiness to change. Human Relations, 46(6), 681-704.

Bovey, W., & Hede, A. (2001). Resistance to organizational change: The role of cognitive and affective processes. Leadership & Organization Development Journal, 22(8), 372-382.

Burke, W. W. (1982). Organization development: Principles and practices. Boston: Little, Brown and Company.

Burke, W. W. (1994). Organization development: A process of learning and changing (2nd ed.). Reading, MA: Addison-Wesley Publishing Company.

Cammann, C., Fichman, M., Jenkins, G. D., Jr., & Klesh, J. R. (1983). Assessing the attitudes and perceptions of organizational members. In S. E. Seashore, E. E. Lawler III, P. H. Mirvis, & C. Cammann (Eds.), Assessing organizational change: A guide to methods, measures and practices (pp. 71-138). New York: John Wiley & Sons.

Carraccio, C., Wolfsthal, S. D., Englander, R., Ferentz, K., & Martin, C. (2002). Shifting paradigms: From Flexner to competencies. Academic Medicine, 77(5), 361-367.

Church, A., Margiloff, A., & Coruzzi, C. (1995). Using surveys for change: An applied example in a pharmaceuticals organization. Leadership & Organization Development Journal, 16(4), 3-11.

Eby, C., Adams, D., & Russell, J. (2000). Perceptions of organizational readiness for change: Factors related to employees' reactions to the implementation of team-based selling. Human Relations, 53(3), 419-442.

Fox, D. G., Ellison, R. L., & Keith, K. (1988). Human resource management: An index and its relationship to readiness for change. Public Personnel Management, 17(3), 297-302.

Gallagher, C., Joseph, L., & Park, M. (2002). Implementing organizational change. In J. W. Hedge & E. D. Pulakos (Eds.), Implementing organizational interventions: Steps, processes and best practices. San Francisco: Jossey-Bass.

Hanna, D. P. (1988). Designing organizations for high performance. Reading, MA: Addison-Wesley Publishing Company.

Hanson, P. G., & Lubin, B. (1995). Answers to questions most frequently asked about organization development. Thousand Oaks, CA: Sage Publications.

Kotter, J. (1996). Leading change. Boston: Harvard Business School Press.

Latham, G. (2001). The importance of understanding and changing employee outcome expectancies for gaining commitment to an organizational goal. Personnel Psychology, 54(3), 707-716.

Lawler, E. E., III, Nadler, D., & Mirvis, P. (1983). Organizational change and the conduct of assessment research. In S. E. Seashore, E. E. Lawler III, P. H. Mirvis, & C. Cammann (Eds.), Assessing organizational change: A guide to methods, measures and practices (pp. 19-47). New York: John Wiley & Sons.

Macy, B. A. P., M. F. (1983). Evaluating attitudinal change in a longitudinal quality of work life intervention. In S. E. Seashore, E. E. Lawler III, P. H. Mirvis, & C. Cammann (Eds.), Assessing organizational change: A guide to methods, measures and practices. New York: John Wiley & Sons.

Martineau, J. W., & Preskill, H. (2002). Evaluating the impact of organization development interventions. In J. Waclawski & A. H. Church (Eds.), Organization development: A data-driven approach to organizational change (pp. 286-301). San Francisco: Jossey-Bass.

McKenna, E. (1994). Business psychology and organisational behaviour. Hillsdale: Lawrence Erlbaum Associates, Publishers.

Mirvis, P. H. (1983). Assessing the process and progress of change in organizational change programs. In S. E. Seashore, E. E. Lawler III, P. H. Mirvis, & C. Cammann (Eds.), Assessing organizational change: A guide to methods, measures and practices (pp. 417-451). New York: John Wiley & Sons.

Pond, S. B., Armenakis, A., & Green, S. (1984). The importance of employee expectations in organizational diagnosis. Journal of Applied Behavioral Science, 20(2), 167-180.

Rashford, N. S., & Coughlan, D. (1994). The dynamics of organizational levels: A change framework for managers and consultants. Reading, MA: Addison-Wesley Publishing Company.

Robbins, S. (1996). Organizational behavior: Concepts, controversies, applications (7th ed.). Englewood Cliffs: Prentice Hall.

Rotchford, N. (2002). Performance management. In J. W. Hedge & E. D. Pulakos (Eds.), Implementing organizational interventions. San Francisco: Jossey-Bass.

Rothwell, W. J., Sullivan, R., & McLean, G. (1995). Practicing organization development: A guide for consultants. San Francisco: Jossey-Bass.

Smither, J. W., Wohlers, A. J., & London, M. (1995). A field study of reactions to normative versus individualized upward feedback. Group & Organization Management, 20(1), 61-80.

Spiker, B. L., E. (1995). Making change work. Communication World, 12(1), 23-26.

Strebel, P. (1994). Choosing the right change path. California Management Review, 36(2), 29-52.

Vance, R. J., Brooks, S. M., Tesluk, P. E., & Howard, M. J. (1999). Longitudinal and multilevel influences on cynical climates and resistance to change. Paper presented at the annual conference of the Society for Industrial and Organizational Psychology, Atlanta.

Waclawski, J., & Church, A. H. (2002). Introduction and overview of organization development as a data-driven approach for organizational change. In J. Waclawski & A. H. Church (Eds.), Organization development: A data-driven approach to organizational change (pp. 3-26). San Francisco: Jossey-Bass.

Weber, P., & Weber, J. (2001). Changes in employee perceptions during organizational change. Leadership & Organization Development Journal, 22(6), 291-300.

Woodman, R. W. (1989). Evaluation research on organizational change: Arguments for a combined paradigm approach. In R. W. Woodman & W. A. Pasmore (Eds.), Research in organization change and development (Vol. 3, pp. 161-180). JAI Press.

Worley, T. G., & Cummings, C. G. (1997). Organization development and change (6th ed.). Cincinnati: South-Western College Publishing.

Appendices

Appendix A: Assignments to Treatment Condition and Time Pressure

Program             Subspecialty                 Intended Treatment   Actual Treatment   Time Pressure
Anesthesiology      -                            Experimental         Experimental       More
Anesthesiology      Pain Management              Control              Control            More
Anesthesiology      Critical Care                Control              Control            More
Family Medicine     -                            Experimental         Experimental       More
Internal Medicine   -                            Control              Control            Less
Internal Medicine   Allergy & Immunology         Experimental         Experimental       Less
Internal Medicine   Cardiovascular               Experimental         Experimental       More
Internal Medicine   Dermatology                  Experimental         Experimental       Less
Internal Medicine   Endo Metabolism              Control              Control            Less
Internal Medicine   Geriatric                    Experimental         Experimental       Less
Internal Medicine   Hematology/Oncology          Experimental         Experimental       Less
Internal Medicine   Infectious Disease           Control              Control            Less
Internal Medicine   Nephrology                   Experimental         Experimental       More
Internal Medicine   Occupational Medicine        Experimental         Experimental       Less
Internal Medicine   Pulmonary & Critical Care    Experimental         Control            More
Internal Medicine   Rheumatology                 Control              Control            More
Internal Medicine   Gastroenterology             Experimental         Experimental       Less
Internal Medicine   Pediatrics                   Control              Control            Less
Neurology           -                            Experimental         Control            More
Neurosurgery        -                            Control              Control            Less
OB/GYN              -                            Control              Control            More
OB/GYN              Oncology                     Control              Control            Less
Ophthalmology       -                            Experimental         Experimental       Less

Appendix A (Continued)

Program             Subspecialty                 Intended Treatment   Actual Treatment   Time Pressure
Otolaryngology      -                            Experimental         Experimental       More
Otolaryngology      Head & Neck                  Control              Control            More
Pathology           -                            Control              Control            More
Pathology           Cytopathology                Experimental         Experimental       More
Pathology           Forensic                     Control              Control            More
Pathology           Pediatric                    Control              Control            Less
Pediatrics          -                            Control              Control            Less
Pediatrics          Neonatal-Perinatal           Experimental         Control            Less
Pediatrics          Allergy & Immunology         Control              Experimental       Less
Pediatrics          Allergy & Immunology Lab     Experimental         Experimental       Less
Physical Medicine   -                            Experimental         Experimental       More
Physical Medicine   Spinal Cord                  Control              Control            More
Psychiatry          -                            Experimental         Control            Less
Psychiatry          Addiction                    Experimental         Experimental       More
Psychiatry          Childhood & Adolescence      Experimental         Control            More
Psychiatry          Geriatric Medicine           Control              Control            Less
Psychiatry          Psychosocial                 Control              Control            Less
Radiology           -                            Control              Control            More
Radiology           Vascular & Interventional    Control              Control            More
Surgery             -                            Control              Control            More
Surgery             Hand                         Control              Control            Less
Surgery             Urology                      Control              Control            More
Surgery             Vascular                     Experimental         Control            More

Appendix B: Agenda for Modules

Module 1 - Writing Learning Objectives
9:00-9:05am    Introduction
9:05-9:10am    Review goals and charge from workshop on June 4, 2002; highlight plan for this session
9:10-9:20am    Discussion of pros and cons of using learning objectives; changes to accreditation requirements
9:20-9:40am    Present material on Writing Instructional Objectives (Mager, 1997): importance of objectives, qualities of useful objectives, examples
9:40-9:55am    Active workshop: write learning objectives; exchange these in small groups and critique
9:55-10:00am   Wrap-up and preview of next module

Module 2 - Linking Objectives to Evaluation
9:00-9:05am    Introduction and brief review of last module; highlight plan for this module
9:05-9:25am    How objectives inform evaluation; critical components of objectives; the role of goals and feedback
9:25-9:55am    Active workshop: write and critique learning objectives; identify how these objectives can change evaluation
9:55-10:00am   Wrap-up and preview of next module

Module 3 - Implementation
9:00-9:05am    Introduction; brief review of last module; highlight plan for this module
9:05-9:30am    Ideas concerning how to use learning objectives; briefly review criteria for learning objectives; present list of possible evaluation tools and how to link these with measurable objectives; use of behaviorally based self-assessments; discussion and ideas from participants
9:30-9:55am    Factors affecting implementation of changes; discussion among participants; resistance to change; suggestions for how to diffuse resistance
9:55-10:00am   Wrap-up, review and evaluation

Appendix C: Workshop Evaluation

USF COLLEGE OF MEDICINE - WORKSHOP EVALUATION

If you attended any one of the three training modules concerning learning objectives, please fill out the following questions, which will provide valuable information. Your input is critical to improving upon subsequent training modules that may be presented to Program Directors at the USF College of Medicine. Please fill out this form and send it back to Dr. Fabri's office before December 16, 2002. Thank you for your time and consideration.

(Response options for items 1-6: Strongly disagree, Disagree, Agree, Strongly agree)

1. These workshops were an effective use of my time
2. The information presented was useful to me in preparing instructional objectives for resident rotations
3. The information presented was useful in making improvements to resident evaluations for each rotation
4. Overall, I was satisfied with the content of these workshops
5. I would recommend these workshops to other program directors
6. I intend to use the information provided to make changes to resident evaluation systems

The following two items have different scales than those above; please read carefully and indicate your response.

(Response options for items 7-8: Not at all, Minimal Amount, Average Amount, Great Amount)

7. To what extent did you share information or materials with colleagues (other program directors) who did NOT attend any training modules?
8. To what extent did you provide assistance in writing learning objectives to colleagues who did NOT attend any training modules?

If you have shared information with others or provided assistance, please indicate to which programs they belong:

Appendix D: Survey of Non-Attendees

USF COLLEGE OF MEDICINE - WORKSHOP EVALUATION

If you did not attend any of the workshops about constructing learning objectives and resident evaluations, please indicate below the reason. Providing this information will give the researcher and the USF College of Medicine valuable information regarding the development of resources or tools in the future. Please fill out this form and send it back to Dr. Fabri's office before December 16, 2002. Thank you for your time and consideration.

"I was unable to attend the workshops due to the following reason(s)": (If more than one reason applies, please order them according to the most influential reason, with 1 being the most influential.)

- The scheduled days and times were incompatible with my schedule.
- I planned to attend one or more workshops but other obligations prevented me from attending at the time.
- I did not believe the workshops would provide information or tools useful to me.
- I do not plan to make changes to the current resident evaluation system and therefore did not feel my attendance at the workshops was necessary.

In the space below, please tell us what could have been done differently to enable/encourage you to attend the workshops:

Appendix E: Learning Objectives Rating Form

LEARNING OBJECTIVES RATING FORM

Program/Subspecialty: ______________    Program Director: ______________    Rater: ______________

Please rate the extent to which the learning objectives for this particular program meet the following criteria:
A. Measurable
B. Reasonable to Expect
C. Understandable
D. Related to the subsequent ability of an individual to practice medicine

A. Measurable
All objectives should be measurable so that instructors, students and any other interested parties are able to determine if, when and how those objectives have been met and/or exceeded. In order to be measurable, each learning objective should contain all three of the following components; therefore, please rate the objectives on the following scales.

1. Performance: An objective should clearly state the observable behavior that a learner is expected to be able to perform or to produce in order to be considered competent (Mager, 1997).
Please circle the rating (1-4) that most closely represents your assessment of this group of learning objectives as it relates to the above criterion.

Rating   Category           Definition
1        Rarely             Less than 25% of objectives contain performance
2        Sometimes          25%-49% of objectives contain performance
3        Often              50%-74% of objectives contain performance
4        Most of the time   More than 75% of objectives contain performance

2. Conditions: An objective should describe the conditions under which performance is expected (Mager, 1997).
Please circle the rating (1-4) that most closely represents your assessment of this group of learning objectives as it relates to the above criterion.

Rating   Category           Definition
1        Rarely             Less than 25% of objectives contain conditions
2        Sometimes          25%-49% of objectives contain conditions
3        Often              50%-74% of objectives contain conditions
4        Most of the time   More than 75% of objectives contain conditions

3. Criterion: An objective should describe the criteria of acceptable performance; it states specifically how well someone should be able to perform to be considered competent (Mager, 1997).

Appendix E (Continued)

Please circle the rating (1-4) that most closely represents your assessment of this group of learning objectives as it relates to the above criterion.

Rating   Category           Definition
1        Rarely             Less than 25% of objectives contain criteria
2        Sometimes          25%-49% of objectives contain criteria
3        Often              50%-74% of objectives contain criteria
4        Most of the time   More than 75% of objectives contain criteria

B. Reasonable to Expect: Describes what is fair to expect a resident of average ability to be able to do and what will be considered acceptable performance within a specified period of time (the duration of the rotation).
Please circle the rating (1-4) that most closely represents your assessment of this group of learning objectives as it relates to the above criterion.

Rating   Category           Definition
1        Rarely             Less than 25% of objectives are reasonable to expect
2        Sometimes          25%-49% of objectives are reasonable to expect
3        Often              50%-74% of objectives are reasonable to expect
4        Most of the time   75%+ of objectives are reasonable to expect

C. Understandable: Residents and faculty can easily comprehend, at their present level of knowledge and skill, the performance needed to achieve the objective. Further, the objective is clear enough to elicit the same meaning among all residents and faculty.
Please circle the rating (1-4) that most closely represents your assessment of this group of learning objectives as it relates to the above criterion.

Rating   Category           Definition
1        Rarely             Less than 25% of objectives are understandable
2        Sometimes          25%-49% of objectives are understandable
3        Often              50%-74% of objectives are understandable
4        Most of the time   More than 75% of objectives are understandable

D. Related to Subsequent Ability: The learning objectives are directly related to the subsequent ability of each individual to practice medicine.
Please circle the rating (1-4) that most closely represents your assessment of this group of learning objectives as it relates to the above criterion.

Rating   Category           Definition
1        Rarely             Less than 25% of objectives are related
2        Sometimes          25%-49% of objectives are related
3        Often              50%-74% of objectives are related
4        Most of the time   More than 75% of objectives are related

Appendix E (Continued)

GLOBAL RATING REGARDING USEFULNESS FOR DEVELOPING EVALUATION SYSTEMS

Now that you have evaluated the group of learning objectives on the above criteria, please consider these in making an overall rating of the entire set of objectives submitted by each program or subspecialty. Please use the following scale to rate how useful you believe the set of objectives is in helping to develop appropriate and customized evaluations for resident performance. Please circle the number that most closely corresponds to your overall assessment.

Rating   Category   Definition
1        Poor       Not clear how evaluation(s) can be developed from this set of objectives; would take substantial work and revision to translate into operational evaluations
2        Fair       Provides the beginning of a foundation for developing evaluations; can be translated into operational evaluations with major revisions
3        Average    Provides a sufficient foundation for developing evaluations; can be translated into operational evaluations with a fair amount of work
4        Good       Provides a good foundation for developing evaluations; learning objectives need a little revising to be translated into operational evaluations
5        Superior   Provides a complete foundation for developing evaluations; can be translated into operational evaluations with minimal work
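As a small, hedged illustration of how the thresholds on this form might be scored, the sketch below maps the percentage of a program's objectives that satisfy a criterion onto the 1-4 scale defined above and then averages the criterion ratings into a composite. Treating the composite as a simple mean of criterion ratings, the exact handling of the 75% boundary, and the criterion labels used in the example are assumptions for illustration, not scoring rules documented in this study.

    # Illustrative scoring sketch; the composite-as-mean rule is an assumption.

    def criterion_rating(percent_meeting: float) -> int:
        """Map the percentage of objectives meeting a criterion onto the form's 1-4 scale."""
        if percent_meeting < 25:
            return 1  # Rarely: less than 25% of objectives
        if percent_meeting < 50:
            return 2  # Sometimes: 25%-49% of objectives
        if percent_meeting < 75:
            return 3  # Often: 50%-74% of objectives
        return 4      # Most of the time: roughly 75% or more of objectives

    def composite_rating(criterion_percentages: dict) -> float:
        """Assumed composite: the mean of the individual criterion ratings."""
        ratings = [criterion_rating(p) for p in criterion_percentages.values()]
        return sum(ratings) / len(ratings)

    # Hypothetical program: percentage of objectives meeting each criterion.
    example = {"performance": 80, "conditions": 40, "criterion": 30,
               "reasonable": 90, "understandable": 70, "related": 85}
    print(round(composite_rating(example), 2))  # 3.17 for this hypothetical program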

Appendix F: Resident Evaluation Pre-Survey

USF COLLEGE OF MEDICINE - RESIDENT EVALUATION SURVEY

Consider the method(s) of resident evaluations used during your last rotation, including how residents were graded and the procedures in place that provided evaluations of resident knowledge and skills. Please mark an "x" in the box that most closely matches your attitude toward each statement. Please mark an answer for every item. All responses will be kept confidential and no individuals will be identified.

(Response options for each item: Strongly disagree, Disagree, Agree, Strongly agree)

The current method of resident evaluation:
1. Is objective
2. Is fair to people from different demographic backgrounds
3. Provides information helpful to faculty in judging resident proficiency
4. Provides information helpful to residents in judging their own proficiency
5. Provides feedback helpful to residents in how to develop their own skills
6. Provides information helpful to faculty in developing instructional materials
7. Provides information helpful to faculty in developing instructional objectives
8. Are clearly linked to instructional objectives
9. Documents resident skills in a way that provides data about actual resident accomplishments
10. Documents resident skills in a way that meets professional standards
11. Are based primarily on the opinions of faculty

Appendix F (Continued)

12. Are largely a matter of reputation rather than actual skill
13. Are clearly linked to core competencies required of residents
14. Helps provide a portfolio of resident accomplishments based on actions rather than opinions
15. Provides evidence of who will be a successful practitioner
16. Properly distinguishes more and less proficient residents
17. Provides consistent, reliable evaluations
18. Reflects the critical components addressed during the rotation
19. Acceptable standards of performance are clearly communicated to residents at the beginning of the rotation
20. Residents are evaluated on proficiencies that are reasonable to expect based on the training given in the rotation
21. Current learning objectives for evaluation are hard to understand
22. Overall, I am satisfied with the current method of providing resident evaluations
23. The USF College of Medicine is willing to act on opportunities to improve training for residents
24. The USF College of Medicine adapts successfully to changes in the training needs of residents
25. The USF College of Medicine is reluctant to change policies and procedures

Appendix F (Continued)

26. The USF College of Medicine is capable of responding to changes in training dictated by medical advancements
27. The USF College of Medicine is able to implement changes that result in positive outcomes

Appendix G: Resident Evaluation Post-Survey Items

This appendix displays the items that appeared on the posttest in addition to the survey items in Appendix F.

Please consider any changes that have been made in resident evaluations over the past 6 months:

28. Changes made in the evaluation of residents on this rotation have been
    (None, Slight, Moderate, Extensive)

If no changes have happened, please disregard the following items.

(Response options for items 29-31: Strongly disagree, Disagree, Agree, Strongly agree)

29. Changes made in resident evaluations on this rotation have improved the objectivity of the assessment
30. Changes made in resident evaluations on this rotation have improved the quality of the feedback provided to residents
31. Changes made in resident evaluations on this rotation have improved the ability of residents to use this information in developing appropriate skills

72 Appendix H: Correlation between Items on Pre-Survey PRE1 PRE2 PRE3 PRE4 PRE5 PRE6 PRE7 PRE8 PRE9 PRE10 PRE 11R PRE 12R PRE 13 PRE14 PRE16 PRE17 PRE1 1 0.57 0.6 0.57 0.55 0.48 0.53 0.57 0.59 0.59 0.165 0.246 0.59 0.556 0.372 0.469 PRE2 0.57 1 0.53 0.47 0.49 0.35 0.32 0.34 0.5 0.49 0.004 0.236 0.428 0.36 0.315 0.384 PRE3 0.6 0.53 1 0.72 0.63 0.57 0.6 0.55 0.58 0.654 0.065 0.286 0.624 0.481 0.371 0.522 PRE4 0.57 0.47 0.72 1 0.77 0.55 0.56 0.55 0.63 0.587 0.08 0.259 0.591 0.555 0.436 0.5 PRE5 0.55 0.49 0.63 0.77 1 0.62 0.59 0.58 0.66 0.605 0.092 0.26 0.568 0.57 0.419 0.484 PRE6 0.48 0.35 0.57 0.55 0.62 1 0.88 0.67 0.56 0.492 0.112 0.032 0.533 0.516 0.323 0.389 PRE7 0.53 0.32 0.6 0.56 0.59 0.88 1 0.71 0.57 0.508 0.165 0.04 0.544 0.52 0.341 0.402 PRE8 0.57 0.34 0.55 0.55 0.58 0.67 0.71 1 0.57 0.535 0.041 0.032 0.664 0.545 0.399 0.437 PRE9 0.59 0.5 0.58 0.63 0.66 0.56 0.57 0.57 1 0.675 0.102 0.25 0.572 0.627 0.476 0.548 PRE10 0.59 0.49 0.65 0.59 0.61 0.49 0.51 0.54 0.68 1 0.061 0.208 0.626 0.514 0.441 0.51 PRE11R 0.17 0 0.07 0.08 0.09 0.11 0.17 0.04 0.1 0.061 1 0.164 0.062 0.074 0.202 0.194 PRE12R 0.25 0.24 0.29 0.26 0.26 0.03 0.04 0.03 0.25 0.208 0.164 1 0.197 0.262 0.249 0.307 PRE13 0.59 0.43 0.62 0.59 0.57 0.53 0.54 0.66 0.57 0.626 0.062 0.197 1 0.591 0.44 0.445 PRE14 0.56 0.36 0.48 0.56 0.57 0.52 0.52 0.55 0.63 0.514 0.074 0.262 0.591 1 0.373 0.439 PRE16 0.37 0.32 0.37 0.44 0.42 0.32 0.34 0.4 0.48 0.441 0.202 0.249 0.44 0.373 1 0.61 PRE17 0.47 0.38 0.52 0.5 0.48 0.39 0.4 0.44 0.55 0.51 0.194 0.307 0.445 0.439 0.61 1 PRE18 0.62 0.43 0.55 0.51 0.52 0.51 0.52 0.53 0.57 0.539 0.126 0.31 0.533 0.588 0.526 0.617 PRE19 0.61 0.41 0.56 0.5 0.56 0.56 0.57 0.6 0.56 0.586 0.167 0.237 0.635 0.577 0.461 0.578 PRE20 0.57 0.33 0.44 0.46 0.48 0.53 0.54 0.58 0.57 0.492 0.184 0.146 0.531 0.591 0.351 0.468 PRE21 0.53 0.4 0.51 0.48 0.51 0.46 0.47 0.48 0.52 0.592 0.144 0.186 0.509 0.44 0.399 0.462 PRE23 0.58 0.45 0.64 0.64 0.65 0.61 0.62 0.6 0.65 0.6 0.079 0.235 0.623 0.624 0.446 0.531 PRE24 0.31 0.26 0.34 0.37 0.41 0.35 0.36 0.38 0.32 0.313 0.128 0.052 0.369 0.308 0.353 0.242 PRE25 0.41 0.33 0.38 0.45 0.46 0.4 0.4 0.42 0.41 0.369 0.019 0.092 0.463 0.432 0.3 0.281 PRE27 0.32 0.23 0.29 0.38 0.3 0.32 0.38 0.37 0.29 0.27 0.013 -0.012 0.291 0.305 0.266 0.178 PRE28 0.42 0.33 0.4 0.42 0.42 0.4 0.4 0.43 0.35 0.381 0.042 0.052 0.394 0.337 0.313 0.247 PRE26R 0.21 0.2 0.24 0.14 0.19 0.15 0.13 0.18 0.16 0.171 -0.048 0.302 0.189 0.159 0.168 0.106 PRE22R 0.17 0.18 0.21 0.15 0.18 0.18 0.17 0.18 0.24 0.181 0.064 0.423 0.211 0.258 0.1 0.111

73 Appendix H (Continued) PRE18 PRE19 PRE20 PRE21 PRE23 PRE24 PRE25 PRE27 PRE28 PRE26R PRE22R PRE1 0.62 0.61 0.57 0.53 0.58 0.31 0.41 0.32 0.42 0.213 0.165 PRE2 0.43 0.41 0.33 0.4 0.45 0.26 0.33 0.23 0.33 0.196 0.182 PRE3 0.55 0.56 0.44 0.51 0.64 0.34 0.38 0.29 0.4 0.242 0.21 PRE4 0.51 0.5 0.46 0.48 0.64 0.37 0.45 0.38 0.42 0.143 0.153 PRE5 0.52 0.56 0.48 0.51 0.65 0.41 0.46 0.3 0.42 0.193 0.177 PRE6 0.51 0.56 0.53 0.46 0.61 0.35 0.4 0.32 0.4 0.153 0.184 PRE7 0.52 0.57 0.54 0.47 0.62 0.36 0.4 0.38 0.4 0.132 0.165 PRE8 0.53 0.6 0.58 0.48 0.6 0.38 0.42 0.37 0.43 0.183 0.18 PRE9 0.57 0.56 0.57 0.52 0.65 0.32 0.41 0.29 0.35 0.161 0.244 PRE10 0.54 0.59 0.49 0.59 0.6 0.31 0.37 0.27 0.38 0.171 0.181 PRE11R 0.13 0.17 0.18 0.14 0.08 0.13 0.02 0.01 0.04 -0.048 0.064 PRE12R 0.31 0.24 0.15 0.19 0.24 0.05 0.09 -0.01 0.05 0.302 0.423 PRE13 0.53 0.64 0.53 0.51 0.62 0.37 0.46 0.29 0.39 0.189 0.211 PRE14 0.59 0.58 0.59 0.44 0.62 0.31 0.43 0.31 0.34 0.159 0.258 PRE16 0.53 0.46 0.35 0.4 0.45 0.35 0.3 0.27 0.31 0.168 0.1 PRE17 0.62 0.58 0.47 0.46 0.53 0.24 0.28 0.18 0.25 0.106 0.111 PRE18 1 0.69 0.54 0.49 0.63 0.3 0.35 0.26 0.34 0.161 0.222 PRE19 0.69 1 0.63 0.57 0.62 0.35 0.41 0.2 0.33 0.174 0.246 PRE20 0.54 0.63 1 0.55 0.59 0.38 0.49 0.29 0.42 0.226 0.251 PRE21 0.49 0.57 0.55 1 0.55 0.39 0.4 0.23 0.43 0.161 0.259 PRE23 0.63 0.62 0.59 0.55 1 0.39 0.47 0.32 0.44 0.223 0.279 PRE24 0.3 0.35 0.38 0.39 0.39 1 0.76 0.54 0.62 0.334 0.157 PRE25 0.35 0.41 0.49 0.4 0.47 0.76 1 0.51 0.62 0.382 0.135 PRE27 0.26 0.2 0.29 0.23 0.32 0.54 0.51 1 0.73 0.229 0.013 PRE28 0.34 0.33 0.42 0.43 0.44 0.62 0.62 0.73 1 0.289 0.16 PRE26R 0.16 0.17 0.23 0.16 0.22 0.33 0.38 0.23 0.29 1 0.33 PRE22R 0.22 0.25 0.25 0.26 0.28 0.16 0.14 0.01 0.16 0.33 1

Appendix I: Mean Global and Composite Ratings by Program (for those available)

Program             Subspecialty                 Mean Global Rating   Mean Composite Rating
Anesthesiology      -                            3.6                  3.09
Family Medicine     -                            3.6                  2.71
Internal Medicine   -                            2.2                  2.29
Internal Medicine   Allergy & Immunology         2.5                  2.29
Internal Medicine   Dermatology                  1.5                  1.79
Internal Medicine   Endo Metabolism              2                    2.14
Internal Medicine   Hematology/Oncology          1.5                  1.86
Internal Medicine   Infectious Disease           3.5                  2.93
Internal Medicine   Nephrology                   3.5                  2.64
Internal Medicine   Occupational Medicine        3                    2.86
Internal Medicine   Pulmonary & Critical Care    1                    1.21
Internal Medicine   Rheumatology                 2                    2.29
Internal Medicine   Gastroenterology             3.5                  3.07
Internal Medicine   Pediatrics                   4.4                  3.49
Neurology           -                            2.6                  2.63
Neurosurgery        -                            3.6                  2.89
OB/GYN              -                            3.6                  3.09
Ophthalmology       -                            2.4                  2.57
Otolaryngology      -                            2.6                  2.46
Pathology           -                            3.4                  2.91
Pathology           Cytopathology                1                    1.21
Pathology           Forensic                     2                    2.14
Pathology           Pediatric                    2.5                  2.64
Pediatrics          -                            4.25                 3.46
Pediatrics          Neonatal-Perinatal           2                    2.07
Pediatrics          Allergy & Immunology         3.5                  2.86
Pediatrics          Allergy & Immunology Lab     2                    2.29
Physical Medicine   -                            2.6                  2.71
Physical Medicine   Spinal Cord                  4.5                  3.14
Psychiatry          -                            3.2                  2.80
Psychiatry          Addiction                    4.5                  3.71
Psychiatry          Childhood & Adolescence      4.5                  2.69
Psychiatry          Geriatric Medicine           2                    2.07
Psychiatry          Psychosocial                 3                    2.71
Radiology           -                            4                    3.40
Radiology           Vascular & Interventional    1.5                  1.93
Surgery             -                            3                    2.94

Appendix J: Means and Standard Deviations for Pre-Survey Items

Item     Mean    Standard Deviation    n
PRE1     2.86    0.65                  128
PRE2     3.23    0.64                  128
PRE3     2.91    0.62                  127
PRE4     2.80    0.66                  127
PRE5     2.79    0.73                  127
PRE6     2.52    0.76                  126
PRE7     2.54    0.73                  126
PRE8     2.62    0.73                  127
PRE9     2.69    0.74                  128
PRE10    2.95    0.63                  128
PRE11    2.13    0.78                  126
PRE12    2.62    0.81                  126
PRE13    2.84    0.66                  128
PRE14    2.62    0.70                  126
PRE16    2.57    0.68                  128
PRE17    2.80    0.60                  126
PRE18    2.62    0.66                  126
PRE19    2.73    0.65                  126
PRE20    2.49    0.72                  126
PRE21    3.02    0.52                  127
PRE22    2.69    0.69                  126
PRE23    2.67    0.74                  127
PRE24    2.98    0.55                  126
PRE25    2.84    0.62                  121
PRE26    2.62    0.68                  122
PRE27    3.07    0.40                  122
PRE28    3.02    0.42                  123

Appendix K: Means and Standard Deviations for Post-Survey Items

Item       Mean    Standard Deviation    n
POST1      2.70    0.70                  128
POST2      3.02    0.70                  128
POST3      2.78    0.69                  127
POST4      2.66    0.70                  127
POST5      2.62    0.71                  127
POST6      2.43    0.73                  126
POST7      2.58    0.67                  126
POST8      2.57    0.74                  127
POST9      2.60    0.74                  128
POST10     2.94    0.59                  128
POST11R    1.97    0.70                  126
POST12R    2.79    0.74                  126
POST13     2.80    0.69                  128
POST14     2.48    0.71                  126
POST16     2.43    0.78                  128
POST17     2.79    0.60                  126
POST18     2.48    0.71                  126
POST19     2.67    0.69                  126
POST20     2.50    0.77                  126
POST21     3.02    0.54                  127
POST22     2.81    0.60                  126
POST23     2.61    0.72                  127
POST24     2.94    0.66                  126
POST25     2.79    0.64                  126
POST26     2.66    0.69                  126
POST27     2.99    0.45                  127
POST28     2.99    0.47                  125
POST29     1.90    0.95                  125
POST30     3.14    1.51                  118
POST31     3.08    1.49                  118
POST32     3.08    1.51                  118

About the Author

Kimberly Richards received a Bachelor's Degree in Management and Psychology from Eckerd College in 1995, graduating with High Honors. She earned her Master's Degree in Industrial/Organizational Psychology from the University of South Florida in 2000. While in the program at the University of South Florida, Mrs. Richards was very active in the Institute for Decision-Making, Cybernetics and Human-Computer Interaction, participating in several research efforts. She also led several efforts to enhance professional development, such as the IOOB Graduate Student Conference. Mrs. Richards has spent the last three years working as an internal consultant for large Fortune 100 retailers, designing and implementing Succession Planning, Performance Management, and Organization Development and Change initiatives.